Planet Russell

Planet Debian: Thorsten Alteholz: My Debian Activities in April 2025

Debian LTS

This was my one-hundred-thirtieth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4145-1] expat security update to fix one CVE related to a crash within XML_ResumeParser(), because XML_StopParser() can stop/suspend an unstarted parser.
  • [DLA 4146-1] libxml2 security update to fix two CVEs related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Bookworm.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Unstable.

This month I did a week of FD duties. I also started to work on libxmltok. Adrian suggested also checking the CVEs that might affect the embedded version of expat. Unfortunately there are quite a few CVEs to check, and the month ended before the upload was ready. I hope to finish this in May. Last but not least I continued to work on the second batch of fixes for suricata CVEs.

Debian ELTS

This month was the eighty-first ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1411-1] expat security update to fix one CVE in Stretch and Buster related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [ELA-1412-1] libxml2 security update to fix two CVEs in Jessie, Stretch and Buster related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.

This month I did a week of FD duties.
I also started to work on libxmltok. Normally I work on machines running Bullseye or Bookworm. As the Stretch version of libxmltok needs debhelper version 5, which is no longer supported on Bullseye, I had to create a separate Buster VM. Yes, Stretch is becoming old. As with LTS, I also need to check the CVEs that might affect the embedded version of expat.
Last but not least I started to work on the second batch of fixes for suricata CVEs.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

misc

This month I uploaded new packages or new upstream or bugfix versions of:

bottlerocket was my first upload via debusine. It is a really cool tool and I can only recommend that everybody give it at least a try.
I finally filed an RM bug for siggen. I don’t think that fixing all the gcc-14 issues is really worth the hassle.

FTP master

This month I accepted 307 and rejected 55 packages. The overall number of packages that got accepted was 308.

Worse Than Failure: CodeSOD: Leap to the Past

Early in my career, I had the misfortune of doing a lot of Crystal Reports work. Crystal Reports is another one of those tools that lets non-developer, non-database savvy folks craft reports. Which, as so often happens, means that the users dig themselves incredible holes and need professional help to get back out, because at the end of the day, when the root problem is actually complicated, all the helpful GUI tools in the world can't solve it for you.

Michael was in a similar position as I was, but for Michael, there was a five-alarm fire. It was the end of the month, and a bunch of monthly sales reports needed to be calculated. One of the big things management expected to see was a year-over-year delta on sales, and they got real cranky if the line didn't go up. If they couldn't even see the line, they went into a full-on panic and assumed the sales team was floundering and the company was on the verge of collapse.

Unfortunately, the report was spitting out an error: "A day number must be between 1 and the number of days in the month."

Michael dug in, and found this "delight" inside of a function called one_year_ago:


Local StringVar yearStr  := Left({?ReportToDate}, 4);
Local StringVar monthStr := Mid({?ReportToDate}, 5, 2); 
Local StringVar dayStr   := Mid({?ReportToDate}, 7, 2);
Local StringVar hourStr  := Mid({?ReportToDate}, 9, 2);
Local StringVar minStr   := Mid({?ReportToDate}, 11, 2);
Local StringVar secStr   := Mid({?ReportToDate}, 13, 2);
Local NumberVar LastYear;

LastYear := ToNumber(YearStr) - 1;
YearStr := Replace (toText(LastYear),'.00' , '' );
YearStr := Replace (YearStr,',' , '' );

//DateTime(year, month, day, hour, min, sec);
//Year + Month + Day + Hour + min + sec;  // string value
DateTime(ToNumber(YearStr), ToNumber(MonthStr), ToNumber(dayStr), ToNumber(HourStr), ToNumber(MinStr),ToNumber(SecStr) );

We've all seen string munging in date handling before. That's not surprising. But what's notable about this one is the day on which it started failing. As stated, it was at the end of the month. But which month? February. Specifically, February 2024, a leap year. Since they do nothing to adjust the dayStr when constructing the date, they were attempting to construct a date for 29-FEB-2023, which is not a valid date.

Michael writes:

Yes, it's Crystal Reports, but surprisingly not having date manipulation functions isn't amongst its many, many flaws. It's something I did in a past life, isn't it??

The fix was easy enough: rewrite the function to actually use date handling. This made a simpler, basically one-line function, using Crystal's built-in functions. That fixed this particular date handling bug, but there were plenty more places where this kind of hand-grown string munging happened, and plenty more opportunities for the report to fail.
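
For illustration only (Michael's actual fix was a Crystal formula, which wasn't included in the submission), here's a minimal sketch in Python of the same idea: do the computation with real date arithmetic, and clamp 29 February to 28 February when the previous year isn't a leap year.

# Sketch of "one year ago" using real date handling instead of string munging.
# Names here are illustrative, not taken from the report.
import calendar
from datetime import datetime

def one_year_ago(report_to_date: datetime) -> datetime:
    year = report_to_date.year - 1
    # Clamp the day so that 2024-02-29 maps to 2023-02-28 instead of blowing up.
    last_day = calendar.monthrange(year, report_to_date.month)[1]
    return report_to_date.replace(year=year, day=min(report_to_date.day, last_day))

print(one_year_ago(datetime(2024, 2, 29, 13, 45, 7)))  # 2023-02-28 13:45:07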

365 Tomorrows: Tsoukal’s Imperative

Author: Hillary Lyon The tall lean figure stood before the honeycombed wall, searching the triangular nooks until he located the scrolls for engineering marvels. Tsoukal pulled out the uppermost scroll and unrolled it on the polished stone slab behind him. He placed a slim rectangular weight on each end of the scroll to hold it […]

The post Tsoukal’s Imperative appeared first on 365tomorrows.

Krebs on Security: Pakistani Firm Shipped Fentanyl Analogs, Scams to US

A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.

A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to legitimate costs of over USD $1,500, and claimed they could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.

A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 

Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.

A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.

360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com —  alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.

Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and a myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.

The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.

A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others.  The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office that Ms. Riley served her lawsuit against. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.

A “mind map” illustrating the connections between and among entities referenced in this story.

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”

Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

Long Now: Long Science in the Nevada Bristlecone Preserve

It was at the invitation of The Long Now Foundation that I visited Mount Washington for the first time as a graduate student. Camping out the first night on the mountain with my kind and curious Long Now friends, I could sense that the experience was potentially transformative — that this place, and this community, had together created a kind of magic. The next morning, we packed up our caravan of cars and made our way up the mountain. I tracked the change in elevation out the car window by observing how the landscape changed from sagebrush to pinyon and juniper trees, to manzanita and mixed conifer, and finally to the ancient bristlecone pines. As we rose, the view of the expansive Great Basin landscape grew below us. It was then that I knew I had to be a part of the community stewarding this incredibly meaningful place. 

I’d entered graduate school following an earlier life working on long-term environmental monitoring networks across the U.S. and Latin America, and was attracted to the mountain’s established research network. My early experiences and relationships with other researchers had planted the seeds of appreciation for research which takes the long view of the world around us. Now, as a research professor at the Desert Research Institute (DRI) and a Long Now Research Fellow, I’m helping to launch a new scientific legacy in the Nevada Bristlecone Preserve. Of course, no scientific legacy is entirely new. My work compiling the first decade of observational climate data builds on decades of research in order to help carry it into the future — one link in a long line of scientists who have made my work possible. Science works much like an ecosystem, with different disciplines interweaving to help tell the story of the whole. Each project and scientist builds on the successes of the past. 

Unfortunately, the realities of short-term funding don’t often align with a long-term vision for research. Scientists hoping to answer big questions often find it challenging to identify funding that will support a project beyond two to three years, making it difficult to sustain the long-term research that helps illuminate changes in landscapes over time. This reality highlights the value of partnering with The Long Now Foundation. Their support is helping me carry valuable research into the future to understand how rare ecosystems in one of the least-monitored regions in the country are adapting to a warming world. 

The Nevada Bristlecone Preserve stretches across the high reaches of Mount Washington on the far eastern edge of Nevada. Growing where nearly nothing else can, the bristlecone pines (Pinus longaeva) that lend the preserve its name have a gnarled, twisted look to them, and wood so dense that it helps protect the tree from rot and disease. Trees in this grove are known to be nearly 5,000 years old, making them among the oldest living trees in the world. Because of the way trees radiate from their center as they grow, adding one ring essentially every year, scientists can gain glimpses of the past by studying their cores. Counting backward in time, we can visualize years with plentiful water and sunlight for growth as thicker, denser lines indicating a higher growth rate. Trees this old provide a nearly unprecedented time capsule of the climate that produced them, helping us to understand how today’s world differs from the one of our ancestors. 

This insight has always been valuable but is becoming even more critical as we face increasing temperatures outside the realm of what much of modern life has adapted to. My research aims to provide a nearly microscopic look at how the climate in the Great Basin is changing, from hour to hour and season to season. With scientific monitoring equipment positioned from the floor of the Great Basin’s Spring Valley up to the peak of Mount Washington, our project examines temperature fluctuations, atmospheric information, and snowpack insights across the region’s ecosystems by collecting data every 10 minutes. Named the Nevada Climate-Ecohydrological Assessment Network, or NevCAN, the research effort is now in its second decade. First established in part by my predecessors at DRI along with other colleagues from the Nevada System of Higher Education, the project offers a wealth of valuable climate monitoring information that can contribute to insights across scientific disciplines. 

Thanks to the foresight of the scientists who came before me, the data collected provides insight across ecosystems, winding from the valley floor’s sagebrush landscape to Mount Washington’s mid-elevation pinyon-juniper woodlands, to the higher elevation bristlecone pine grove, before winding down the mountain’s other side. The data from Mount Washington can be compared to a similar set of monitoring equipment set up across the Sheep Range just north of Las Vegas. Here, the lowest elevation stations sit in the Mojave Desert, among sprawling creosote-brush and Joshua trees, before climbing up into mid-elevation pinyon-juniper forests and high elevation ponderosa pine groves. 

Having over 10 years of data from the Nevada Bristlecone Preserve allows us to zoom in and out on the environmental processes that shape the mountain. Through this research, we’ve been able to ask questions that span timelines, from the 10-minute level of our data collection to the 5,000-year-old trees to the epochal age of the rocks and soil underlying the mountain. We can look at rapid environmental changes during sunrise and sunset or during the approach and onset of a quick thunderstorm. And we can zoom out to understand the climatology by looking at trends in changes in precipitation and temperature that impact the ecosystems. 

Scientists use data to identify stories in the world around us. Data can show us temperature swings of more than 50 degrees Fahrenheit in just 10 minutes with the onset of a dark and cold thunderstorm in the middle of August. We can observe the impacts of the nightly down-sloping winds that drive the coldest air to the bottom of the valley, helping us understand why the pinyon and juniper trees are growing at higher elevation, where it’s counterintuitively warmer. These first 10 years of data allow us to look at air temperature and precipitation trends, and the next 20 years of data will help us uncover some of the more long-term climatological changes occurring on the mountain. All the while, the ancient bristlecone pines have been collecting data for us over centuries — and millennia — in their tree rings. 

The type of research we’re doing with NevCAN facilitates scientific discovery that crosses the traditional boundaries of academic disciplines. The scientists who founded the program understood that the data collected on Mount Washington would be valuable to a range of researchers in different fields and intentionally brought these scientists together to create a project with foresight and long-term value to the scientific community. Building interdisciplinary teams to do this kind of science means that we can cross sectors to identify drivers of change. This mode of thinking acknowledges that the atmosphere impacts the weather, which drives rain, snow, drought, and fire risk. It acknowledges that as the snowpack melts or the monsoonal rains fall, the hydrologic response feeds streams, causes erosion, and regenerates groundwater. The atmospheric and hydrological cycles impact the ecosystem, driving elevational shifts in species, plant die-offs, or the generation of new growth after a fire. 

💡
To learn more about long-term science at Mount Washington, read Scotty Strachan's 02019 essay on Mountain Observatories and a Return to Environmental Long Science and former Long Now Director of Operations Laura Welcher's 02019 essay on The Long Now Foundation and a Great Basin Mountain Observatory for Long Science.

To really understand the mountain, we need everyone’s expertise: atmospheric scientists, hydrologists, ecologists, dendrochronologists, and even computer scientists and engineers to make sure we can get the data back to our collective offices to make meaning of it all. This kind of interdisciplinary science offers the opportunity to learn more about the intersection of scientific studies — a sometimes messy process that reflects the reality of how nature operates. 

Conducting long-term research like NevCAN is challenging for a number of reasons beyond finding sustainable funding, but the return is much greater than the sum of its parts. In order to create continuity between researchers over the years, the project team needs to identify future champions to pass the baton to, and systems that can preserve all the knowledge acquired. Over the years, the project’s technical knowledge, historical context, and stories of fire, wildlife, avalanches, and erosion continue to grow. Finding a cohesive team of dedicated people who are willing to be a single part of something bigger takes time, but the trust fostered within the group enables us to answer thorny and complex questions about the fundamental processes shaping our landscape.  

Being a Long Now Research Fellow funded by The Long Now Foundation has given me the privilege of being a steward of this mountain and of the data that facilitates this scientific discovery. This incredible opportunity allows me to be a part of something larger than myself and something that will endure beyond my tenure. It means that I get to be a mentee of some of the skilled stewards before me and a mentor to the next generation. In this way we are all connected to each other and to the mountain. We connect with each other by untangling difficult scientific questions; we connect with the mountain by spending long days traveling, camping, and experiencing the mountain from season to season; and we connect with the philosophy of The Long Now Foundation by fostering a deep appreciation for thinking on timescales that surpass human lifetimes. 


Setting up Alicia Eggert’s art exhibition on the top of Mt Washington. Photo by Anne Heggli.

To learn more about Anne’s work, read A New Tool Can Help Protect California and Nevada Communities from Floods While Preserving Their Water Supply on DRI’s website. 

This essay was written in collaboration with Elyse DeFranco, DRI’s Lead Science Writer. 

Planet Debian: Jonathan Dowland: procmail versus exim filters

I’ve been using Procmail to filter mail for a long time. Reading Antoine’s blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico's shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions).

My MTA is Exim, and for my first foray into this, I didn't want to change that [1]. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:

  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories

  2. I file messages into different folders depending on the outcome of the above filters

  3. I drop ("killfile") mail from some sender addresses (persistent pests on mailing lists); mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); mail encoded in a character set for a language I can't read (Russian, Korean, etc.); and several other simple static rules

  4. I move mailing list mail into folders, semi-automatically (see list filtering)

  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.

  6. I file a copy of some messages, the name of which is partly derived from the current calendar year

Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language. Contrast with the Procmail in list filtering:

if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif

Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boilerplate, and easy to add more addresses to in the future. This is the best I could come up with:

if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif

I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement.

It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar.
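
One possible workaround (not an Exim feature, just a sketch of generating the filter text from such a file) would be a small script along these lines:

#!/usr/bin/env python3
# Sketch: rebuild the kill-file block of an Exim user filter from a plain
# text file with one address per line. The file name is hypothetical, and
# since user filters have no include mechanism that I know of, in practice
# you would regenerate the whole filter file around this block.
from pathlib import Path

addresses = [
    line.strip()
    for line in Path.home().joinpath(".killfile-addresses").read_text().splitlines()
    if line.strip() and not line.startswith("#")
]

block = (
    f'if foranyaddress "{",".join(addresses)}"\n'
    "   ($reply_address contains $thisaddress\n"
    "    or $header_references contains $thisaddress)\n"
    "then finish endif\n"
)
print(block)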

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails.

With Exim filters, we can use pipe to invoke an external program:

pipe "$home/mail/mailreaver.crm -u $home/mail/"

However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors.

Here's Exim's documentation on what happens when the external command fails:

Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.

That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message.

The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which my Procmail recipe achieves.
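
For illustration, a thin wrapper around the external program could map any failure to exit code 75 (one of the codes Exim treats as a temporary error by default), so that a broken filter leads to deferral rather than a bounce. This is a minimal sketch, assuming the classifier reads the message on stdin and writes the rewritten message on stdout; the paths are placeholders mirroring the pipe command above:

#!/usr/bin/env python3
# Sketch of a wrapper for an external mail filter invoked via Exim's "pipe".
# Any failure exits with 75 (EX_TEMPFAIL) so Exim spools the message and
# retries later instead of bouncing it. Paths are placeholders.
import subprocess
import sys
from pathlib import Path

EX_TEMPFAIL = 75  # treated as a temporary error by Exim by default

def main() -> int:
    message = sys.stdin.buffer.read()
    home = Path.home()
    try:
        result = subprocess.run(
            [str(home / "mail" / "mailreaver.crm"), "-u", str(home / "mail") + "/"],
            input=message,
            stdout=subprocess.PIPE,
            timeout=120,
        )
    except (OSError, subprocess.TimeoutExpired):
        return EX_TEMPFAIL
    if result.returncode != 0:
        return EX_TEMPFAIL
    # Pass the (possibly rewritten) message on, e.g. to a delivery command.
    sys.stdout.buffer.write(result.stdout)
    return 0

if __name__ == "__main__":
    sys.exit(main())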

removing subject tagging

Here, Exim's filter language comes unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context, it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available for user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year.

next steps

Exim's Sieve implementation and its own filter language are both ruled out as Procmail replacements because they can't do at least two of the things I need to do.

However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.


  [1] I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome.

Worse Than Failure: Editor's Soapbox: AI: The Bad, the Worse, and the Ugly

…the average American, I think, has fewer than three friends. And the average person has demand for meaningfully more, I think it's like 15 friends or something, right?
- Mark Zuckerberg, presumably to one of his three friends

Since even the President of the United States is using ChatGPT to cheat on his homework and make bonkers social media posts these days, we need to have a talk about AI.

Right now, AI is being shoe-horned into everything, whether or not it makes sense. To me, it feels like the dotcom boom again. Millipedes.com! Fungus.net! Business plan? What business plan? Just secure the domain names and crank out some Super Bowl ads. We'll be RICH!

In fact, it's not just my feeling. The Large Language Model (LLM) vendor OpenAI is being wildly overvalued and overhyped. It's hard to see how it will generate more revenue while its offerings remain underwhelming and unreliable in so many ways. Hallucination, bias, and other fatal flaws make it a non-starter for businesses like journalism that must have accurate output. Why would anyone convert to a paid plan? Even if there weren't an income problem—even if every customer became a paying customer—generative AI's exorbitant operational and environmental costs are poised to drown whatever revenue and funding they manage to scrape together.

Lest we think the problem is contained to OpenAI or LLMs, there's not a single profitable AI venture out there. And it's largely not helping other companies to be more profitable, either.

A moment like this requires us to step back, take a deep breath. With sober curiosity, we gotta explore and understand AI's true strengths and weaknesses. More importantly, we have to figure out what we are and aren't willing to accept from AI, personally and as a society. We need thoughtful ethics and policies that protect people and the environment. We need strong laws to prevent the worst abuses. Plenty of us have already been victimized by the absence of such. For instance, one of my own short stories was used by Meta without permission to train their AI.

The Worst of AI
Sadly, it is all too easy to find appalling examples of all the ways generative AI is harming us. (For most of these, I'm not going to provide links because they don't deserve the clicks):

  • We all know that person who no longer seems to have a brain of their own because they keep asking OpenAI to do all of their thinking for them.
  • Deepfakes deliberately created to deceive people.
  • Cheating by students.
  • Cheating by giant corporations who are all too happy to ignore IP and copyright when it benefits them (Meta, ahem).
  • Piles and piles of creepy generated content on platforms like YouTube and TikTok that can be wildly inaccurate.
  • Scammy platforms like DataAnnotation, Mindrift, and Outlier that offer $20/hr or more for you to "train their AI." Instead, they simply gather your data and inputs and ghost the vast majority of applicants. I tried taking DataAnnotation's test for myself to see what would happen; after all, it would've been nice to have some supplemental income while job hunting. After several weeks, I still haven't heard back from them.
  • Applicant Tracking Systems (ATS) block job applications from ever reaching a human being for review. As my job search drags on, I feel like my life has been reduced to a tedious slog of keyword matching. Did I use the word "collaboration" somewhere in my resume? Pass. Did I use the word "teamwork" instead? Fail. Did I use the word "collaboration," but the AI failed to detect it, as regularly happens? Fail, fail, fail some more. Frustrated, I and no doubt countless others have been forced to turn to other AIs in hopes of defeating those AIs. While algorithms battle algorithms, companies and unemployed workers are all suffering.
  • Horrific, undeniable environmental destruction.
  • Brace yourself: a 14-year-old killed himself with the encouragement of the chatbot he'd fallen in love with. I can only imagine how many more young people have been harmed and are being actively harmed right now.

The Best of AI?
As AI began to show up everywhere, as seemingly everyone from Google to Apple demanded that I start using it, I initially responded with aversion and resentment. I never bothered with it, and I disabled it wherever I could. When people told me to use it, I waved them off. My life seemed no worse for it.

Alas, now AI completely saturates my days while job searching, bringing on even greater resentment. Thousands of open positions for AI-based startups! Thousands of companies demanding expertise in generative AI as if it's been around for decades. Well, gee, maybe my hatred and aversion are hurting my ability to get hired. Am I being a middle-aged Luddite here? Should I be learning more about AI (and putting it on my resume)? Wouldn't I be the bigger person to work past my aversion in order to learn about and highlight some of the ways we can use AI responsibly?

I tried. I really tried. To be honest, I simply haven't found a single positive generative AI use-case that justifies all the harm taking place.

So, What Do We Do?
Here are some thoughts: don't invest in generative AI or seek a job within the field, it's all gonna blow. Lobby your government to investigate abuses, protect people, and preserve the environment. Avoid AI usage and, if you're a writer like me, make clear that AI is not used in any part of your process. Gently encourage that one person you know to start thinking for themselves again.

Most critically of all: wherever AI must be used for the time being, ensure that one or more humans review the results.

365 Tomorrows: The Final Slice

Author: Colin Jeffrey On some mornings, around eleven, the postman will drop a letter or two into the mail slot. But many of these are not letters – they are coded messages disguised as bills or advertisements. Only I know their secrets. You see, I am a messenger of the gods. Just yesterday, I was […]

The post The Final Slice appeared first on 365tomorrows.

xkcd: Globe Safety

Planet Debian: Enrico Zini: Python-like abspath for c++

Python's os.path.abspath or Path.absolute are great: you give them a path, which might not exist, and you get a path you can use regardless of the current directory. os.path.abspath will also normalize it, while Path.absolute will not by default, because with Path objects a normalized form is less often needed.

This is great to normalize input, regardless of if it's an existing file you're needing to open, or a new file you're needing to create.
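
For reference, a minimal demonstration of the Python behaviour being emulated (assuming a Unix-like system where /tmp exists; none of the path components below need to exist):

# Python's abspath works on paths that don't exist, resolving them against
# the current directory; only os.path.abspath also normalizes the result.
import os
from pathlib import Path

os.chdir("/tmp")
print(os.path.abspath("does/not/../exist.txt"))   # /tmp/does/exist.txt
print(Path("does/not/../exist.txt").absolute())   # /tmp/does/not/../exist.txt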

In C++17, there is a filesystem library with methods with enticingly similar names, but which are almost, but not quite, totally unlike Python's abspath.

Because in my C++ code I need to normalize input, regardless of if it's an existing file I'm needing to open or a new file I'm needing to create, here's an apparently working Python-like abspath for C++ implemented on top of the std::filesystem library:

std::filesystem::path abspath(const std::filesystem::path& path)
{
    // weakly_canonical is defined as "the result of calling canonical() with a
    // path argument composed of the leading elements of p that exist (as
    // determined by status(p) or status(p, ec)), if any, followed by the
    // elements of p that do not exist."
    //
    // This means that if no lead components of the path exist then the
    // resulting path is not made absolute, and we need to work around that.
    if (!path.is_absolute())
        return abspath(std::filesystem::current_path() / path);

    // This is further and needlessly complicated because we need to work
    // around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    unsigned retry = 0;
    while (true)
    {
        std::error_code code;
        auto result = std::filesystem::weakly_canonical(path, code);
        if (!code)
        {
            // fprintf(stderr, "%s: ok in %u tries\n", path.c_str(), retry+1);
            return result;
        }

        if (code == std::errc::no_such_file_or_directory)
        {
            ++retry;
            if (retry > 50)
                throw std::system_error(code);
        }
        else
            throw std::system_error(code);
    }

    // Alternative implementation that however may not work on all platforms
    // since, formally, "[std::filesystem::absolute] Implementations are
    // encouraged to not consider p not existing to be an error", but they do
    // not mandate it, and if they did, they might still be affected by the
    // undefined behaviour outlined in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    //
    // return std::filesystem::absolute(path).lexically_normal();
}

I added it to my wobble code repository, which is the thin repository of components I use to ease my C++ systems programming.

Worse Than Failure: CodeSOD: The Big Pictures

Loading times for web pages are one of the key metrics we like to tune. Users will put up with a lot if they feel like the application is responsive. So when Caivs was handed 20MB of PHP and told, "one of the key pages takes like 30-45 seconds to load. Figure out why," it was at least a clear goal.

Combing through that gigantic pile of code to try and understand what was happening was an uphill battle. Eventually, Caivs just decided to check the traffic logs while running the application. That highlighted a huge spike in traffic every time the page loaded, and that helped Caivs narrow down exactly where the problem was.

$first_image = '';
foreach($images as $the_image)
{ 
    $image = $the_image['url'];
 
  if(file_exists($config->base_url.'/uploads/'.$image))
  {
    if($first_image=='')
    {
      $first_image = $image;
    }
   
    $image_dimensions = '&w=648&h=432';
    $get_dimensions = getimagesize('http://old.datacenter.ip.address/'.$config->base_url.'/uploads/'.$image);
    if($get_dimensions[0] < $get_dimensions[1])
      $image_dimensions = '&h=432';

    echo '<li>'.$config->base_url.'/timthumb.php?src='.$config->base_url.'/uploads/'.$image.'&w=125&h=80&zc=1'), 'javascript:;', array('onclick'=>'$(\'.image_gallery .feature .image\').html(\''.$config->base_url.'/timthumb.php?src='.$config->base_url.'/uploads/'.$image.$image_dimensions.'&zc=1').'\');$(\'.image_gallery .feature .title\').show();$(\'.image_gallery .feature .title\').html("'.str_replace('"', '', $the_image['Image Description']).'");$(\'.image_gallery .bar ul li a\').removeClass(\'active\');$(\'.image_gallery .bar ul li\').removeClass(\'active\');$(this).addClass(\'active\');$(this).parents(\'li\').addClass(\'active\');sidebarHeight();curImg=$(this).attr(\'id\');translate()','id'=>$img_num)).'</li>';
    $img_num++;
  }
}

For every image they want to display in a gallery, they echo out a list item for it, and that part makes sense- more or less. The mix of PHP, JavaScript, jQuery, and HTML tags is ugly and awful and I hate it. But that's just a prosaic kind of awful, background radiation of looking at PHP code. Yes, it should be launched into the Kuiper belt (it doesn't deserve the higher delta-V required to launch it into the sun), but that's not why we're here.

The cause of the long load times was in the lines above, where for each image we call getimagesize- a function which downloads the image and checks its stats- all so we can set $image_dimensions. Presumably, the server hosting the images uses those query string parameters to resize the returned image.

All this just to check: if the height is greater than the width, we force the height to be 432 pixels; otherwise we force the whole image to be 648x432 pixels.

Now, the server supplying those images had absolutely no caching, so that meant for every image request it needed to resize the image before sending. And for reasons which were unclear, if the requested aspect ratio were wildly different than the actual aspect ratio, it would also sometimes just refuse to resize and return a gigantic original image file. But someone also had thought about the perils of badly behaved clients downloading too many images, so if a single host were requesting too many images, it would start throttling the responses.

When you add all this up, it meant that this PHP web application was getting throttled by its own file server, because it was requesting too many images, too quickly. Any reasonable user load hitting it would be viewed as an attempted denial of service attack on the file hosting backend.

Caivs was able to simply remove the image-size check, and add a few CSS rules which ensured that files in the gallery wouldn't misbehave terribly. The performance problems went away, at least for that page of the application. Buried in that 20MB of PHP/HTML code, there were plenty more places where things could go wrong.

365 Tomorrows: Accidents Happen

Author: Julian Miles, Staff Writer The control room is gleaming. Elias Medelsson looks about with a smile. The night watch clearly made a successful conversion of tedium to effort. He’ll drop a memo to his counterpart on the Benthusian side to express thanks. “Captain Medelsson.” Elias turns to find Siun Heplepara, the Benthusian he had […]

The post Accidents Happen appeared first on 365tomorrows.

Cryptogram: Chinese AI Submersible

A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”

“Research rockets.” Sure.

Planet Debian: Ravi Dwivedi: A visit to Paris

After attending the 2024 LibreOffice conference in Luxembourg, I visited Paris in October 2024.

If you are wondering whether I needed another visa to cross the border into France— I didn't! Further, both countries are EU members, which means you don't need to go through customs either. Thus, crossing the Luxembourg-France border is no different from crossing Indian state borders - like going from Rajasthan to Uttar Pradesh.

I took a TGV train from Luxembourg Central Station, which was within walking distance from my hostel. The train took only 2 hours and 20 minutes to cover the 300 km distance to Paris. It departed from Luxembourg at 10:00 AM and reached Paris at 12:20 PM. The ride was smooth and comfortable, and the train arrived on time. It gave me an opportunity to see the countryside of France. I booked the train ticket online a couple of days prior through the Omio website.

A train standing on a platform

TGV train I rode from Luxembourg to Paris

I planned the first day with my friend Joenio, whom I met upon arriving at Paris’ Gare de l’Est station, along with his wife Mari. We went to my hostel (which was within walking distance from the station) to store my luggage, but we were informed that we needed to wait for a couple of hours before I could check in. Consequently, we went to an Italian restaurant nearby for lunch, where I ordered pasta. My hostel was so unbelievably cheap by French standards (25 euros per night) that Joenio was shocked when he learned about it.

Pasta on a plate topped with Ricotta cheese

Pasta I had in Paris

Walking in the city, I noticed it had separate cycling tracks and wide footpaths, just like Luxembourg. The traffic was also organized. For instance, there were traffic lights even for pedestrian crossings, unlike India, where crossing roads can be a nightmare. Car drivers stopping for pedestrians is a big improvement over what I am used to in India. The weather was also pleasant. It was a bit on the cooler side - around 15 degrees Celsius - and I had to wear a jacket.

A cycling track in Paris

A cycling track in Paris

After lunch, we returned to my hostel for my check-in at around 3 o’clock. Then, we went to the Luxembourg Museum (Musée du Luxembourg in French) as Joenio had booked tickets for an exhibition of paintings by the Brazilian painter Tarsila do Amaral. To reach there, we took a subway train from Gare du Nord station. The Paris subway charges 2.15 euros irrespective of the distance (or number of stations) traveled, as opposed to other metro systems I have used.

We reached the museum at around 4 o’clock. I found the paintings beautiful, but I would have appreciated them much more if the descriptions were in English.

A building with trees on the left and right side of it and sky in the background. People can be seen in front of the building.

Luxembourg Museum

Afterward, we went to a beautiful garden just behind the museum. It served as a great spot to relax and take pictures. Following this, we walked to the Pantheon - a well-known attraction in the city. It is a church built a couple of centuries ago. It has a dome-shaped structure at the top, recognizable from far away.

A building with a garden in front of it and people sitting closer to us. Sky can be seen in the background.

A shot of the park near the Luxembourg Museum

A building with a dome shaped structure on top. Closer to camera, roads can be seen. In the background is blue colored cloudy sky.

Pantheon, one of the attractions of Paris.

Then we went to Notre Dame after having evening snacks and coffee at a nearby bakery. The Notre Dame was just over a kilometer from the Pantheon, so we took a walk. We also crossed the beautiful Seine river. On the way, I sampled Crêpe, a popular French dish. The shop was named Crêperie and had many varieties of Crêpe. I took the one with eggs and Emmental cheese. It was savory and delicious.

Photo with Joenio and Mari

Photo with Joenio and Mari

Notre Dame, another tourist attraction of Paris.

Notre Dame, another tourist attraction of Paris.

By the time we reached Notre Dame, it was 07:30 PM. I learned from Joenio that Notre Dame was closed and being renovated due to a fire a couple of years ago, so we just sat around and clicked photos. It is a Catholic cathedral built in French Gothic architecture (I read that on Wikipedia ;)). I also read on Wikipedia that it is located on an island named Île de la Cité - I didn’t even realize we were on an island.

At night, we visited the most well-known attraction of Paris, The Eiffel Tower. We again took the subway, alighting at the Bir-Hakeim station, followed by a short walk. We reached the Eiffel Tower at 9 o’clock. It was lit bright yellow. There was not much to do there, so we just clicked photos and hung out. After that, I came back to my hostel.

The Eiffel Tower lit with bright yellow

My photo with Eiffel Tower in the background

The next day, I roamed around the city, mostly on foot. France is known for its bakeries, so I checked out a couple of local ones. I had espresso a couple of times and sampled a croissant, a pain au chocolat and a lemon meringue tartlet.

Items at a bakery in Paris. Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Here are some random shots:

The Paris subway

The Paris subway

Inside a Paris metro train

Inside a Paris subway

A random building and road in Paris

A random building and road in Paris

A shot near Seine river

A shot near Seine river

A view of Seine river

A view of Seine river

On the third day, I had my flight to India. Thus, I checked out of the hostel early in the morning and took an RER train from Gare du Nord station to the airport. It cost 11.80 euros.

I heard some of my friends had bad experiences in France, so I had the impression that I would not feel welcome. Furthermore, I had encountered language problems on my previous Europe trip to Albania and Kosovo. Therefore, I learned a couple of French words, like how to say thank you and good morning, which went a long way.

However, I didn’t have bad experiences in Paris, except for one instance in which I asked my hostel’s reception about my misplaced watch and the person at the reception asked me to be “polite” by being rude. She said, “Excuse me! You don’t know how to say Good Morning?”

Overall, I enjoyed my time in Paris and would like to thank Joenio and Mari for joining me. I would also like to thank Sophie, who gave me a map of Paris.

Let’s end this post here. I’ll meet you in the next one!

Credits: Thanks to contrapunctus for reviewing this post before publishing

Cryptogram Fake Student Fraud in Community Colleges

Reporting on the rise of fake students enrolling in community college courses:

The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.

The article talks about the rise of this type of fraud, the difficulty of detecting it, and how it upends quite a bit of the class structure and learning community.

Slashdot thread.

Cryptogram Another Move in the Deepfake Creation/Detection Arms Race

Deepfakes are now mimicking heartbeats

In a nutshell

  • Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats.
  • The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology.
  • To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy.

And the AI models will start mimicking that.

Planet DebianDaniel Lange: Make `apt` shut up about "modernize-sources" in Trixie

Apt in Trixie (Debian 13) has the annoying habit of telling you "Notice: Some sources can be modernized. Run 'apt modernize-sources' to do so." ... every single time you run apt update. Not cool for logs and log monitoring.

And - of course - if you had the option to do this, you ... would already have run the indicated apt modernize-sources command to convert your sources.list to "deb822 .sources format" files. So an informational message once or twice would have done.

Well, luckily you can help yourself:

apt -o APT::Get::Update::SourceListWarnings=false will keep apt shut up. This could go into an alias or your systems management tool / update script.
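
For example, a throwaway alias along these lines (just a sketch; the alias name is arbitrary):

alias aptup='sudo apt -o APT::Get::Update::SourceListWarnings=false update'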

Alternatively add

# Keep apt shut about preferring the "deb822" sources file format
APT::Get::Update::SourceListWarnings "false";

to /etc/apt/apt.conf.d/10quellsourceformatwarnings .
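
One way to create that file from a shell (just a sketch; your editor of choice works as well):

printf '%s\n' \
  '# Keep apt shut about preferring the "deb822" sources file format' \
  'APT::Get::Update::SourceListWarnings "false";' \
  | sudo tee /etc/apt/apt.conf.d/10quellsourceformatwarnings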

This silences the notices about sources file formats (not only the deb822 one) system-wide. That way you can decide when you can / want to migrate to the new, more verbose, apt sources format yourself.

Worse Than FailureCodeSOD: A Double Date

Alice picked up a ticket about a broken date calculation in a React application, and dropped into the code to take a look. There, she found this:

export function calcYears(date) {
  return date && Math.floor((new Date() - new Date(date).getTime()) / 3.15576e10)
}

She stared at it for a while, trying to understand what the hell this was doing, and why it was dividing by some thirty billion. Also, why there was a && in there. But after staring at it for a few minutes, the sick logic of the code makes sense. getTime returns a timestamp in milliseconds. 3.15576e10 is the number of milliseconds in a year (365.25 days × 86,400 seconds × 1,000 milliseconds). So the Math.floor() expression just gets the difference between two dates as a number of years. The && is just a coalescing operator- the last truthy value gets returned, so if for some reason we can't calculate the number of years (because of bad input, perhaps?), we just return the original input date, because that's a brillant way to handle errors.

As bizarre as this code is, this isn't the code that was causing problems. It works just fine. So why did Alice get a ticket? She spent some more time puzzling over that, while reading through the code, only to discover that this calcYears function was used almost everywhere in the code- but in one spot, someone decided to write their own.

if (birthday) {
      let year = birthday?.split('-', 1)
      if (year[0] != '') {
        let years = new Date().getFullYear() - year[0]
        return years
      }
}

So, this function also works, and is maybe a bit more clear about what it's doing than the calcYears. But note the use of split- this assumes a lot about the input format of the date, and that assumption isn't always reliable. While calcYears still does unexpected things if you fail to give it good input, its accepted range of inputs is broader. Here, if we're not in a date format which starts with "YYYY-", this blows up.

After spending hours puzzling over this, Alice writes:

I HATE HOW NO ONE KNOWS HOW TO CODE

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianSergio Talens-Oliag: Argo CD Usage Examples

As a follow-up to my post about the use of argocd-autopilot, I’m going to deploy various applications to the cluster using Argo CD from the same repository we used in the previous post.

For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the argocd-server (the resource was updated, but the application Pod was not restarted because there was no change on the argocd-server deployment); our original fix was to kill the pod manually, but that manual operation is something we want to avoid.

The solution proposed in the helm documentation for this kind of issue is to add annotations to the Deployments with values that are a hash of the ConfigMaps or Secrets used by them; this way, if a file is updated the annotation is also updated, and when the Deployment changes are applied a roll out of the pods is triggered.
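
To illustrate the idea (just a sketch of the concept, not something used verbatim in this post), the value for such an annotation can be derived by hashing the rendered ConfigMap, for example:

❯ kubectl -n argocd get configmap argocd-cm -o yaml | sha256sum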

On this post we will install a couple of controllers and an application to show how we can handle Secrets with argocd and solve the issue with updates on ConfigMaps and Secrets. To do it we will execute the following tasks:

  1. Deploy the Reloader controller to our cluster. It is a tool that watches changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment, StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some annotations to the objects to make things work).
  2. Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its job when we add or update a ConfigMap.
  3. Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.

Creating the test project for argocd-autopilot

As we did our installation using argocd-autopilot we will use its structure to manage the applications.

The first thing to do is to create a project (we will name it test) as follows:

❯ argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'

Now that the test project is available we will use it on our argocd-autopilot invocations when creating applications.

Installing the reloader controller

To add the reloader application to the test project as a kustomize application and deploy it on the tools namespace with argocd-autopilot we do the following:

❯ argocd-autopilot app create reloader \
    --app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
    --project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader

That command creates four files on the argocd repository:

  1. One to create the tools namespace:

    bootstrap/cluster-resources/in-cluster/tools-ns.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        argocd.argoproj.io/sync-options: Prune=false
      creationTimestamp: null
      name: tools
    spec: {}
    status: {}
  2. Another to include the reloader base application from the upstream repository:

    apps/reloader/base/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
  3. The kustomization.yaml file for the test project (by default it includes the same configuration used on the base definition, but we could make other changes if needed):

    apps/reloader/overlays/test/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: tools
    resources:
    - ../../base
  4. The config.json file used to define the application on argocd for the test project (it points to the folder that includes the previous kustomization.yaml file):

    apps/reloader/overlays/test/config.json
    {
      "appName": "reloader",
      "userGivenName": "reloader",
      "destNamespace": "tools",
      "destServer": "https://kubernetes.default.svc",
      "srcPath": "apps/reloader/overlays/test",
      "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
      "srcTargetRevision": "",
      "labels": null,
      "annotations": null
    }

We can check that the application is working using the argocd command line application:

❯ argocd app get argocd/test-reloader -o tree
Name:               argocd/test-reloader
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          tools
URL:                https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/reloader/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (2893b56)
Health Status:      Healthy

KIND/NAME                                          STATUS  HEALTH   MESSAGE
ClusterRole/reloader-reloader-role                 Synced
ClusterRoleBinding/reloader-reloader-role-binding  Synced
ServiceAccount/reloader-reloader                   Synced           serviceaccount/reloader-reloader created
Deployment/reloader-reloader                       Synced  Healthy  deployment.apps/reloader-reloader created
└─ReplicaSet/reloader-reloader-5b6dcc7b6f                  Healthy
  └─Pod/reloader-reloader-5b6dcc7b6f-vwjcx                 Healthy

Adding flags to the reloader server

The runtime configuration flags for the reloader server are described on the project README.md file; in our case we want to adjust three values:

  • We want to enable the option to reload a workload when a ConfigMap or Secret is created,
  • We want to enable the option to reload a workload when a ConfigMap or Secret is deleted,
  • We want to use the annotations strategy for reloads, as it is the recommended mode of operation when using argocd.

To pass them we edit the apps/reloader/overlays/test/kustomization.yaml file to patch the pod container template; the text added is the following:

patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

After committing and pushing the updated file the system launches the application with the new options.
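
For completeness, a minimal sketch of that step, followed by a check that the arguments actually reached the running Deployment (this assumes we work from the root of the argocd repository checkout and have kubectl access to the cluster; the commit message is just an example):

❯ git add apps/reloader/overlays/test/kustomization.yaml
❯ git commit -m 'reloader: pass extra runtime flags'
❯ git push
❯ kubectl -n tools get deployment reloader-reloader \
    -o jsonpath='{.spec.template.spec.containers[0].args}'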

The dummyhttp application

To do a quick test we are going to deploy the dummyhttp web server using an image generated using the following Dockerfile:

# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>

# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX

# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3

# Tool versions
ARG DUMMYHTTP_VERS=1.1.1

# Download binary
RUN ARCH="$(apk --print-arch)" && \
  VERS="$DUMMYHTTP_VERS" && \
  URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
  wget "$URL" -O "/tmp/dummyhttp" && \
  install /tmp/dummyhttp /usr/local/bin && \
  rm -f /tmp/dummyhttp

# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]

The kustomize base application is available on a monorepo that contains the following files:

  1. A Deployment definition that uses the previous image but uses /bin/sh -c as its entrypoint (command in the k8s Pod terminology) and passes as its argument a string that runs the eval command to be able to expand environment variables passed to the pod (the definition includes two optional variables, one taken from a ConfigMap and another one from a Secret):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dummyhttp
      labels:
        app: dummyhttp
    spec:
      selector:
        matchLabels:
          app: dummyhttp
      template:
        metadata:
          labels:
            app: dummyhttp
        spec:
          containers:
          - name: dummyhttp
            image: forgejo.mixinet.net/oci/dummyhttp:1.0.0
            command: [ "/bin/sh", "-c" ]
            args:
            - 'eval dummyhttp -b \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\"'
            ports:
            - containerPort: 8080
            env:
            - name: CM_VAR
              valueFrom:
                configMapKeyRef:
                  name: dummyhttp-configmap
                  key: CM_VAR
                  optional: true
            - name: SECRET_VAR
              valueFrom:
                secretKeyRef:
                  name: dummyhttp-secret
                  key: SECRET_VAR
                  optional: true
  2. A Service that publishes the previous Deployment (the only relevant thing to mention is that the web server uses the port 8080 by default):

    apiVersion: v1
    kind: Service
    metadata:
      name: dummyhttp
    spec:
      selector:
        app: dummyhttp
      ports:
      - name: http
        port: 80
        targetPort: 8080
  3. An Ingress definition to allow access to the application from the outside:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dummyhttp
      annotations:
        traefik.ingress.kubernetes.io/router.tls: "true"
    spec:
      rules:
        - host: dummyhttp.localhost.mixinet.net
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: dummyhttp
                    port:
                      number: 80
  4. And the kustomization.yaml file that includes the previous files:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml

Deploying the dummyhttp application from argocd

We could create the dummyhttp application using the argocd-autopilot command as we did in the reloader case, but we are going to do it manually to show how simple it is.

First we’ve created the apps/dummyhttp/base/kustomization.yaml file to include the application from the previous repository:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0

As a second step we create the apps/dummyhttp/overlays/test/kustomization.yaml file to include the previous file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base

And finally we add the apps/dummyhttp/overlays/test/config.json file to configure the application as the ApplicationSet defined by argocd-autopilot expects:

{
  "appName": "dummyhttp",
  "userGivenName": "dummyhttp",
  "destNamespace": "default",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/dummyhttp/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we have the three files we commit and push the changes and argocd deploys the application; we can check that things are working using curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/ | jq -M .
{
  "c": "",
  "s": ""
}

Patching the application

Now we will add patches to the apps/dummyhttp/overlays/test/kustomization.yaml file:

  • One to add annotations for reloader (one to enable it and another one to set the roll out strategy to restart to avoid touching the deployments, as that can generate issues with argocd).
  • Another to change the ingress hostname (not really needed, but something quite reasonable for a specific project).

The file diff is as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,3 +2,22 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+patches:
+# Add reloader annotations
+- target:
+    kind: Deployment
+    name: dummyhttp
+  patch: |-
+    - op: add
+      path: /metadata/annotations
+      value:
+        reloader.stakater.com/auto: "true"
+        reloader.stakater.com/rollout-strategy: "restart"
+# Change the ingress host name
+- target:
+    kind: Ingress
+    name: dummyhttp
+  patch: |-
+    - op: replace
+      path: /spec/rules/0/host
+      value: test-dummyhttp.lo.mixinet.net

After committing and pushing the changes we can use the argocd cli to check the status of the application:

❯ argocd app get argocd/test-dummyhttp -o tree
Name:               argocd/test-dummyhttp
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/dummyhttp/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (fbc6031)
Health Status:      Healthy

KIND/NAME                           STATUS  HEALTH   MESSAGE
Deployment/dummyhttp                Synced  Healthy  deployment.apps/dummyhttp configured
└─ReplicaSet/dummyhttp-55569589bc           Healthy
  └─Pod/dummyhttp-55569589bc-qhnfk          Healthy
Ingress/dummyhttp                   Synced  Healthy  ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp                   Synced  Healthy  service/dummyhttp unchanged
├─Endpoints/dummyhttp
└─EndpointSlice/dummyhttp-x57bl

As we can see, the Deployment and Ingress were updated, but the Service is unchanged.

To validate that the ingress is using the new hostname we can use curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443/
{"c": "", "s": ""}

Adding a ConfigMap

Now that the system is adjusted to reload the application when the ConfigMap or Secret is created, deleted or updated we are ready to add one file and see how the system reacts.

We modify the apps/dummyhttp/overlays/test/kustomization.yaml file to create the ConfigMap using the configMapGenerator as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+  literals:
+  - CM_VAR="Default Test Value"
+  behavior: create
+  options:
+    disableNameSuffixHash: true
 patches:
 # Add reloader annotations
 - target:

After committing and pushing the changes we can see that the ConfigMap is available, the pod has been deleted and started again and the curl output includes the new value:

❯ kubectl get configmaps,pods
NAME                          DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                         READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}

Using helm with argocd-autopilot

Right now there is no direct support in argocd-autopilot to manage applications using helm (see the issue #38 on the project), but we want to use a chart in our next example.

There are multiple ways to add the support, but the simplest one that allows us to keep using argocd-autopilot is to use kustomize applications that call helm as described here.

The only thing needed before being able to use the approach is to add the kustomize.buildOptions flag to the argocd-cm on the bootstrap/argo-cd/kustomization.yaml file; its contents are now as follows:

bootstrap/argo-cd/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  # Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
  - kustomize.buildOptions="--enable-helm"
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://forgejo.mixinet.net/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
  literals:
  - "server.insecure=true"
  name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml

On the following section we will explain how the application is defined to make things work.

Installing the sealed-secrets controller

To manage secrets in our cluster we are going to use the sealed-secrets controller and to install it we are going to use its chart.

As we mentioned in the previous section, the idea is to create a kustomize application and use that to deploy the chart, but we are going to create the files manually, as we are not going to import the base kustomization files from a remote repository.

As there is no clear way to override helm Chart values using overlays we are going to use a generator to create the helm configuration from an external resource and include it from our overlays (the idea has been taken from this repository, which was referenced from a comment on the argocd-autopilot issue #38 mentioned earlier).

The sealed-secrets application

We have created the following files and folders manually:

apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml

The helm folder contains the generator template that will be included from our overlays.

The kustomization.yaml includes the chart.yaml as a resource:

apps/sealed-secrets/helm/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- chart.yaml

And the chart.yaml file defines the HelmChartInflationGenerator:

apps/sealed-secrets/helm/chart.yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
  fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml

For this chart the template adjusts the namespace to kube-system and adds the fullnameOverride on the valuesInline key because we want to use those settings on all the projects (they are the values expected by the kubeseal command line application, so we adjust them to avoid the need to add additional parameters to it).

We set the global values inline to be able to use the valuesFile from our overlays; as we are using a generator, the path is relative to the folder that contains the kustomization.yaml file that calls it, so in our case we will need to have a values.yaml file on each overlay folder (if we don’t want to overwrite any values for a project we can create an empty file, but it has to exist).

Finally, our overlay folder contains three files, a kustomization.yaml file that includes the generator from the helm folder, the values.yaml file needed by the chart and the config.json file used by argocd-autopilot to install the application.

The kustomization.yaml file contents are:

apps/sealed-secrets/overlays/test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base
generators:
- ../../helm

The values.yaml file enables the ingress for the application and adjusts its hostname:

apps/sealed-secrets/overlays/test/values.yaml
ingress:
  enabled: true
  hostname: test-sealed-secrets.lo.mixinet.net

And the config.json file is similar to the ones used with the other applications we have installed:

apps/sealed-secrets/overlays/test/config.json
{
  "appName": "sealed-secrets",
  "userGivenName": "sealed-secrets",
  "destNamespace": "kube-system",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/sealed-secrets/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we commit and push the files the sealed-secrets application is installed in our cluster; we can check it using curl to get the public certificate used by it:

❯ curl -s https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----

The dummyhttp-secret

To create sealed secrets we need to install the kubeseal tool:

❯ arkade get kubeseal

Now we create a local version of the dummyhttp-secret that contains some value on the SECRET_VAR key (the easiest way for doing it is to use kubectl):

❯ echo -n "Boo" | kubectl create secret generic dummyhttp-secret \
    --dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
    >/tmp/dummyhttp-secret.yaml

The secret definition in yaml format is:

apiVersion: v1
data:
  SECRET_VAR: Qm9v
kind: Secret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret

To create a sealed version using the kubeseal tool we can do the following:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml

That invocation needs to have access to the cluster to do its job and in our case it works because we modified the chart to use the kube-system namespace and set the controller name to sealed-secrets-controller as the tool expects.

If we need to create the secrets without credentials we can connect to the ingress address we added to retrieve the public key:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem

Or, if we don’t have access to the ingress address, we can save the certificate on a file and use it instead of the URL.
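As an illustration (a sketch only; the file name is arbitrary, and the --fetch-cert call still needs cluster access once to download the certificate):

❯ kubeseal --fetch-cert > /tmp/sealed-secrets-cert.pem
❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert /tmp/sealed-secrets-cert.pem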

The sealed version of the secret looks like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
  namespace: default
spec:
  encryptedData:
    SECRET_VAR: [...]
  template:
    metadata:
      creationTimestamp: null
      name: dummyhttp-secret
      namespace: default

This file can be deployed to the cluster to create the secret (in our case we will add it to the argocd application), but before doing that we are going to check the output of our dummyhttp service and get the list of Secrets and SealedSecrets in the default namespace:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
❯ kubectl get sealedsecrets,secrets
No resources found in default namespace.

Now we add the SealedSecret to the dummyapp copying the file and adding it to the kustomization.yaml file:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+- dummyhttp-sealed-secret.yaml
 # Create the config map value
 configMapGenerator:
 - name: dummyhttp-configmap

Once we commit and push the files Argo CD creates the SealedSecret and the controller generates the Secret:

❯ kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
❯ kubectl get sealedsecrets,secrets
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     3s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      3s

If we check the command output we can see the new value of the secret:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": "Boo"
}

Using sealed-secrets in production clusters

If you plan to use sealed-secrets look into its documentation to understand how it manages the private keys, how to backup things and keep in mind that, as the documentation explains, you can rotate your sealed version of the secrets, but that doesn’t change the actual secrets.
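
As an illustration only (check the upstream documentation for the recommended procedure; the label below is the one the controller is assumed to put on its key secrets), a basic backup of the sealing keys could look like this:

❯ kubectl -n kube-system get secret \
    -l sealedsecrets.bitnami.com/sealed-secrets-key \
    -o yaml > sealed-secrets-keys-backup.yaml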

If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the controller also rotates the encryption keys your new sealed version will also be using a newer key, so you will be doing both things at the same time).

Final remarks

On this post we have seen how to deploy applications using the argocd-autopilot model, including the use of helm charts inside kustomize applications and how to install and use the sealed-secrets controller.

It has been interesting and I’ve learnt a lot about argocd in the process, but I believe that if I ever want to use it in production I will also review the native helm support in argocd using a separate repository to manage the applications, at least to be able to compare it to the model explained here.

365 TomorrowsEarth Day

Author: Chelsea Utecht Today is the day our masters treat us to sweet snacks of expensive corn and sing a song to celebrate their love for us – “Happy Earth Day to you! Happy Earth Day to you! Happy Earth day, our humans!” – because today the orbit aligns so that we can see a […]

The post Earth Day appeared first on 365tomorrows.

xkcdAbout 20 Pounds

,

Planet DebianDirk Eddelbuettel: #47: r2u at its Third Birthday

Welcome to post 47 in the $R^4 series!

r2u provides Ubuntu binaries for all CRAN packages for the R system. It started three years ago, and offers for Linux users on Ubuntu what Windows and macOS users already experience: fast, easy and reliable installation of binary packages. But by integrating with the system package manager (which is something that cannot be done on those other operating systems) we can fully and completely integrate it with the underlying system. External libraries are resolved as shared libraries and handled by the system package manager. This offers fully automatic installation both at the initial installation and at all subsequent upgrades. R users just say, e.g., install.packages("sf") and spatial libraries proj, gdal, geotiff (as well as several others) are automatically installed as dependencies in the correct versions. And they remain installed along with sf as the system manager now knows of the dependency.
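
As a small illustration (assuming the usual r-cran-* naming of the CRAN binaries), the package then shows up in the system package manager like any other Debian/Ubuntu package:

# after install.packages("sf") inside R on an r2u-enabled system
apt policy r-cran-sf
dpkg -s r-cran-sf | head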

Work on r2u began as a quick weekend experiment in March 2022, and by May 4 a first release was marked in the NEWS file after a few brave alpha testers kicked tires quite happily. This makes today the third anniversary of that first release, and marks a good time to review where we are. This short post does this, and stresses three aspects: overall usage, current versions, and new developments.

Steadily Growing Usage at 42 Million Packages Shipped

r2u ships from two sites. Its main repository is at the University of Illinois campus, providing ample and heavily redundant bandwidth. We remain very grateful for the sponsorship from Atlas. It also still ships from my own server, though that may be discontinued or could be spotty as it is on retail fiber connectivity. As we have access to both sets of server logs, we can tabulate and chart usage. As of yesterday, total downloads were north of 42 million with current weekly averages around 500 thousand. These are quite staggering numbers for what started as a small hobby project, and are quite humbling.

Usage is driven by deployment in continuous integration (as for example the Ubuntu-use at GitHub makes this both an easy and obvious choice), cloud computing (as it is easy to spin up Ubuntu instances, it is as easy to add r2u via four simple commands or one short script), explorative use (for example on Google Colab) or of course in general laptop, desktop, or server settings.

Current Versions

Since r2u began, we added two Ubuntu LTS releases, three annual R releases as well as multiple BioConductor releases. BioConductor support is on a ‘best-efforts’ basis motivated primarily to support the CRAN packages having dependencies. It has grown to around 500 packages and includes the top-250 by usage.

Right now, current versions R 4.5.0 and BioConductor 3.21, both released last month, are supported.

New Development: arm64

A recent change is the support of the arm64 platform. As discussed in the introductory post, it is a popular and increasingly common CPU choice seen anywhere from the Raspberry Pi 5 and its Cortex CPU to in-house cloud computing platforms (called, respectively, Graviton at AWS and Axion at GCP), general server use via Ampere CPUs, Cortex-based laptops that are starting to appear, and, last but not least, the popular M1 to M4-based macOS machines. (For macOS, one key appeal is ‘lighter-weight’ Docker use, as these M1 to M4 CPUs can run arm64-based containers without a translation layer, making it an attractive choice.)

This is currently supported only for the ‘noble’ aka 24.04 release. GitHub Actions, where we compile these packages, now also supports ‘jammy’ aka 22.04 but it may not be worth it to expand there as the current ‘latest’ release is available. We have not yet added BioConductor support but may do so. Drop us a line (maybe via an issue) if this is of interest.

With the provision of arm64 binaries, we also started to make heavier use of GitHub Actions. The BioConductor 3.21 release binaries were also created there. This makes the provision more transparent, as the configuration repo as well as the two builder repos (arm64, bioc) are public, as is of course the main r2u repo.

Summing Up

This short post summarised the current state of r2u along with some recent news. If you are curious, head over to the r2u site and try it, for example in a rocker/r2u container.
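
For example, a quick way to kick the tires without touching an existing system (assuming Docker is available; the package picked is arbitrary):

docker run --rm -it rocker/r2u R -q -e 'install.packages("sf")'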

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianColin Watson: Free software activity in April 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Request for OpenSSH debugging help

Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.
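
For anyone willing to have a go, a rough sketch of the bisection workflow (the tag names below are assumptions; check the upstream openssh-portable repository for the exact ones):

git clone https://github.com/openssh/openssh-portable.git
cd openssh-portable
git bisect start
git bisect bad V_10_0_P2    # assumed tag for the 10.0p2 release
git bisect good V_9_9_P2    # assumed tag for the 9.9p2 release
# build, install into a throwaway prefix, try to reproduce the sshd crash,
# then mark each revision with `git bisect good` or `git bisect bad`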

OpenSSH

I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

I fixed a couple of packaging bugs:

I reviewed and merged several packaging contributions from others:

dput-ng

Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

We also ran into dput-ng: --override doesn’t override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

man-db

I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

debmirror

I fixed one security bug: debmirror prints credentials with --progress.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.20-1 (issuing BSA-123)
  • python-django-pgtrigger to 4.13.3

I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

I fixed or helped to fix various other build/test failures:

I packaged python-typing-inspection, needed for a new upstream version of pydantic.

I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

I fixed other odds and ends of bugs:

Science team

I fixed various build/test failures:

Cryptogram US as a Surveillance State

Two essays were just published on DOGE’s data collection and aggregation, and how it ends with a modern surveillance state.

It’s good to see this finally being talked about.

EDITED TO ADD (5/3): Here’s a free link to that first essay.

Planet DebianRuss Allbery: Review: The Book That Held Her Heart

Review: The Book That Held Her Heart, by Mark Lawrence

Series: Library Trilogy #3
Publisher: ACE
Copyright: 2025
ISBN: 0-593-43799-3
Format: Kindle
Pages: 367

The Book That Held Her Heart is the third and final book of the Library fantasy trilogy and a direct sequel to The Book That Broke the World. Lawrence provides a much-needed summary of the previous volumes at the start of this book (thank you to every author who does this!), but I was still struggling a bit with the blizzard of character names. I recommend reading this series entry in relatively close proximity to the other two.

At the end of the previous book, and following some rather horrific violence, the cast split into four groups. Three of those are pursuing different resolutions to the moral problem of the Library's existence. The fourth group opens the book still stuck with the series villains, who were responsible for the over-the-top morality that undermined my enjoyment of The Book That Broke the World. Lawrence follows all four groups in interwoven chapters, maintaining that complex structure through most of this book. I thought this was a questionable structural decision that made this book feel choppy, disconnected, and unnecessarily confusing.

The larger problem, though, is that this is the payoff book, the book where we find out if Lawrence is equal to the tricky ethical questions he's raised and the world-building masterpiece that The Book That Wouldn't Burn kicked off. The answer, unfortunately, is "not really." This is not a total failure; there are some excellent set pieces and world-building twists, and the characters remain likable and enjoyable to read about (although the regrettable sidelining of Livira continues). But the grand finale is weirdly conservative and not particularly grand, and Lawrence's answer to the moral questions he raised is cliched and wholly unsatisfying.

I was really hoping Lawrence was going somewhere more interesting than "Nazis bad." I am entirely sympathetic to this moral position, but so is every other likely reader of this series, and we all know how that story goes. What a waste of a compelling setup.

Sadly, "Nazis bad" isn't even a metaphor for the black-and-white morality that Lawrence first introduced at the end of the previous book. It's a literal description of the main moral thrust of this book. Lawrence introduces yet another new character and timeline so that he can write about thinly-disguised Nazis persecuting even more thinly-disguised Jews, and this conflict is roughly half this book. It's also integral to the ending, which uses obvious, stock secular sainthood as a sort of trump card to resolve ideological conflicts at the heart of the series.

This is one of the things I was worried about after I read the short stories that Lawrence published between the volumes of this series. All of them were thuddingly trite, which did not make me optimistic that Lawrence would find a sufficiently interesting answer to his moral trilemma to satisfy the high expectations created by the build-up. That is, I am sad to report, precisely the failure mode of this book. The resolution of the moral question of the series is arguably radical within the context of the prior world-building, but in a way that effectively reduces it to the boring, small-c conservative bromides of everyday reality. This is precisely the opposite of why I read fantasy, and I did not find Lawrence's arguments for it at all convincing. Neither, I think, did Lawrence, given that the critical debate takes place off camera so that he could avoid having to present the argument.

This is, unfortunately, another series where the author's reach exceeded their grasp. The world-building of The Book That Wouldn't Burn is a masterpiece that created one of the most original and compelling settings that I have read in fantasy for a long time, but unfortunately Lawrence did not have an equally original plan for how to use the setting. This is a common problem and I'm not going to judge it too harshly; it's much harder to end a series than it is to start one. I thought the occasional flashes of brilliance were worth the journey, and they continue into this book with some elaborations on the Library's mythic structure that are going to stick in my mind.

You can sense the story slipping away from the hoped-for conclusion as you read, though. The story shifts more and more away from the setting and the world-building and towards character stories, and while Lawrence's characters are fine, they're not that novel. I am happy to read about Clovis and Arpix, but I can read variations of that story in a lot of places. Livira never recovers her dynamism and drive from the first book, and there is much less beneath Yute's thoughtful calm than I was hoping to find. I think Lawrence knows that the story was not entirely working because the narrative voice becomes more strident as the morality becomes less interesting. I know of only one fantasy author who can make this type of overbearing and freighted narrative style work, and Lawrence is sadly not Guy Gavriel Kay.

This is not a bad book. It is an enjoyable adventure story on its own terms, with some moments of real beauty and awe and a handful of memorable characters, somewhat undermined by a painfully obvious and unoriginal moral frame. It's only a disappointment in the context of what came before it, and it is far from the first series conclusion that doesn't quite live up to the earlier volumes. I'm glad that I read it, and the series as a whole, and I do appreciate that Lawrence brought the whole series to a firm and at least somewhat satisfying conclusion in the promised number of volumes. But I do wish the series as a whole had been as special as the first book.

Rating: 6 out of 10

365 TomorrowsThe Robot

Author: Kelleigh Cram I told my daughter I didn’t want the dang thing but you know kids; they understand technology and we are just senile. The robot folds my clothes, which I must admit is nice. The shirts are stacked so precisely I just take whichever one is on top, not wanting to mess up […]

The post The Robot appeared first on 365tomorrows.

,

David BrinThis class war has no memory – and that could kill us

Nathan Gardels – editor of Noema magazine – offers in the latest issue a glimpse of the latest philosopher with a theory of history, or historiography. One that I'll briefly critique soon, as it relates much to today's topic. But first...

In a previous issue, Gardels offered valuable and wise insights about America’s rising cultural divide, leading to what seems to be a rancorous illiberal democracy.  

Any glance at the recent electoral stats shows that while race & gender remain important issues, they did not affect outcomes as much as a deepening polar divide between America’s social castes, especially the less-educated vs. more-educated. 


Although he does not refer directly to Marx, he is talking about a schism that my parents understood... between the advanced proletariat and the ignorant lumpen-proletariat.


Hey, this is not another of my finger-wagging lectures, urging you all to at least understand some basic patterns that the WWII generation knew very well, when they designed the modern world. Still, you could start with Nathan's essay...


...though alas, in focusing on that divide, I'm afraid Nathan accepts an insidious premise. Recall that there is a third party to this neo-Marxian class struggle, that so many describe as simply polar. 

 


== Start by stepping way back == 


There’s a big context, rooted in basic biology. Nearly all species have their social patterns warped by male reproductive strategies, mostly by males applying power against competing males.  


(Regrettable? Sure. Then let's over-rule Nature by becoming better. But that starts by looking at and understanding evolution.)


Among humans, this manifested for much more than 6000 years as feudal dominance by local gangs, then aristocracies, and then kings intent upon one central goal -- to ensure that their sons would inherit power.


Looking across all that time, till the near-present, I invite you to find any exceptions among societies with agriculture. That is, other than Periclean Athens and (maybe) da Vinci's Florence. This pattern - dominating nearly all continents and 99% of cultures across those 60 centuries - is a dismal litany of malgovernance called 'history'. 

Alas, large-scale history is never (and I mean never) discussed these days, even though variants of feudalism make up the entire backdrop -- the default human condition -- against which our recent Enlightenment has been a miraculous - but always threatened - experimental alternative. 


The secret sauce of the Enlightenment, described by Adam Smith and established (at first crudely) by the U.S. Founders, consists of flattening the caste-order. Breaking up power into rival elites -- siccing them against each other in fair competition, and basing success far less on inheritance than other traits. That, plus the empowerment of new players... an educated meritocracy in science, commerce, civil service and even the military. 


This achievement did augment with each generation – way too slowly, but incrementally – till the World War II Greatest Generation’s GI Bill and massive universities and then desegregation took it skyward, making America truly the titan of all ages and eras.


Karl Marx - whose past-oriented appraisals of class conflict were brilliant - proved to be a bitter, unimaginative dope when it came to projecting forward the rise of an educated middle class... 


…which was the great innovation of the Roosevelteans, inviting the working classes into a growing and thriving middle class...

... an unexpected move that consigned Marx to the dustbin for 80 years... 

... till his recent resurrection all around the globe, for reasons given below.



== There are three classes tussling here, not two ==


Which brings us to where Nathan Gardels’s missive is just plain wrong, alas. Accepting a line of propaganda that is now universally pervasive, he asserts that two – and only two – social classes are involved in a vast, socially antagonistic and polar struggle.


Are the lower middle classes (lumpenproletariat) currently at war against 'snooty fact elites'?  Sure, they are!  But so many post-mortems of the recent U.S. election blame the fact-professionals themselves, for behaving in patronizing ways toward working stiffs. 


Meanwhile, such commentaries leave out entirely any mention of a 3rd set of players...


... the oligarchs, hedge lords, inheritance brats, sheiks and “ex”-commissars who have united in common cause. Those who stand most to benefit from dissonance within the bourgeoisie! 


Elites who have been the chief beneficiaries of the last 40 years of 'supply side' and other tax grifts. Whose wealth disparities long ago surpassed those preceding the French Revolution. Many of whom are building lavish ‘prepper bunkers.' And who now see just one power center blocking their path to complete restoration of the default human system – feudal rule by inherited privilege. 


(I portrayed this - in detail - in Existence.)


That obstacle to feudal restoration? The fact professionals, whose use of science, plus rule-of-law and universities – plus uplift of poor children - keeps the social flatness prescription of Adam Smith alive. 


And hence, those elites lavishly subsidize a world campaign to rile up lumpenprol resentment against science, law, medicine, civil servants... and yes, now the FBI and Intel and military officer corps. 


A campaign that's been so successful that the core fact of this recent election – the way all of the adults in the first Trump Administration have denounced him – is portrayed as a feature by today’s Republicans, rather than a fault. And yes, that is why none of the new Trump Appointees will ever be adults-in-the-room.



== The ultimate, ironic revival of Marx, by those who should fear him most ==


Seriously. You can't see this incitement campaign in every evening's tirades, on Fox? Or spuming across social media, where ‘drinking the tears of know-it-alls’ is the common MAGA victory howl? 


A hate campaign against snobby professionals that is vastly more intensive than any snide references to race or gender? 


Try actually counting the minutes spent exploiting the natural American SoA reflex (Suspicion of Authority) that I discuss in Vivid Tomorrows.  A reflex which could become dangerous to oligarchs, if ever it turned on them! 


And hence it must be diverted into rage and all-out war vs. all fact-using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.


To be clear, there are some professionals who have behaved stupidly, looking down their noses at the lower middle class.


Just as there are poor folks who appreciate their own university-educated kids, instead of resenting them. 


And yes, there are scions of inherited wealth or billionaires (we know more than a couple!) who are smart and decent enough to side with an Enlightenment that's been very good to them.


Alas, the agitprop campaign that I described here has been brilliantly successful, including massively popular cultural works extolling feudalism as the natural human form of governance. (e.g. Tolkien, Dune, Star Wars, Game of Thrones... and do you seriously need more examples in order to realize that it's deliberate?)


They aren’t wrong! Feudalism is the ‘natural’ form of human governance. 


In fact, its near universality may be a top theory to explain the Fermi Paradox! 


… A trap/filter that prevents any race from rising to the stars.



== Would I rather not have been right? ==


One of you pointed out that "Paul Krugman's post today echoes Dr B's warnings about MAGA vs Science."


"But why do our new rulers want to destroy science in America? Sadly, the answer is obvious: Science has a tendency to tell you things you may not want to hear. ....
And one thing we know about MAGA types is that they are determined to hold on to their prejudices. If science conflicts with those prejudices, they don’t want to know, and they don’t want anyone else to know either."


The smartest current acolyte of Hari Seldon. Except maybe for Robert Reich. And still, they don't see the big picture.



== Stop giving the first-estate a free pass ==


And so, I conclude. 


Whenever you find yourself discussing class war between the lower proletariats and snooty bourgeoisie, remember that the nomenclature – so strange and archaic-sounding, today – was quite familiar to our parents and grandparents.  


Moreover, it included a third caste! The almost perpetual winners, across 600 decades. The bane of fair competition that was diagnosed by both Adam Smith and Karl Marx. And one that's deeply suicidal, as today's moguls - masturbating to the chants of flatterers - seem determined to repeat every mistake that led to tumbrels and guillotines.


With some exceptions – those few who are truly noble of mind and heart – they are right now busily resurrecting every Marxian scenario from the grave… 

 … or from torpor where they had been cast by the Roosevelteans. 


And the rich fools are doing so by fomenting longstanding cultural grudges for – or against – modernity. The same modernity that gave them everything they have and that laid all of their golden eggs.


If anything proves the inherent stupidity of that caste – (most of them) - it is their ill-education about Marx! And what he will mean to new generations, if the Enlightenment cannot be recharged and restored enough to put old Karl back to sleep.



Planet DebianRussell Coker: Silly Job Titles

Many years ago I was on a programming project porting code from OS/2 1.x to NT. When I was there they suddenly decided to make a database of all people and get job titles for everyone – apparently the position description used when advertising the jobs wasn’t sufficient. When I got given a clipboard with a form to write my details I looked at what everyone else had done. It was a heap of ridiculous propaganda with everyone trying to put in synonyms for “senior” or “skillful” and listing things that they were allegedly in charge of. There were even some people trying to create impressive titles for their managers to try and suck up.

I chose the job title “coder” as the shortest and most accurate description of what I was doing. I had to confirm that yes I really did want to put a one word title and not a paragraph of frippery. Part of my intent was to mock the ridiculously long job titles used by others but I don’t think anyone realised that.

I was reminded of that company when watching a video of a Trump cabinet meeting where everyone had to tell Trump how great he is. I think that a programmer who wants to be known as a “Principal Solutions Architect of Advanced Algorithmic Systems and Digital Innovation Strategy” (suggested by ChatGPT because I can’t write such ridiculous things) is showing a Trump level of lack of self esteem.

When job titles are discussed there’s always someone who will say “what if my title isn’t impressive enough and I don’t get a pay rise”. If a company bases salaries on how impressive job titles are and not on whether people actually do good work then it’s a very dysfunctional workplace. But dysfunctional companies aren’t uncommon so it’s something you might reasonably have to do. In the company in question I could have described my work as “lead debugger” as I ended up doing most of the debugging on that project (as on many programming projects). The title “lead debugger” accurately described a significant part of my work and it’s work that is essential to project completion.

What do you think are the worst job titles?

365 TomorrowsRecursive Dynamic Programming

Author: R. J. Erbacher He turned the corner at a run and slammed his shoulder into the white partition leaving a smear of sweat and blood but kept going. His bare feet slapped franticly on the tile-like floor as he sprinted down the hall. He wished he could wake, but he knew this wasn’t a […]

The post Recursive Dynamic Programming appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Paper Soldiers

Review: Paper Soldiers, by Saleha Mohsin

Publisher: Portfolio
Copyright: 2024
ISBN: 0-593-53912-5
Format: Kindle
Pages: 250

The subtitle of Paper Soldiers is "How the Weaponization of the Dollar Changed the World Order," which may give you the impression that this book is about US use of the dollar system for political purposes such as sanctions. Do not be fooled like I was; this subtitle is, at best, deceptive. Coverage of the weaponization of the dollar is superficial and limited to a few chapters. This book is, instead, a history of the strong dollar policy told via a collection of hagiographies of US Treasury Secretaries and written with all of the skeptical cynicism of a poleaxed fawn.

There is going to be some grumbling about the state of journalism in this review.

Per the author's note, Saleha Mohsin is the Bloomberg News beat reporter for the US Department of the Treasury. That is, sadly, exactly what this book reads like: routine beat reporting. Mohsin asked current and former Treasury officials what they were thinking at various points in history and then wrote down their answers without, so far as I can tell, considering any contradictory evidence or wondering whether they were telling the truth. Paper Soldiers does contain extensive notes (those plus the index fill about forty pages), so I guess you could do the cross-checking yourself, although apparently most of the interviews for this book were "on background" and are therefore unattributed. (Is this weird? I feel like this is weird.) Mohsin adds a bit of utterly conventional and uncritical economic framing and casts the whole project in the sort of slightly breathless and dramatized prose style that infests routine news stories in the US.

I find this style of book unbelievably frustrating because it represents such a wasted opportunity. To me, the point of book-length journalism is precisely to not write in this style. When you're trying to crank out two or three articles a week covering current events, I understand why there isn't always space or time to go deep into background, skepticism, and contrary opinions. But when you expand that material into a book, surely the whole point is to take the time to do some real reporting. Dig into what people told you, see if they're lying, talk to the people who disagree with them, question the conventional assumptions, and show your work on the page so that the reader is smarter after finishing your book than they were before they started. International political economics is not a sequence of objective facts. It's a set of decisions made in pursuit of economic and political theories that are disputed and arguable, and I think you owe the reader some sense of the argument and, ideally, some defensible position on the merits that is more than a transcription of your interviews.

This is... not that.

It's a power loop that the United States still enjoys today: trust in America's dollar (and its democratic government) allows for cheap debt financing, which buys health care built on the most advanced research and development and inventions like airplanes and the iPhone. All of this is propelled by free market innovation and the superpowered strength to keep the nation safe from foreign threats. That investment boosts the nation's economic, military, and technological prowess, making its economy (and the dollar) even more attractive.

Let me be precise about my criticism. I am not saying that every contention in the above excerpt is wrong. Some of them are probably correct; more of them are at least arguable. This book is strictly about the era after Bretton Woods, so using airplanes as an example invention is a bizarre choice, but sure, whatever, I get the point. My criticism is that paragraphs like this, as written in this book, are not introductions to deeper discussions that question or defend that model of economic and political power. They are simple assertions that stand entirely unsupported. Mohsin routinely writes paragraphs like the above as if they are self-evident, and then immediately moves on to the next anecdote about Treasury dollar policy.

Take, for example, the role of the US dollar as the world's reserve currency, which roughly means that most international transactions are conducted in dollars and numerous countries and organizations around the world hold large deposits in dollars instead of in their native currency. The conventional wisdom holds that this is a great boon to the US economy, but there are also substantive critiques and questions about that conventional wisdom. You would never know that from this book; Mohsin asserts the conventional wisdom about reserve currencies without so much as a hint that anyone might disagree.

For example, one common argument, repeated several times by Mohsin, is that the US can only get away with the amount of deficit spending and cheap borrowing that it does because the dollar is the world's reserve currency. Consider two other countries whose currencies are clearly not the international reserve currency: Japan and the United Kingdom. The current US debt to GDP ratio is about 125% and the current interest rate on US 10-year bonds is about 4.2%. The current Japanese debt to GDP ratio is about 260% and the current interest rate on Japanese 10-year bonds is about 1.2%. The current UK debt to GDP ratio is 160% and the current interest rate on UK 10-year bonds is 4.5%. Are you seeing the dramatic effects of the role of the dollar as reserve currency? Me either.

Again, I am not saying that this is a decisive counter-argument. I am not an economist; I'm just some random guy on the Internet who finds macroeconomics interesting and reads a few newsletters. I know the Japanese bond market is unusual in ways I'm not accounting for. There may well be compelling arguments for why reserve currency status matters immensely for US borrowing capacity. My point is not that Mohsin is wrong; my point is that you have to convince me and she doesn't even try.

Nowhere in this book is a serious effort to view conventional wisdom with skepticism or confront it with opposing arguments. Instead, this book is full of blithe assertions that happen to support the narrative the author was fed by a bunch of former Treasury officials and does not appear to question in any way. I want books like this to increase my understanding of the world. To do that, they need to show me multiple sides of debates and teach me how to evaluate evidence, not simply reinforce a superficial conventional wisdom.

It doesn't help that whatever fact-checking process this book went through left some glaring errors. For example, on the Plaza Accord:

With their central banks working in concert, enough dollars were purchased on the open market to weaken the currency, making American goods more affordable for foreign buyers.

I don't know what happened after the Plaza Accord (I read books like this to find out!), but clearly it wasn't that. This is utter nonsense. Buying dollars on the open market would increase the value of the dollar, not weaken it; this is basic supply and demand that you learn in the first week of a college economics class. This is the type of error that makes me question all the other claims in the book that I can't easily check.

Mohsin does offer a more credible explanation of the importance of a reserve currency late in the book, although it's not clear to me that she realizes it: The widespread use of the US dollar gives US government sanctions vast international reach, allowing the US to punish and coerce its enemies through the threat of denying them access to the international financial system. Now we're getting somewhere! This is a more believable argument than a small and possibly imaginary effect on government borrowing costs. It is clear why a bellicose US government, particularly one led by advocates of a unitary executive theory that elevates the US president to a status of near-emperor, want to turn the dollar into a weapon of international control. It's much less obvious how comfortable the rest of the world should be with that concentration of power.

This would be a fascinating topic for a journalistic non-fiction book. Some reporter should dive deep into the mechanics of sanctions and ask serious questions about the moral, practical, and diplomatic consequences of this aggressive wielding of US power. One could give it a title like Paper Soldiers that reflected the use of banks and paper currency as foot soldiers enforcing imperious dictates on the rest of the world. Alas, apart from a brief section in which the US scared other countries away from questioning the dollar, Mohsin does not tug at this thread. Maybe someone should write that book someday.

As you will have gathered by now, I think this is a bad book and I do not recommend that you read it. Its worst flaw is one that it shares with far too much mainstream US print and TV journalism: the utter credulity of the author. I have the old-fashioned belief that a journalist should be more than a transcriptionist for powerful people. They should be skeptical, they should assume public figures may be lying, they should look for ulterior motives, and they should try to bring the reader closer to some objective truths about the world, wherever they may lie.

I have no solution for this degradation of journalism. I'm not even sure that it's a change. There were always reporters eager to transcribe the voice of power into the newspaper, and if we remember the history of journalism differently, that may be because we have elevated the rare exceptions and forgotten the average. But after watching too many journalists I once respected start parroting every piece of nonsense someone tells them, from NFTs to UFOs to the existential threat of AI, I've concluded that the least I can do as a reader is to stop rewarding reporters who cannot approach powerful subjects with skepticism, suspicion, and critical research.

I failed in this case, but perhaps I can serve as a warning to others.

Rating: 3 out of 10

,

Planet DebianJonathan Dowland: Korg Minilogue XD

I didn't buy the Arturia Microfreak or the Behringer Model-D; I bought a Korg Minilogue XD.

Korg Minilogue XD, and Zoom R8

I wanted an all-in-one unit which meant a built-in keyboard. I was keen on analogue oscillators, partly for the sound, but mostly to ensure that most of the controls were immediately accessible. The Minilogue-XD has two analogue oscillators and an analogue filter. It also has some useful, pure digital stuff: post-effects (chorus, flanger, echo, etc.); and a third, digital oscillator.

The digital oscillator is programmable. There's an SDK, shared between the Minilogue-XD and some other Korg synths (at least the Prologue and NTS-1). There's a cottage industry of independent musicians writing and selling digital patches, e.g. STRING User Oscillator. Here's an example of a drone programmed using the SDK for the NTS-1:

Eventually I expect to have fun exploring the SDK, but for now I'm keeping it firmly away from computers (hence the Zoom R8 multitrack recorder in the above image: more on that in a future blog post). The Korg has been gathering dust whilst I was writing up, but now I hope to find some time to play.

Planet DebianDaniel Lange: Compiling and installing the Gentoo Linux kernel on emerge without genkernel (part 2)

The first install of a Gentoo kernel needs to be somewhat manual if you want to optimize the kernel for the (virtual) system it boots on.

In part 1 I laid out how to improve the subsequent emerges of sys-kernel/gentoo-sources with a small drop in script to build the kernel as part of the ebuild.

Since end of last year Gentoo also supports a less manual way of emerging a kernel:

The following kernel blends are available:

  • sys-kernel/gentoo-kernel (the Gentoo kernel you can configure and compile locally - typically this is what you want if you run Gentoo)
  • sys-kernel/gentoo-kernel-bin (a pre-compiled Gentoo kernel similar to what genkernel would get you)
  • sys-kernel/vanilla-kernel (the upstream Linux kernel, again configurable and locally compiled)

So a quick walk-through for the gentoo-kernel variant:

1. Set up the correct package USE flags

We do not want an initrd and we want our own config to be re-used so:

echo "sys-kernel/gentoo-kernel -initramfs savedconfig" >> /etc/portage/package.use/gentoo-kernel

2. Preseed the saved config

The current kernel config needs to be saved as the initial savedconfig so it is found and applied for our emerge below:

mkdir -p /etc/portage/savedconfig/sys-kernel
cp -n "/usr/src/linux-$(uname -r)/.config" /etc/portage/savedconfig/sys-kernel/gentoo-kernel

3. Emerge the new kernel

emerge sys-kernel/gentoo-kernel

4. Update grub and reboot

Unfortunately this ebuild does not update grub, so we have to run grub-mkconfig manually. This can again be automated via a post_pkg_postinst() script. See step 7 below.

But for now, let's do it manually:

grub-mkconfig -o /boot/grub/grub.cfg
# All fine? Time to reboot the machine:
reboot

5. (Optional) Prepare for the next kernel build

Run etc-update and merge the new kernel config entries into your savedconfig.

Screenshot of etc-update

The kernel should auto-build once new versions become available via portage.

Again the etc-update can be automated if you feel that is sufficiently safe to do in your environment. See step 7 below for details.

6. (Optional) Remove the old kernel sources

If you want to switch from the method based on gentoo-sources to the gentoo-kernel one, you can remove the kernel sources:

emerge -C "=sys-kernel/gentoo-sources-5*"

Be sure to update the /usr/src/linux symlink to the new kernel sources directory from gentoo-kernel, e.g.:

rm /usr/src/linux; ln -s "/usr/src/linux-$(uname -r)" /usr/src/linux

This may be a good time for a bit more house-keeping: Clean up a bit in /usr/src/ to remove old build artefacts, /boot/ to remove old kernels and /lib/modules/ to get rid of old kernel modules.

7. (Optional) Further automate the ebuild

In part 1 we automated the kernel compile, install and a bit more via a helper function for post_pkg_postinst().

We can do the same for what is (currently) missing from the gentoo-kernel ebuilds:

Create /etc/portage/env/sys-kernel/gentoo-kernel with the following:

post_pkg_postinst() {
        etc-update --automode -5 /etc/portage/savedconfig/sys-kernel
        grub-mkconfig -o /boot/grub/grub.cfg
}

The upside of gentoo-kernel over gentoo-sources is that you can put "config override files" in /etc/kernel/config.d/. That way you theoretically profit from config improvements made by the upstream developers. See the Gentoo distribution kernel documentation for a sample snippet. I am fine with savedconfig for now but it is nice that Gentoo provides the flexibility to support both approaches.
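As a minimal sketch of such an override file (the file name and the options below are illustrative examples of mine, not taken from the Gentoo documentation; check the linked docs for the exact rules your gentoo-kernel version applies):

# /etc/kernel/config.d/99-local.config
# plain kernel .config syntax; list only the options you want to override
CONFIG_LOCALVERSION="-mybox"
# CONFIG_DEBUG_INFO is not set

Such fragments are merged on top of the distribution default configuration when the ebuild builds the kernel, so they are mainly useful if you do not pin a complete config of your own via savedconfig.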

Planet DebianDaniel Lange: Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)

Netatalk 3.1.9 has been released with two interesting fixes / amendments:

  • FIX: afpd: fix "admin group" option
  • NEW: afpd: new options "force user" and "force group"

Here are the full release notes for 3.1.9 for your reading pleasure.

Due to upstream now differentiating between SysVinit and systemd packages I've followed that for simplicity's sake and built libgcrypt-only builds. If you need the openssl-based tools continue to use the 3.1.8 openssl build until you have finished your migration to a safer password storage.

Warning: Be sure to read the original blog post if you are new to Netatalk3 on Debian Jessie!
You'll get nowhere if you install the .debs below and don't know about the upgrade path. So RTFA.

Now with that out of the way:

Continue reading "Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)"

Planet DebianDaniel Lange: Creating iPhone/iPod/iPad notes from the shell

I found a very nice script to create Notes on the iPhone from the command line by hossman over at Perlmonks.

For some weird reason Perlmonks does not allow me to reply with amendments even after I created an account. I can "preview" a reply at Perlmonks but after "create" I get "Permission Denied". Duh. vroom, if you want screenshots, contact me on IRC :-).

As I wrote everything up for the Perlmonks reply anyways, I'll post it here instead.

Against hossman's version 32 from 2011-02-22 I changed the following:

  • removed .pl from filename and documentation
  • added --list to list existing notes
  • added --hosteurope for Hosteurope mail account preferences and with it a sample how to add username and password into the script for unattended use
  • made the "Notes" folder the default (so -f Notes becomes obsolete)
  • added some UTF-8 conversions to make Umlauts work better (this is a mess in perl, see Jeremy Zawodny's writeup and Ivan Kurmanov's blog entry for some further solutions). Please try combinations of utf8::encode and ::decode, binmode utf8 for STDIN and/or STDOUT and the other hints from these linked blog entries in your local setup to get Umlauts and other non-7bit ASCII characters working. Be patient. There's more than one way to do it :-).

I /msg'd hossman the URL of this blog entry.

Continue reading "Creating iPhone/iPod/iPad notes from the shell"

Planet DebianDaniel Lange: The Stallman wars

So, 2021 isn't bad enough yet, but don't despair, people are working to fix that:

Welcome to the Stallman wars

Team Cancel: https://rms-open-letter.github.io/ (repo)

Team Support: https://rms-support-letter.github.io/ (repo)

Final stats are:

Team Cancel:  3019 signers from 1415 individual commit authors
Team Support: 6853 signers from 5418 individual commit authors

Git shortlog (Top 10):

rms_cancel.git (Last update: 2021-08-16 00:11:15 (UTC))
  1230  Neil McGovern
   251  Joan Touzet
    99  Elana Hashman
    73  Molly de Blanc
    36  Shauna
    19  Juke
    18  Stefano Zacchiroli
    17  Alexey Mirages
    16  Devin Halladay
    14  Nader Jafari

rms_support.git (Last update: 2021-09-29 07:14:39 (UTC))
  1821  shenlebantongying
  1585  nukeop
  1560  Ivanq
  1057  Victor
   880  Job Bautista
   123  nekonee
   101  Victor Gridnevsky
    41  Patrick Spek
    25  Borys Kabakov
    17  KIM Taeyeob

(data as of 2021-10-01)

Technical info:
Signers are counted from their "Signed / Individuals" sections. Commits are counted with git shortlog -s.
Team Cancel also has organizational signatures with Mozilla, Suse and X.Org being among the notable signatories. The 16 original signers of the Cancel petition are added in their count. Neil McGovern, Juke and shenlebantongying need .mailmap support as they have committed with different names.
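For anyone who wants to reproduce the commit tallies, here is a minimal sketch with plain git (the local directory names simply mirror the shortlog headings above; clone the two repositories linked at the top of this post into them first):

git -C rms_cancel.git shortlog -s -n | head -n 10
git -C rms_support.git shortlog -s -n | head -n 10

Without a .mailmap file, authors who committed under several names show up split across those names, as noted above.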

Further reading:

12.04.2021 Statements from the accused

18.04.2021 Debian General Resolution

The Debian General Resolution (GR) vote of the developers has concluded to not issue a public statement at all, see https://www.debian.org/vote/2021/vote_002#outcome for the results.

It is better to keep quiet and seem ignorant than to speak up and remove all doubt.

See Quote Investigator for the many people that rephrased these words over the centuries. They still need to be recalled more often as too many people in the FLOSS community have forgotten about that wisdom...

01.10.2021 Final stats

It seems enough dust has settled on this unfortunate episode of mob activity now. Hence I stopped the cronjob that updated the stats above regularly. Team Support has kept adding signatures all the time while Team Cancel gave up very soon after the FSF decided to stand with Mr. Stallman. So this battle was decided within two months. The stamina of the accused and determined support from some dissenting web devs trumped the orchestrated outrage of well known community figures and their publicity power this time. But history teaches us that does not mean the war is over. There will be a next opportunity to call for arms. And people will call. Unfortunately.

01.11.2024 Team Cancel is opening a new round; Team Support responds by exposing the author of "The Stallman report"

I hate to be right. Three years later than the above:

An anonymous member of team Cancel has published https://stallman-report.org/ [local pdf mirror, 504kB] to "justify our unqualified condemnation of Richard Stallman". It contains a detailed collection of quotes that are used to allege supporting (sexual) misconduct. The demand is again that Mr. Stallman "step[s] down from all positions at the FSF and the GNU project". Addressing him: "the scope and extent of your misconduct disqualifies you from formal positions of power within our community indefinitely".

Team Support has not issued a rebuttal (yet?) but has instead identified the anonymous author as Drew "sircmpwn" DeVault, a gifted software developer, but also a vocal and controversial figure in the Open Source / Free Software space. Ironically quite similar to Richard "rms" Stallman. Their piece is published at https://dmpwn.info/ [local pdf mirror, 929kB]. They also allege a proximity of Mr. DeVault to questionable "Lolita" anime preferences and societal positions in order to disqualify him.

Cryptogram Privacy for Agentic AI

Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.

In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)

I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access—and therefore trust—that rivals what we currently give our email provider, social network, or smartphone.

This Active Wallet is an example of an AI assistant. It’ll combine personal information about you, transactional data that you are a party to, and general information about the world. And use that to answer questions, make predictions, and ultimately act on your behalf. We have demos of this running right now. At least in its early stages. Making it work is going to require an extraordinary amount of trust in the system. This requires integrity. Which is why we’re building protections in from the beginning.

Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.

I like Visa’s approach because it’s an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. Here’s our announcement about its announcement:

This isn’t a new relationship—we’ve been working together for over two years. We’ve conducted a successful POC and now we’re standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of Visa’s suite of Intelligent Commerce APIs.

For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.

I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. “Wallet” is a good metaphor for personal data stores. I’m hoping this is another step towards adoption.

Planet DebianBen Hutchings: FOSS activity in April 2025

I also co-organised a Debian BSP (Bug-Squashing Party) last weekend, for which I will post a separate report later.

Planet DebianDaniel Lange: Cleaning a broken GnuPG (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never     usage: SC  
     trust: unknown     validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never     usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never     usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never     usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

User ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 signatures removed
User ID "Robert J. Hansen <rob@enigmail.net>": 49704 signatures removed
User ID "Robert J. Hansen <rob@hansen.engineering>": 49701 signatures removed

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never     usage: SC  
     trust: unknown     validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never     usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never     usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never     usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely useable 3835 byte file of the clean public key. If you supply a keyring instead of --no-default-keyring it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to clean all keyserver filesystems, and the message board that has accumulated on top of pgp key server keys, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.

Updates

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

   gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
   gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
   gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
   gpg: New import option "self-sigs-only".
   gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
   dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
   dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
   dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
   dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
   gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.
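For local testing, here is a small sketch of how the options from the changelog above can be exercised (broken_key.asc is the example key file from further up; self-sigs-only only exists from 2.2.17 on, so check the documentation of your exact gpg version before relying on these option names):

# import a flooded key but keep only its self-signatures:
gpg --import-options self-sigs-only,import-clean --import broken_key.asc

# and in ~/.gnupg/gpg.conf, to restore the pre-2.2.17 keyserver behaviour:
keyserver-options no-self-sigs-only,no-import-clean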

10.08.2019

Christopher Wellons (skeeto) has released his pgp-poisoner tool. It is a go program that can add thousands of malicious signatures to a GNUpg key per second. He comments "[pgp-poisoner is] proof that such attacks are very easy to pull off. It doesn't take a nation-state actor to break the PGP ecosystem, just one person and couple evenings studying RFC 4880. This system is not robust." He also hints at the next likely attack vector, public subkeys can be bound to a primary key of choice.

Worse Than FailureError'd: Charge Me

The lights are on here and the roof is intact and I'm grateful. Is anybody home? You decide.

Pharm fan Ian S. clucked "Perhaps they'll put those as dates on my headstone." If you're very lucky.


 

An anonymous reader blew the whistle on their child labor practices. "This institution exclusively uses drivers who aren't legally old enough to drive."


 

Greg A. grumbled "Glad that the important notice that there was no important notice was given such prominence in the official ACT web page." I have nothing more to add.


 

Regular reader Michael R. reported "I can confirm Hermes knows how to navigate the unknown."


 

Finally, faithful follower B.J.H. has been around here long enough to see this one over and over again. "For some reason people keep thinking zip codes are numbers just because they are composed of digits. When EPIC sent paper mail asking for money in December the envelope used a zip code of 1740 (and it was delivered). They solved leading zero issue by switching to base 36." Or it might just be base 26, no way to tell.


 


Planet DebianRuss Allbery: Review: Sixteen Ways to Defend a Walled City

Review: Sixteen Ways to Defend a Walled City, by K.J. Parker

Series: Siege #1
Publisher: Orbit
Copyright: April 2019
ISBN: 0-316-27080-6
Format: Kindle
Pages: 349

Sixteen Ways to Defend a Walled City is... hm, honestly, I'm not sure what the genre of this novel is. It is a story about medieval engineering and siege weapons in a Rome-inspired secondary world that so far as I can tell is not meant to match ours. There is not a hint of magic. It's not technically a fantasy, but it's marketed like a fantasy, and it's not historical fiction nor is it attempting to be alternate history. The most common description is a fantasy of logistics, so I guess I'll go with that, as long as you understand that the fantasy here is of the non-magical sort.

K.J. Parker is a pen name for Tom Holt.

Orhan is Colonel-in-Chief of the Engineers for the Robur empire, even though he's a milkface, not a blueskin like a proper Robur. (Both of those racial terms are quite offensive.) He started out as a slave, learned a trade, joined the navy as a shipwright, and worked his way up the ranks through luck and enemy action. He's canny, practical, highly respected by his men, happy to cheat and steal to get material for his projects and wages for his people, and just wants to build literal bridges. Nice, sturdy bridges that let people get from one place to another the short way.

When this book opens, Orhan is in Classis trying to requisition some rope. He is saved from discovery of his forged paperwork by pirates burning down the warehouse that held all of the rope, and then saved from the pirates by the sorts of coincidences that seem to happen to Orhan all the time. A few subsequent discoveries about what the pirates were after, and news of another unexpected attack on the empire, make Orhan nervous enough that he takes his men to do a job as far away from the City at the heart of the empire as possible. It's just his luck to return in time to find slaughtered troops and to have to sneak his men into a City already under siege.

Sixteen Ways to Defend a Walled City is told in the first person by Orhan, with an internal justification that the reader only discovers at the end of the book. That means your enjoyment of this book is going to depend a lot on how much you like Orhan's voice. This mostly worked for me; his voice is an odd combination of chatty, self-deprecating, and brusque, and it took a bit for me to get used to it, but I came around. This book is clearly competence porn — nearly all the fun of this book is seeing what desperate plan Orhan will come up with next — so it helps that Orhan does indeed come across as competent.

The part that did not work for me was the morality. You would think from the title that would be straightforward: The City is under siege, people want to capture it and kill everyone, Orhan is on the inside, and his job is to keep them out. That would have been the morality of simplistic military fiction, but most of the appeal was in watching the problem-solving anyway.

That's how the story starts, but then Parker started dropping hints of more complexity. Orhan is a disfavored minority and the Robur who run the empire are racist assholes, even though Orhan mostly gets along with the ones who work with him closely. Orhan says a few things that make the reader wonder whether the City warrants defending, and it becomes less clear whether Orhan's loyalties were as solid as they appeared to be. Parker then offers a few moral dilemmas and has Orhan not follow them in the expected directions, making me wonder where Parker was going with the morality of this story.

And then we find out that the answer is nowhere. Parker is going nowhere. None of that setup has a payoff, and the ending is deeply unsatisfying and arguably pointless.

I am not sure this is an objective analysis. This is one of those books where I would not be surprised to see someone else praise its realism. Orhan is in some ways a more likely figure than the typical hero of a book. He likes accomplishing things, he's a cheat and a liar when that serves his purposes, he's loyal to the people he considers friends in a way that often doesn't involve consulting them about what they want, and he makes decisions mostly on vibes and stubbornness. Both his cynicism and his idealism are different types of masks; beneath both, he's an incoherent muddle. You could argue that we're all that sort of muddle, deep down, and the consistent idealists are the unrealistic (and frightening) ones, and I think Parker may be attempting exactly that argument. I know some readers like this sort of fallibly human incoherence.

But wow did I ever loathe this ending because I was not reading this book for a realistic psychological profile of an average guy. I was here for the competence porn, for the fantasy of logistics, for the experience of watching someone have a plan and get shit done. Apparently that extends to needing him to be competent at morality as well, or at least think about it as hard as he thinks about siege weapons.

One of the reasons why I am primarily a genre reader is that I don't read books for depressing psychological profiles. There are enough of those in the news. I read books to spend some time in a world better than mine, where things work out the way that they are supposed to, or at least in a way that's satisfying.

The other place where this book interfered with my vibes is that it's about a war, and a lot of Orhan's projects are finding more efficient ways to kill people. Parker takes a "war is hell" perspective, and Orhan gets deeply upset at the graphic sights of mangled human bodies that are the frequent results of his plans. I feel weird complaining about this because yes, it's good to be aware of the horrific things that we do to other people in wars, but man, I just wanted to watch some effective project management. I want to enjoy unexpected lateral thinking, appreciate the friendly psychological manipulation involved in getting a project to deliver on deadline, and watch someone solve logistical problems. Battlefields provide an endless supply of interesting challenges, but then Parker feels compelled to linger on the brutal consequences of Orhan's ideas and now I'm depressed and sickened rather than enjoying myself.

I really wanted to like this book, and for a lot of the book I did, but that ending was a bottomless pit that sucked away all my enjoyment and retroactively made the rest of the book feel worse. I so wanted Parker to be going somewhere clever and surprising, and the disappointment when none of that happened was intense. This is probably an excessively negative reaction, and I will not be surprised when other people get along with this book better than I did, but not only will I not be recommending it, I'm now rather dubious about reading any more Parker.

Followed by How to Rule an Empire and Get Away With It.

Rating: 5 out of 10

365 TomorrowsAdvanced Entry Level Devices

Author: David C. Nutt My team assembled on the roof of factory near Prahova, Romania. Our objective was the next building over. Non-descript, a gray cube with the latest security measures at all entrance points, to include the heavily tinted sky lights. That’s why we were going to saw a hole in the roof. Repel […]

The post Advanced Entry Level Devices appeared first on 365tomorrows.

Krebs on SecurityxAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

,

Planet DebianIan Jackson: Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities, that we shouldn’t be doing “politics”, and instead should just focus on technology.

But that’s impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I’m talking about small-p politics

In this article I’m using “politics” in the very wide sense: us humans managing our disagreements with each other.

I’m not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals; or how it’s impossible to be neutral because choosing not to take a stand is itself to take a stand.

Those issues are all are important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today.

Today I’m talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors.

Humans don’t always agree about everything. This is natural. Indeed, it’s healthy: to write the best code, we need a wide range of knowledge and experience.

When we can’t come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone.

Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed.

This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus.

When consensus can’t be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation.

If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win.

This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don’t see eye to eye.

In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system’s legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least not obstruct) whatever the decision is, and hopefully live with it and stay around.

That means we need to practice our governance processes. We can’t just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we’ll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.

So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that.

First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it’s quite boring.

People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn’t reached, the committee, or elected leader, makes a decision.

Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons.

Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome.

Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By making decisions I mean exercising their authority to rule on an actual disagreement: one that wasn’t resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It’s not governance if we’re advising or cajoling: in that case, we’re back to demanding consensus. Governance is necessary precisely when consensus is not achieved.

If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted.

Otherwise, when we need to overrule, we’ll find that we can’t, because we lack the collective practice.

To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants’ status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable.

Ultimately, it means sometimes overruling someone’s technical decision. As programmers and maintainers we naturally see how this erodes our autonomy.

But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer’s bad decisions can cause problems for many of the rest of us. We ask, exasperated, “why won’t they just do the right thing”. This is futile. People have never “just”ed and they’re not going to start “just”ing now. So often the boot is on the other foot.

More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!)

Governance mechanisms are the answer.

(No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions — retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements.

If we can convert the question from “how will the software always behave” into merely “what should the default be”, we can often save ourselves a lot of drama.

So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them.

There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling with software — even crusty or buggy software — is a lot more fun than having unpleasant arguments.

But don’t do decisionmaking like a corporation

Many programmers’ experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example.

They typically don’t have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations’ goals are often bad.

You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable — typically the effects of their tenure are only properly felt well after they’ve left to mess up somewhere else.

We should select our leaders more wisely, and base decisions on substance.

If you won’t do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics.

You can opt out of learning how to do politics generally, and opt out of understanding your project’s governance structures. You can opt out of making judgements about disputed questions, and tell yourself “there’s merit on both sides”.

You can hate politicians indiscriminately, and criticise anyone you see doing politics.

If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You’re tacitly supporting the existing power bases. You’re ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted.

If enough people won’t do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don’t see the politics, it’s still happening

If your governance systems don’t work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres.

Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal.

So if you have a reasonable sized community, but don’t see your formal governance systems working — people debating things, votes, leadership making explicit decisions — that doesn’t mean everything is fine, and all the decisions are great, and there’s no politics happening.

It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won’t put up with that will leave.

The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision will even consider using such a process.

Conclusions

  • Respect and support the people who are trying to fix things with politics.

  • Be informed, and, where appropriate, involved.

  • If you are in a position of authority, be willing to exercise that authority. Do more than just mediating to try to get consensus.



comment count unavailable comments

Planet DebianJonathan McDowell: Local Voice Assistant Step 2: Speech to Text and back

Having setup an ATOM Echo Voice Satellite and hooked it up to Home Assistant we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in terms of meaning everything can exist as separate modules that then just communicate over network sockets, and there are a whole bunch of Python implementations of the pieces necessary.

The first bit I looked at was speech to text; how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self contained speech recognition tool called whisper.cpp, which is a low dependency implementation of inference using OpenAI’s Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space; the repo contains a forked copy of whisper.cpp with enough differences that I couldn’t trivially make it work with regular whisper.cpp. That means missing out on new development, and potential improvements (the fork appears to be at v1.5.4, upstream is up to v1.7.5 at the time of writing). However it was possible to get up and running easily enough.

[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously I wanted all of this to be as clean an install on Debian stable as possible. Given most of this isn’t packaged, that’s meant I’ve packaged things up as I go. I’m not at the stage anything is suitable for upload to Debian proper, but equally I’ve tried to make them a reasonable starting point. No pre-built binaries available, just Salsa git repos. https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you’re building for bookworm, but it doesn’t need to be rebuilt.

You need a Whisper model that’s been converted to ggml format; they can be found on Hugging Face. I’ve ended up using the base.en model. In random testing I found small.en gave more accurate results but took a little longer; it doesn’t seem to make much of a difference for voice control rather than plain transcribing.
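
For illustration, fetching a converted model can be as simple as the following (a sketch assuming the ggml conversions published in the ggerganov/whisper.cpp space on Hugging Face; where wyoming-whisper-cpp expects to find the file may differ):

# Download the ggml-converted base.en Whisper model (illustrative URL and target path)
curl -L -o ggml-base.en.bin \
    https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin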

[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:

[Unit]
Description=Wyoming whisper.cpp server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.

Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I’ll come back to that in a future post. For now I took the easy option and used the built in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:

media_source:

With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.

Overall I found this initially complicated to setup given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I’ve been pretty impressed with the work that’s gone into it all. Next step, running a voice satellite on a Debian box.

Cryptogram NCSC Guidance on “Advanced Cryptography”

The UK’s National Cyber Security Centre just released its white paper on “Advanced Cryptography,” which it defines as “cryptographic techniques for processing encrypted data, providing enhanced functionality over and above that provided by traditional cryptography.” It includes things like homomorphic encryption, attribute-based encryption, zero-knowledge proofs, and secure multiparty computation.

It’s full of good advice. I especially appreciate this warning:

When deciding whether to use Advanced Cryptography, start with a clear articulation of the problem, and use that to guide the development of an appropriate solution. That is, you should not start with an Advanced Cryptography technique, and then attempt to fit the functionality it provides to the problem.

And:

In almost all cases, it is bad practice for users to design and/or implement their own cryptography; this applies to Advanced Cryptography even more than traditional cryptography because of the complexity of the algorithms. It also applies to writing your own application based on a cryptographic library that implements the Advanced Cryptography primitive operations, because subtle flaws in how they are used can lead to serious security weaknesses.

The conclusion:

Advanced Cryptography covers a range of techniques for protecting sensitive data at rest, in transit and in use. These techniques enable novel applications with different trust relationships between the parties, as compared to traditional cryptographic methods for encryption and authentication.

However, there are a number of factors to consider before deploying a solution based on Advanced Cryptography, including the relative immaturity of the techniques and their implementations, significant computational burdens and slow response times, and the risk of opening up additional cyber attack vectors.

There are initiatives underway to standardise some forms of Advanced Cryptography, and the efficiency of implementations is continually improving. While many data processing problems can be solved with traditional cryptography (which will usually lead to a simpler, lower-cost and more mature solution) for those that cannot, Advanced Cryptography techniques could in the future enable innovative ways of deriving benefit from large shared datasets, without compromising individuals’ privacy.

NCSC blog entry.

Planet DebianGuido Günther: Free Software Activities April 2025

Another short status update of what happened on my side last month. Notable might be the Cell Broadcast support for Qualcomm SoCs, the rest is smaller fixes and QoL improvements.

phosh

  • Fix splash spinner icon regression with newer GTK >= 3.24.49 (MR)
  • Update adaptive app list (MR)
  • Fix missing icon when editing folders (MR)
  • Use StartupWMClass for better app-id matching (MR)
  • Fix failing CI tests, fix inverted logic, and add tests (MR)
  • Fix a sporadic test failure (MR)
  • Add support for "do not disturb" by adding a status page to feedback quick settings (MR)
  • monitor: Don't track make/model (MR)
  • Wi-Fi status page: Correctly show tick mark with multiple access points (MR)
  • Avoid broken icon in polkit prompts (MR)
  • Lockscreen auth cleanups (MR)
  • Sync mobile data toggle to sim lock too (MR)
  • Don't let the OSD display cover whole output with a transparent window (MR)

phoc

  • Allow to specify listening socket (MR)
  • Continue to catch up with wlroots git (MR)
  • Disconnect input-method signals on destroy (MR)
  • Disconnect gtk-shell and output signals on destroy (MR)
  • Don't init decorations too early (MR)
  • Allow to disable XWayland on the command line (MR)

phosh-mobile-settings

  • Allow to set overview wallpaper (MR)
  • Ask for confirmation before resetting favorites (MR)
  • Add separate volume controls for notifications, multimedia and alerts (MR)
  • Tweak warnings (MR)

pfs

  • Fix build on a single CPU (MR)

feedbackd

  • Move to fdo (MR)
  • Allow to set media-role (MR)
  • Doc updates (MR)
  • Sort LEDs by "usefulness" (MR)
  • Ensure multicolor LEDs have multiple components (MR)
  • Add example wireplumber config (MR)

feedbackd-device-themes

  • Release 0.8.2
  • Move to fdo (MR)
  • Override notification-missed-generic on fajita (MR)
  • Run ci-fairy here too (MR)
  • fajita: Add notification-missed-generic (MR)

gmobile

  • Build Vala support (vapi files) too (MR)
  • Add support for timers that can take the system out of suspend (MR)

Debian

git-buildpackage

  • Don't suppress dch errors (MR)
  • Release 0.9.38

wlroots

  • Get text-input-v3 a bit more in line with other protocols (MR)

ModemManager

  • Cell broadcast support for QMI modems (MR)

Libqmi

  • QMI channel setting (MR)
  • Switch to gi-docgen (MR)
  • loc: Fix since annotations (MR)

gnome-clocks

  • Add wakeup timer to take device out of suspend (MR)

gnome-calls

  • CallBox: Switch between text entry (for SIP) and dialpad (MR)

qmi-parse-kernel-dump

  • Allow to filter on message types and some other small improvements (MR)

xwayland-run

  • Support phoc (MR)

osmo-cbc

  • Small error handling improvements to osmo-cbc (MR)

phosh-nightly

  • Handle feedbackd fdo move (MR)

Blog posts

Bugs

  • Resuming of video streams fails with newer gstreamer (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

Worse Than FailureCodeSOD: Pulling at the Start of a Thread

For testing networking systems, load simulators are useful: send a bunch of realistic looking traffic and see what happens as you increase the amount of sent traffic. These sorts of simulators often rely on being heavily multithreaded, since one computer can, if pushed, generate a lot of network traffic.

Thus, when Jonas inherited a heavily multithreaded system for simulating load, that wasn't a surprise. The surprise was that the developer responsible for it didn't really understand threading in Java. Probably in other languages too, but in this case, Java was what they were using.

        public void startTraffic()
        {
            Configuration.instance.inititiateStatistics();
            Statistics.instance.addStatisticListener(gui);
           
            if (t != null)
            {
                if (t.isAlive())
                {
                    t.destroy();
                }
            }
           
            t = new Thread(this);
            t.start();
        }

Look, this is not a good way to manage threads in Java. I don't know if I'd call it a WTF, but it's very much a "baby's first threading" approach. There are better abstractions around threads that would avoid the need to manage thread instances directly. I certainly don't love situations where a Runnable also manages its own thread instance.
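
For contrast, here's a minimal sketch of the more conventional shape, letting an ExecutorService own the thread instead of juggling a Thread field by hand (TrafficRunner, startTraffic and trafficLoop are stand-ins, not the original simulator's code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class TrafficRunner {
    // The executor owns the worker thread; the Runnable no longer manages its own Thread.
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private Future<?> current;

    synchronized void startTraffic(Runnable trafficLoop) {
        if (current != null) {
            current.cancel(true); // interrupt any previous run instead of calling destroy()
        }
        current = executor.submit(trafficLoop);
    }

    synchronized void shutdown() {
        executor.shutdownNow();
    }
}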

This is almost certainly a race condition, but I don't know if this function is called from multiple threads (but I suspect it might be).

But what's more interesting is where this code gets called. You see, starting a thread could trigger an exception, so you need to handle that:

        public void run()
        {
            while (true)
            {
                try
                {
                    loaderMain.startTraffic();
                    break;
                }
                catch (Exception e)
                {
                    System.out.println("Exception in main loader thread!");
                    e.printStackTrace();
                }
            }
        }

Inside of an infinite loop, we try to start traffic. If we succeed, we break out of the loop. If we fail, well, we try and try again and again and again and again and again and again…

Jonas writes:

Since I'm the only one that dares to wade through the opaque mess of code that somehow, against all odds, manages to work most of the time, I get to fix it whenever it presents strange behavior.

I suspect it's going to present much more strange behavior in the future.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsThe Temporality of Pain

Author: Nicholas Johnson “But what if you didn’t have to experience that pain now? What if you already did?” The doctor leaned forward, placing his elbows on the shiny glass desk, smiling with predatory teeth. I tapped my knee and tried to avoid eye contact, angry at my therapist for suggesting this treatment. “All pain […]

The post The Temporality of Pain appeared first on 365tomorrows.

Planet DebianPaul Wise: FLOSS Activities April 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Patches: notmuch-mutt patchset

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

Planet DebianRuss Allbery: Review: Beyond Pain

Review: Beyond Pain, by Kit Rocha

Series: Beyond #3
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 328

Beyond Pain is a science fiction dystopian erotic romance novel and a direct sequel to Beyond Control. Following the romance series convention, each book features new protagonists who were supporting characters in the previous book. You could probably start here if you wanted, but there are significant spoilers here for earlier books in the series. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Six has had a brutally hard life. She was rescued from an awful situation in a previous book and is now lurking around the edges of the Sector Four gang, oddly fascinated (as are we all) with their constant sexuality and trying to decide if she wants to, and can, be part of their world. Bren is one of the few people she lets get close: a huge bruiser who likes cage fights and pain but treats Six with a protective, careful respect that she finds comforting. This book is the story of Six and Bren getting to the bottom of each other's psychological hangups while the O'Kanes start taking over Six's former sector.

Yes, as threatened, I read another entry in the dystopian erotica series because I keep wondering how these people will fuck their way into a revolution. This is not happening very quickly, but it seems obvious that is the direction the series is going.

It's been a while since I've reviewed one of these, so here's another variation of the massive disclaimer: I think erotica is harder to review than any other genre because what people like is so intensely personal and individual. This is not even an attempt at an erotica review. I'm both wholly unqualified and also less interested in that part of the book, which should lead you to question my reading choices since that's a good half of the book.

Rather, I'm reading these somewhat for the plot and mostly for the vibes. This is not the most competent collection of individuals, and to the extent that they are, it's mostly because the men (who are, as a rule, charismatic but rather dim) are willing to listen to the women. What they are good at is communication, or rather, they're good about banging their heads (and other parts) against communication barriers until they figure out a way around them. Part of this is an obsession with consent that goes quite a bit deeper than the normal simplistic treatment. When you spend this much time trying to understand what other people want, you have to spend a lot of time communicating about sex, and in these books that means spending a lot of time communicating about everything else as well.

They are also obsessively loyal and understand the merits of both collective action and in making space for people to do the things that they are the best at, while still insisting that people contribute when they can. On the surface, the O'Kanes are a dictatorship, but they're run more like a high-functioning collaboration. Dallas leads because Dallas is good at playing the role of leader (and listening to Lex), which is refreshingly contrary to how things work in the real world right now.

I want to be clear that not only is this erotica, this is not the sort of erotica where there's a stand-alone plot that is periodically interrupted by vaguely-motivated sex scenes that you can skim past. These people use sex to communicate, and therefore most of the important exchanges in the book are in the middle of a sex scene. This is going to make this novel, and this series, very much not to the taste of a lot of people, and I cannot be emphatic enough about that warning.

But, also, this is such a fascinating inversion. It's common in media for the surface plot of the story to be full of sexual tension, sometimes to the extent that the story is just a metaphor for the sex that the characters want to have. This is the exact opposite of that: The sex is a metaphor for everything else that's going on in the story. These people quite literally fuck their way out of their communication problems, and not in an obvious or cringy way. It's weirdly fascinating?

It's also possible that my reaction to this series is so unusual as to not be shared by a single other reader.

Anyway, the setup in this story is that Six has major trust issues and Bren is slowly and carefully trying to win her trust. It's a classic hurt/comfort setup, and if that had played out in the way that this story often does, Bren would have taken the role of the gentle hero and Six the role of the person he rescued. That is not at all where this story goes. Six doesn't need comfort; Six needs self-confidence and the ability to demand what she wants, and although the way Beyond Pain gets her there is a little ham-handed, it mostly worked for me. As with Beyond Shame, I felt like the moral of the story is that the O'Kane men are just bright enough to stop doing stupid things at the last possible moment. I think Beyond Pain worked a bit better than the previous book because Bren is not quite as dim as Dallas, so the reader doesn't have to suffer through quite as many stupid decisions.

The erotica continues to mostly (although not entirely) follow traditional gender roles, with dangerous men and women who like attention. Presumably most people are reading these books for the sex, which I am wholly unqualified to review. For whatever it's worth, the physical descriptions are too mechanical for me, too obsessed with the precise structural assemblage of parts in novel configurations. I am not recommending (or disrecommending) these books, for a whole host of reasons. But I think the authors deserve to be rewarded for understanding that sex can be communication and that good communication about difficult topics is inherently interesting in a way that (at least for me) transcends the erotica.

I bet I'm going to pick up another one of these about a year from now because I'm still thinking about these people and am still curious about how they are going to succeed.

Followed by Beyond Temptation, an interstitial novella. The next novel is Beyond Jealousy.

Rating: 6 out of 10

,

Krebs on SecurityAlleged ‘Scattered Spider’ Member Extradited to U.S.

A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.

Scattered Spider is a loosely affiliated criminal hacking group whose members have broken into and stolen data from some of the world’s largest technology companies. Buchanan was arrested in Spain last year on a warrant from the FBI, which wanted him in connection with a series of SMS-based phishing attacks in the summer of 2022 that led to intrusions at Twilio, LastPass, DoorDash, Mailchimp, and many other tech firms.

Tyler Buchanan, being escorted by Spanish police at the airport in Palma de Mallorca in June 2024.

As first reported by KrebsOnSecurity, Buchanan (a.k.a. “tylerb”) fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. Buchanan was arrested in June 2024 at the airport in Palma de Mallorca while trying to board a flight to Italy. His extradition to the United States was first reported last week by Bloomberg.

Members of Scattered Spider have been tied to the 2023 ransomware attacks against MGM and Caesars casinos in Las Vegas, but it remains unclear whether Buchanan was implicated in that incident. The Justice Department’s complaint against Buchanan makes no mention of the 2023 ransomware attack.

Rather, the investigation into Buchanan appears to center on the SMS phishing campaigns from 2022, and on SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In a SIM-swapping attack, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — including one-time passcodes for authentication and password reset links sent via SMS.

In August 2022, KrebsOnSecurity reviewed data harvested in a months-long cybercrime campaign by Scattered Spider involving countless SMS-based phishing attacks against employees at major corporations. The security firm Group-IB called them by a different name — 0ktapus, because the group typically spoofed the identity provider Okta in their phishing messages to employees at targeted firms.

A Scattered Spider/0Ktapus SMS phishing lure sent to Twilio employees in 2022.

The complaint against Buchanan (PDF) says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan from January 26, 2022 to November 7, 2022.

Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.

“The FBI’s investigation to date has gathered evidence showing that Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom,” the FBI complaint reads. “One of Buchanan’s devices contained a screenshot of Telegram messages between an account known to be used by Buchanan and other unidentified co-conspirators discussing dividing up the proceeds of SIM swapping.”

U.S. prosecutors allege that records obtained from Discord showed the same U.K. Internet address was used to operate a Discord account that specified a cryptocurrency wallet when asking another user to send funds. The complaint says the publicly available transaction history for that payment address shows approximately 391 bitcoin was transferred in and out of this address between October 2022 and February 2023; 391 bitcoin is presently worth more than $26 million.

In November 2024, federal prosecutors in Los Angeles unsealed criminal charges against Buchanan and four other alleged Scattered Spider members, including Ahmed Elbadawy, 23, of College Station, Texas; Joel Evans, 25, of Jacksonville, North Carolina; Evans Osiebo, 20, of Dallas; and Noah Urban, 20, of Palm Coast, Florida. KrebsOnSecurity reported last year that another suspected Scattered Spider member — a 17-year-old from the United Kingdom — was arrested as part of a joint investigation with the FBI into the MGM hack.

Mr. Buchanan’s court-appointed attorney did not respond to a request for comment. The accused faces charges of wire fraud conspiracy, conspiracy to obtain information by computer for private financial gain, and aggravated identity theft. Convictions on the latter charge carry a minimum sentence of two years in prison.

Documents from the U.S. District Court for the Central District of California indicate Buchanan is being held without bail pending trial. A preliminary hearing in the case is slated for May 6.

LongNowRick Prelinger

2 special screenings of a new LOST LANDSCAPES film by Rick Prelinger will be on Wednesday 12/3/25 and Thursday 12/4/25 at the Herbst Theater. Long Now Members can reserve a pair of tickets on either night!

Each year LOST LANDSCAPES casts an archival gaze on San Francisco and its surrounding areas. The film is drawn from newly scanned archival footage, including home movies, government-produced and industrial films, feature film outtakes and other surprises from the Prelinger Archives collection and elsewhere.

Planet DebianRussell Coker: Links April 2025

Asianometry has an interesting YouTube video about electrolytic capacitors degrading and how they affect computers [1]. Keep your computers cool, people!

Biella Coleman (famous for studying the Anthropology of Debian) and Eric Reinhart wrote an interesting article about MAHA (Make America Healthy Again) and how it ended up doing exactly the opposite of what was intended [2].

SciShow has an informative video about lung cancer cases among non-smokers, the risk factors are genetics, Radon, and cooking [3].

Ian Jackson wrote an insightful blog post about whether Rust is “woke” [4].

Bruce Schneier wrote an interesting blog post about research into making AIs Trusted Third Parties [5]. This has the potential to solve some cryptology problems.

CHERIoT is an interesting project for controlling all jump statements in RISC-V among other related security features [6]. We need this sort of thing for IoT devices that will run for years without change.

Brian Krebs wrote an informative post about how Trump is attacking the 1st Amendment of the US Constitution [7].

The Register has an interesting summary of the kernel “enclave” and “exclave” functionality in recent Apple OSs [8].

Dr Gabor Mate wrote an interesting psychological analysis of Hillary Clinton and Donald Trump [9].

ChoiceJacking is an interesting variant of the JuiceJacking attack on mobile phones by hostile chargers [10]. They should require input for security sensitive events to come from the local hardware not USB or Bluetooth.

Planet DebianSimon Josefsson: Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out to be a 42% bit-by-bit identical similarity. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and per architecture.

Differential reproducible rebuilds don’t give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower. Primarily the differences were caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap below of the idempotent rebuilds idea, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you can disassemble the bits and audit the assembler instructions for the CPU you will execute it on. Doing that at an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would not be sufficient for me: they don’t solve the problem of how to trust the toolchain. Let’s assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries. Which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the tool chain used. There is no escaping from auditing all the source code — that’s what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is about auditing the tool chain. With the Reproduce.debian.net approach, that is a recursive problem back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released. Or the improvements are no longer applicable because the projects have evolved since the earlier version.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I’m interested in. I want to be able to build the binaries that I use from source using a toolchain that I can also build from source. And preferably that all of this is using latest version of all packages, so that I can contribute and send patches for them, to improve matters.

The toolchain that Reproduce.Debian.Net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don’t see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, though, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that, when used as build inputs (potentially iteratively), lead to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom respecting distribution, similar to Trisquel, from this idempotently rebuildable Debian.

How to go about achieving this? Today’s Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the ‘Fork’ button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up until artifact release is produced and tested. At least in theory. Many projects are behind on this, but it seems like this is a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we’ve seen with many software supply-chain security incidents over the past years, where the “magic” is involved is a good place to introduce malicious code.

To allow me to continue with my experiment, I thought the simplest way forward was to setup a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one “build” job definition and one “deploy” job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.

The build job is simple. From within an official Debian container image, build packages using dpkg-buildpackage essentially by invoking the following commands.

sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
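# Valid-Until check disabled so archives served from snapshot.debian.org (with expired Release files) still work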
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../$PACKAGE_$VERSION.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/

The deploy job is also simple. It commits artifacts to a Git project using Git-LFS to handle large objects, essentially something like this:

if ! grep -q '^pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
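# Debian pool layout: lib* packages go under a four-character prefix (e.g. pool/main/libf/), everything else under the first letter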
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$DDB_PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That’s it! The actual implementation is a bit longer, but the major difference is for log and error handling.

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I’m using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/object/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I’m using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, alas the Debian snapshot team have concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven’t bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds are all on my own hosted runners. For amd64 and arm64 my own runners are only used for large packages where the GitLab.com shared runners run into the 3 hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as necessary to fill the archive with necessary packages. Currently I’m using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/ (see the sketch after this list).
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5000 builds), but on riscv64 there is some icache-related problem that affects the Java JVM and triggers build failures.
  • Provide GitLab pipeline based builds of the Debian docker container images, cloud-images, debian-live CD and debian-installer ISO’s.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh for use during bootstrapping when neither packaging tool is available.
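
A minimal sketch of the trigger loop mentioned above, using the standard GitLab pipeline trigger API (packages.txt, PROJECT_ID and TRIGGER_TOKEN are illustrative placeholders):

#!/bin/sh
# Trigger one Debdistbuild pipeline per package/version pair, roughly in popcon order.
while read -r PACKAGE VERSION; do
    curl --silent --fail \
        --form "token=$TRIGGER_TOKEN" \
        --form "ref=main" \
        --form "variables[PACKAGE]=$PACKAGE" \
        --form "variables[VERSION]=$VERSION" \
        "https://gitlab.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"
done < packages.txt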

What do you think?

Worse Than FailureCodeSOD: Find the First Function to Cut

Sebastian is now maintaining a huge framework which, in his words, "could easily be reduced in size by 50%", especially because many of the methods in it are reinvented wheels that are already provided by .NET and specifically LINQ.

For example, if you want the first item in a collection, LINQ lets you call First() or FirstOrDefault() on any collection. The latter option makes handling empty collections easier. But someone decided to reinvent that wheel, and like so many reinvented wheels, it's worse.

public static LoggingRule FindFirst (this IEnumerable<LoggingRule> rules, Func<LoggingRule, bool> predicate)
{
        foreach (LoggingRule rule in rules) {
                return rule;
        }
        return null;
}

This function takes a list of logging rules and a function to filter the logging rules, starts a for loop to iterate over the list, and then simply returns the first element in the list, thus exiting the for loop. If the loop doesn't contain any elements, we return null.

From the signature, I'd expect this function to do filtering, but it clearly doesn't. It just returns the first element, period. And again, there's already a built-in function for that. I don't know why this exists, but I especially dislike that it's so misleading.
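
For reference, if the extension method really had to exist, the entire body could simply delegate to LINQ, which also does the filtering the signature promises (a sketch re-using the original signature):

using System;
using System.Collections.Generic;
using System.Linq;

public static LoggingRule FindFirst (this IEnumerable<LoggingRule> rules, Func<LoggingRule, bool> predicate)
{
        // First rule matching the predicate, or null when nothing matches or the collection is empty.
        return rules.FirstOrDefault(predicate);
}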

There's only one positive to say about this: if you did want to reduce the size of the framework by 50%, it's easy to see where I'd start.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianUtkarsh Gupta: FOSS Activites in April 2025

Here’s my 67th monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 76th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here’s what I did:

  • Updating Matomo to v5.3.1.
  • Lots of bursary stuff for DC25. We rolled out the results for the first batch.
  • Helping Andreas Tille with and around FTP team bits.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 51st month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did (there’s so much and some of it might not be public…yet!), here’s a quick TL;DR of what I did:

  • Released 25.04 Plucky Puffin! \o/
  • Helped open the 25.10 Questing Quokka archive. Let the development begin!
  • Jon, VP of Engineering, asked me to lead the Canonical Release team - that was definitely not something I saw coming. :)
  • We’re now doing Ubuntu monthly releases for the devel releases - I’ll be the tech lead for the project.
  • Preparing for the May sprints - too many new things and new responsibilities. :)

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie release (+2 years after LTS support).

This was my 67th month as a Debian LTS and 54th month as a Debian ELTS paid contributor.
Due to DC25 bursary work, Ubuntu 25.04 release, and other travel bits, I only worked for 2.00 hours for LTS and 4.50 hours for ELTS.

I did the following things:

  • [ELTS] Had already backported patches for adminer for the following CVEs:
    • CVE-2023-45195: a SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/adminer.
    • As the same CVEs affect LTS, we decided to release for LTS first and then for ELTS, but since I had no hours for LTS, I decided to do a bit more testing for ELTS to make sure things don’t regress in buster.
    • Will prepare LTS (and also s-p-u, sigh) updates this month and get back to ELTS thereafter.
  • [LTS] Started to prepare the LTS update for adminer for the same CVEs as for ELTS:
    • CVE-2023-45195: a SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Haven’t fully backported the patch yet but this is what I intend to do for this month (now that I have hours :D).
  • [LTS] Partially attended the LTS meeting on Jitsi. Summary here.
    • “Partially” because I was fighting SSO auth issues with Jitsi. Looks like there were some upstream issues/activity and it was resulting in gateway crashes but all good now.
    • I was following the running notes and keeping up with things as much as I could. :)

Until next time.
:wq for today.

365 TomorrowsJust a Little off the Sides, Please

Author: David Margolin Maggie and Trent were a self-sufficient young couple, both remarkably dexterous and tech savvy.  They did all their home repairs, serviced their own cars, and every few weeks Maggie cut Trent’s hair. “Home haircuts are great– think about how much money we’ve saved,” Maggie said proudly. “All we need to do is […]

The post Just a Little off the Sides, Please appeared first on 365tomorrows.

,

Cryptogram WhatsApp Case Against NSO Group Progressing

Meta is suing NSO Group, basically claiming that the latter hacks WhatsApp and not just WhatsApp users. We have a procedural ruling:

Under the order, NSO Group is prohibited from presenting evidence about its customers’ identities, implying the targeted WhatsApp users are suspected or actual criminals, or alleging that WhatsApp had insufficient security protections.

[…]

In making her ruling, Northern District of California Judge Phyllis Hamilton said NSO Group undercut its arguments to use evidence about its customers with contradictory statements.

“Defendants cannot claim, on the one hand, that its intent is to help its clients fight terrorism and child exploitation, and on the other hand say that it has nothing to do with what its client does with the technology, other than advice and support,” she wrote. “Additionally, there is no evidence as to the specific kinds of crimes or security threats that its clients actually investigate and none with respect to the attacks at issue.”

I have written about the issues at play in this case.

Planet DebianPetter Reinholdtsen: OpenSnitch 1.6.8 is now in Trixie

After some days of effort, I am happy to report that the great interactive application firewall OpenSnitch got a new version in Trixie, now with the Linux kernel based eBPF sniffer included for better accuracy. This new version made it possible for me to finally track down the rule required to avoid a deadlock when using it on a machine with the user home directory on NFS. The problematic connection originated from the Linux kernel itself, causing the /proc based version in Debian 12 to fail to properly attribute the connection; this made the OpenSnitch daemon block while waiting for the Python GUI, which in turn was unable to continue because the home directory was blocked waiting for the OpenSnitch daemon. A classic deadlock, reported upstream for a more permanent solution.

I really love the control over all the programs and web pages calling home that OpenSnitch gives me. Just today I discovered a strange connection to sb-ssl.google.com when I pulled up a PDF passed on to me via a Mattermost installation. It is sometimes hard to know which connections to block and which to let through, but after running it for a few months, the default rule set starts to handle most regular network traffic and I only have to have a look at the more unusual connections.

If you would like to know more about what your machine's programs are doing, install OpenSnitch today. It is only an apt install opensnitch away. :)

I hope to get the 1.6.9 version in experimental into Trixie before the archive enters hard freeze. This new version should have no relevant changes not already in the 1.6.8-11 edition, as it mostly contains Debian patches, but I will give it a few days of testing to see if there are any surprises. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianDaniel Lange: Weird times ... or how the New York DEC decided the US presidential elections

November 2024 will be known as the time when the killing of peanut, a pet squirrel, by the New York State DEC swung the US presidential elections and shaped history forever.

The hundreds of millions of dollars spent on each side, the tireless campaigning by the candidates, the celebrity endorsements ... all made for an open race for months. Investments evened each other out.

But an OnlyFans producer showing people an overreaching, bureaucracy driven State raiding his home to confiscate a pet squirrel and kill it ... swung enough voters to decide the elections.

That is what we need to understand in times of instant worldwide publication and a mostly attention driven economy: Human fates, elections, economic cycles and wars can be decided by people killing squirrels.

RIP, peanut.

P.S.: Trump Media & Technology Group Corp. (DJT) stock is up 30% pre-market.

*[DEC]: Department of Environmental Conservation

Worse Than FailureCodeSOD: The Wrong Kind of Character

Today's code, at first, just looks like using literals instead of constants. Austin sends us this C#, from an older Windows Forms application:

if (e.KeyChar == (char)4) {   // is it a ^D?
        e.Handled = true;
        DoStuff();
}
else if (e.KeyChar == (char)7) {   // is it a ^g?
        e.Handled = true;
        DoOtherStuff();
}
else if (e.KeyChar == (char)Keys.Home) {
        e.Handled = true;
        SpecialGoToStart();
}
else if (e.KeyChar == (char)Keys.End) {
        e.Handled = true;
        SpecialGoToEnd();
} 

Austin discovered this code when looking for a bug where some keyboard shortcuts didn't work. He made some incorrect assumptions about the code- first, that they were checking for a KeyDown or KeyUp event, a pretty normal way to check for keyboard shortcuts. Under that assumption, a developer would compare the KeyEventArgs.KeyCode property against an enum- something like e.KeyCode == Keys.D && Keys.Control, for a CTRL+D. That's clearly not what's happening here.

No, here, they used the KeyPress event, which is meant to represent the act of typing. That gives you a KeyPressEventArgs with a KeyChar property- because again, it's meant to represent typing text, not keyboard shortcuts. They used the wrong event type, as it won't tell them about modifier keys in use, or gracefully handle the home or end keys. KeyChar is the ASCII character code of the key press: in this case, CTRL+D is the "end of transmit" character in ASCII (4), and CTRL+G is the goddamn bell character (7). So those two branches work.

But home and end don't have ASCII code points. They're not characters that show up in text. They get key codes, which represent the physical key pressed, not the character of text. So (char)Keys.Home isn't really a meaningful operation. But the enum is still a numeric value, so you can still turn it into a character- it just turns into a character that emphatically isn't the home key. It's the "$". And Keys.End turns into a "#".

It wasn't very much work for Austin to move the event handler to the correct event type, and switch to using KeyCodes, which were both more correct and more readable.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsLess Traveled

Author: Majoki One cannot speak of the Universe. One can only speak of rocking chairs, carnations and a pen. This is the path to understanding. Take it on good authority. Travel writers speak of ordeals as the ideal. I would not say that losing my tablature in Genra was an ordeal in and of itself, […]

The post Less Traveled appeared first on 365tomorrows.

Cryptogram Applying Security Engineering to Prompt Injection Security

This seems like an important advance in LLM security against prompt injection:

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

[…]

To understand CaMeL, you need to understand that prompt injections happen when AI systems can’t distinguish between legitimate user commands and malicious instructions hidden in content they’re processing.

[…]

While CaMeL does use multiple AI models (a privileged LLM and a quarantined LLM), what makes it innovative isn’t reducing the number of models but fundamentally changing the security architecture. Rather than expecting AI to detect attacks, CaMeL implements established security engineering principles like capability-based access control and data flow tracking to create boundaries that remain effective even if an AI component is compromised.

Research paper. Good analysis by Simon Willison.

I wrote about the problem of LLMs intermingling the data and control paths here.

Planet DebianFreexian Collaborators: Freexian partners with Invisible Things Lab to extend security support for Xen hypervisor

Freexian is pleased to announce a partnership with Invisible Things Lab to extend the security support of the Xen type-1 hypervisor version 4.17. Three years after its initial release, Xen 4.17, the version available in Debian 12 “bookworm”, will reach end-of-security-support status upstream in December 2025. The aim of our partnership with Invisible Things is to extend the security support until, at least, July 2027. We may also explore a possibility of extending the support until June 2028, to coincide with the end of the Debian 12 LTS support period.

From Debian 8 “jessie” through Debian 11 “bullseye”, the security support of Xen in Debian ended before the end of the life cycle of the release. We therefore aim to significantly improve the situation of Xen in Debian 12. As with similar efforts, we would like to mention that this is an experiment and that we will do our best to make it a success. We also aim to extend the security support for Xen versions included in future Debian releases, including Debian 13 “trixie”.

In the long term, we hope that this effort will ultimately allow the Xen Project to increase the official security support period for Xen releases from the current three years to at least five years, with the extra work being funded by the community of companies benefiting from the longer support period.

If your company relies on Xen and wants to help sustain LTS versions of Xen, please reach out to us. For companies using Debian, the simplest way is to subscribe to Freexian’s Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Xen LTS when you send in your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to help you contribute.

In the mean time, this initiative has been made possible thanks to the current LTS sponsors and ELTS customers. We hope the entire community of Debian and Xen users will benefit from this initiative.

For any queries you might have, please don’t hesitate to contact us at sales@freexian.com.

About Invisible Things Lab

Invisible Things Lab (ITL) offers low-level security consulting and auditing services for x86 virtualization technologies; C, C++, and assembly codebases; Intel SGX; binary exploitation and mitigations; and more. ITL also specializes in Qubes OS and Gramine consulting, including deployment, debugging, and feature development.

,

Cryptogram Windscribe Acquitted on Charges of Not Collecting Users’ Data

The company doesn’t keep logs, so couldn’t turn over data:

Windscribe, a globally used privacy-first VPN service, announced today that its founder, Yegor Sak, has been fully acquitted by a court in Athens, Greece, following a two-year legal battle in which Sak was personally charged in connection with an alleged internet offence by an unknown user of the service.

The case centred around a Windscribe-owned server in Finland that was allegedly used to breach a system in Greece. Greek authorities, in cooperation with INTERPOL, traced the IP address to Windscribe’s infrastructure and, unlike standard international procedures, proceeded to initiate criminal proceedings against Sak himself, rather than pursuing information through standard corporate channels.

Planet DebianScarlett Gately Moore: KDE Snaps and life. Spirits are up, but I need a little help please

I was just released from the hospital after a 3 day stay for my ( hopefully ) last surgery. There was concern with massive blood loss and low heart rate. I have stabilized and have come home. Unfortunately, they had to prescribe many medications this round and they are extremely expensive and used up all my funds. I need gas money to get to my post-op doctors appointments, and food would be cool. I would appreciate any help, even just a dollar!

I am already back to work, and have continued working on the crashy KDE snaps in a non-KDE env. (This also affects anyone using kde-neon extensions such as FreeCAD.) I hope to have a fix in the next day or so.

Fixed kate bug https://bugs.kde.org/show_bug.cgi?id=503285

Thanks for stopping by.

Planet DebianSergio Talens-Oliag: ArgoCD Autopilot

For a long time I’ve been wanting to try GitOps tools, but I haven’t had the chance to try them for real on the projects I was working on.

As I now have some spare time I’ve decided to play a little with Argo CD, Flux and Kluctl, to test them and be able to use one of them in a real project in the future if that looks appropriate.

In this post I will use Argo CD Autopilot to install argocd on a local k3d cluster created using OpenTofu, to test the autopilot approach of managing argocd and to evaluate the tool (as it manages argocd using a git repository, it can be used to test argocd as well).

Installing tools locally with arkade

Recently I’ve been using the arkade tool to install kubernetes related applications on Linux servers and containers; I usually fetch the applications with it and install them into the /usr/local/bin folder.

For this post I’ve created a simple script that checks if the tools I’ll be using are available and installs them into the $HOME/.arkade/bin folder if they are missing (I’m assuming that docker is already available, as it is not installable with arkade):

#!/bin/sh

# TOOLS LIST
ARKADE_APPS="argocd argocd-autopilot k3d kubectl sops tofu"

# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Install or update arkade
if command -v arkade >/dev/null; then
  echo "Trying to update the arkade application"
  sudo arkade update
else
  echo "Installing the arkade application"
  curl -sLS https://get.arkade.dev | sudo sh
fi

echo ""
echo "Installing tools with arkade"
echo ""
for app in $ARKADE_APPS; do
  app_path="$(command -v $app)" || true
  if [ "$app_path" ]; then
    echo "The application '$app' already available on '$app_path'"
  else
    arkade get "$app"
  fi
done

cat <<EOF

Add the ~/.arkade/bin directory to your PATH if tools have been installed there

EOF

The rest of scripts will add the binary directory to the PATH if missing to make sure things work if something was installed there.

Creating a k3d cluster with opentofu

Although using k3d directly would be a good choice for creating the cluster, I’m using tofu to do it because that would probably be the tool used if we were working with cloud platforms like AWS or Google.

The main.tf file is as follows:

terraform {
  required_providers {
    k3d = {
      source  = "moio/k3d"
      version = "0.0.12"
    }
    sops = {
      source = "carlpett/sops"
      version = "1.2.0"
    }
  }
}

data "sops_file" "secrets" {
    source_file = "secrets.yaml"
}

resource "k3d_cluster" "argocd_cluster" {
  name    = "argocd"
  servers = 1
  agents  = 2

  image   = "rancher/k3s:v1.31.5-k3s1"
  network = "argocd"
  token   = data.sops_file.secrets.data["token"]

  port {
    host_port      = 8443
    container_port = 443
    node_filters = [
      "loadbalancer",
    ]
  }

  k3d {
    disable_load_balancer     = false
    disable_image_volume      = false
  }

  kubeconfig {
    update_default_kubeconfig = true
    switch_current_context    = true
  }

  runtime {
    gpu_request = "all"
  }
}

The k3d configuration is quite simple: as I plan to use the default traefik ingress controller with TLS, I publish container port 443 on the host's port 8443. I’ll explain how I add a valid certificate in the next step.

I’ve prepared the following script to initialize and apply the changes:

#!/bin/sh

set -e

# VARIABLES
# Default token for the argocd cluster
K3D_CLUSTER_TOKEN="argocdToken"
# Relative PATH to install the k3d cluster using terraform
K3D_TF_RELPATH="k3d-tf"
# Secrets yaml file
SECRETS_YAML="secrets.yaml"
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the k3d-tf dir
cd "$WORK_DIR/$K3D_TF_RELPATH" || exit 1

# Create secrets.yaml file and encode it with sops if missing
if [ ! -f "$SECRETS_YAML" ]; then
  echo "token: $K3D_CLUSTER_TOKEN" >"$SECRETS_YAML"
  sops encrypt -i "$SECRETS_YAML"
fi

# Initialize terraform
tofu init

# Apply the configuration
tofu apply

Adding a wildcard certificate to the k3d ingress

As an optional step, after creating the k3d cluster I’m going to add a default wildcard certificate for the traefik ingress server to be able to use everything with HTTPS without certificate issues.

As I manage my own DNS domain I’ve created the lo.mixinet.net and *.lo.mixinet.net DNS entries on my public and private DNS servers (both return 127.0.0.1 and ::1) and I’ve created a TLS certificate for both entries using Let’s Encrypt with Certbot.

The certificate is updated automatically on one of my servers, and when I need it I copy the contents of the fullchain.pem and privkey.pem files from the /etc/letsencrypt/live/lo.mixinet.net directory on the server to the local files lo.mixinet.net.crt and lo.mixinet.net.key.
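
That copy step is nothing special; a minimal sketch of it would be something like the following (the server name here is a placeholder, not from the original setup):

❯ scp root@server:/etc/letsencrypt/live/lo.mixinet.net/fullchain.pem lo.mixinet.net.crt
❯ scp root@server:/etc/letsencrypt/live/lo.mixinet.net/privkey.pem lo.mixinet.net.key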

After copying the files I run the following script to install or update the certificate and configure it as the default for traefik:

#!/bin/sh
# Script to install or update the default traefik TLS certificate
secret="lo-mixinet-net-ingress-cert"
cert="${1:-lo.mixinet.net.crt}"
key="${2:-lo.mixinet.net.key}"
if [ -f "$cert" ] && [ -f "$key" ]; then
  kubectl -n kube-system create secret tls $secret \
    --key=$key \
    --cert=$cert \
    --dry-run=client --save-config -o yaml  | kubectl apply -f -
  kubectl apply -f - << EOF
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system

spec:
  defaultCertificate:
    secretName: $secret
EOF
else
  cat <<EOF
To add or update the traefik TLS certificate the following files are needed:

- cert: '$cert'
- key: '$key'

Note: you can pass the paths as arguments to this script.
EOF
fi

Once it is installed, if I connect to https://foo.lo.mixinet.net:8443/ I get a 404, but the certificate is valid.
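
This can also be checked quickly from the command line; a small sketch (not part of the original session, and assuming the DNS entries above resolve to the local machine):

❯ curl -sv https://foo.lo.mixinet.net:8443/ -o /dev/null 2>&1 | grep -E 'subject:|issuer:|HTTP/'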

Installing argocd with argocd-autopilot

Creating a repository and a token for autopilot

I’ll be using a project on my forgejo instance to manage argocd; the repository I’ve created is at the URL https://forgejo.mixinet.net/blogops/argocd and I’ve created a private user named argocd that only has write access to that repository.

Logging in as the argocd user on forgejo, I’ve created a token with permission to read and write repositories, which I’ve saved in my pass password store under the mixinet.net/argocd@forgejo/repository-write entry.
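
For reference, saving the token that way is just a matter of running the obvious pass command and pasting the value when prompted (a sketch, not from the original post):

❯ pass insert mixinet.net/argocd@forgejo/repository-write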

Bootstrapping the installation

To bootstrap the installation I’ve used the following script (it uses the previous GIT_REPO and GIT_TOKEN values):

#!/bin/sh

set -e

# VARIABLES
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the working directory
cd "$WORK_DIR" || exit 1

# Set GIT variables
if [ -z "$GIT_REPO" ]; then
  export GIT_REPO="https://forgejo.mixinet.net/blogops/argocd.git"
fi
if [ -z "$GIT_TOKEN" ]; then
  GIT_TOKEN="$(pass mixinet.net/argocd@forgejo/repository-write)"
  export GIT_TOKEN
fi

argocd-autopilot repo bootstrap --provider gitea

The output of the execution is as follows:

❯ bin/argocd-bootstrap.sh
INFO cloning repo: https://forgejo.mixinet.net/blogops/argocd.git
INFO empty repository, initializing a new one with specified remote
INFO using revision: "", installation path: ""
INFO using context: "k3d-argocd", namespace: "argocd"
INFO applying bootstrap manifests to cluster...
namespace/argocd created
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
secret/autopilot-secret created

INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
application.argoproj.io/autopilot-bootstrap created
INFO running argocd login to initialize argocd config
Context 'autopilot' updated

INFO argocd initialized. password: XXXXXXX-XXXXXXXX
INFO run:

    kubectl port-forward -n argocd svc/argocd-server 8080:80

Now we have argocd installed and running; it can be checked using the port-forward and connecting to https://localhost:8080/ (the certificate will be wrong, we are going to fix that in the next step).

Updating the argocd installation in git

Now that we have the application deployed we can clone the argocd repository and edit the deployment to disable TLS for the argocd server (we are going to use TLS termination with traefik, and that needs the server running as insecure; see the Argo CD documentation).

❯ git clone ssh://git@forgejo.mixinet.net/blogops/argocd.git
❯ cd argocd
❯ edit bootstrap/argo-cd/kustomization.yaml
❯ git commit -a -m 'Disable TLS for the argocd-server'
❯ git push

The changes made to the kustomization.yaml file are the following:

--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -11,6 +11,11 @@ configMapGenerator:
         key: git_username
         name: autopilot-secret
   name: argocd-cm
+  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
+- behavior: merge
+  literals:
+  - "server.insecure=true"
+  name: argocd-cmd-params-cm
 kind: Kustomization
 namespace: argocd
 resources:

Once the changes are pushed we sync the argo-cd application manually to make sure they are applied:

argocd app sync argo-cd

As a test we can download the argocd-cmd-params-cm ConfigMap to make sure everything is OK:

apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"server.insecure":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"argo-cd","app.kubernetes.io/name":"argocd-cmd-params-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cmd-params-cm","namespace":"argocd"}}
  creationTimestamp: "2025-04-27T17:31:54Z"
  labels:
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
  resourceVersion: "16731"
  uid: a460638f-1d82-47f6-982c-3017699d5f14

As this simply changes the ConfigMap, we have to restart the argocd-server for it to be read again; to do that we delete the server pods so they are re-created using the updated resource:

❯ kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-server

After doing this the port-forward command is killed automatically; if we run it again, the connection to the argocd-server has to be made using HTTP instead of HTTPS.
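
That is, something along these lines (a sketch, not actually run in this post):

❯ kubectl port-forward -n argocd svc/argocd-server 8080:80
# and then browse to http://localhost:8080/ instead of https://localhost:8080/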

Instead of testing that, we are going to add an ingress definition to be able to connect to the server using HTTPS and GRPC against the address argocd.lo.mixinet.net, using the wildcard TLS certificate we installed earlier.

To do it we edit the bootstrap/argo-cd/kustomization.yaml file to add the ingress_route.yaml file to the deployment:

--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -20,3 +20,4 @@ kind: Kustomization
 namespace: argocd
 resources:
 - github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
+- ingress_route.yaml

The ingress_route.yaml file contents are the following:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default

After pushing the changes and waiting a little bit the change is applied and we can access the server using HTTPS and GRPC; the first can be tested from a browser and the GRPC access using the command line interface:

❯ argocd --grpc-web login argocd.lo.mixinet.net:8443
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.lo.mixinet.net:8443' updated
❯ argocd app list -o name
argocd/argo-cd
argocd/autopilot-bootstrap
argocd/cluster-resources-in-cluster
argocd/root

So things are working fine… and that is all for this post, folks!

Worse Than FailureCodeSOD: Objectifying Yourself

"Boy, stringly typed data is hard to work with. I wish there were some easier way to work with it!"

This, presumably, is what Gary's predecessor said. Followed by, "Wait, I have an idea!"

public static Object createValue(String string) {
	Object value = parseBoolean(string);
	if (value != null) {
		return value;
	}

	value = parseInteger(string);
	if (value != null) {
		return value;
	}

	value = parseDouble(string);
	if (value != null) {
		return value;
	}

	return string;
}

This takes a string, and then tries to parse it, first into a boolean, failing that into an integer, and failing that into a double. Otherwise, it returns the original string.

And it returns an object, which means you still get to guess what's in there even after this. You just get to guess what it returned, and hope you cast it to the correct type. Which means this almost certainly is called like this:

boolean myBoolField = (Boolean)createValue(someStringContainingABool);

Which makes the whole thing useless, which is fun.

Gary found this code in a "long since abandoned" project, and I can't imagine why it ended up getting abandoned.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsMimicry

Author: Julian Miles, Staff Writer Linda looks about as she blows into cupped hands. It’s been a brutal November, and the forecast is that it’ll be a white Christmas from everything freezing over instead of snow. She glances at Will. “So what’s a polinismum again?” He gives her a withering stare. “‘Polynex Quismirum’. A living […]

The post Mimicry appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, March 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In March, 20 contributors were paid to work on Debian LTS; their reports are available:

  • Adrian Bunk did 51.5h (out of 0.0h assigned and 51.5h from previous period).
  • Andreas Henriksson did 20.0h (out of 20.0h assigned).
  • Andrej Shadura did 6.0h (out of 10.0h assigned), thus carrying over 4.0h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 12.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 12.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 26.0h (out of 23.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 37.0h (out of 36.5h assigned and 0.75h from previous period), thus carrying over 0.25h to the next month.
  • Guilhem Moulin did 8.25h (out of 11.0h assigned and 9.0h from previous period), thus carrying over 11.75h to the next month.
  • Jochen Sprickerhof did 18.0h (out of 24.25h assigned and 3.0h from previous period), thus carrying over 9.25h to the next month.
  • Lee Garrett did 10.25h (out of 0.0h assigned and 42.0h from previous period), thus carrying over 31.75h to the next month.
  • Lucas Kanashiro did 4.0h (out of 0.0h assigned and 56.0h from previous period), thus carrying over 52.0h to the next month.
  • Markus Koschany did 27.25h (out of 27.25h assigned).
  • Roberto C. Sánchez did 8.25h (out of 7.0h assigned and 17.0h from previous period), thus carrying over 15.75h to the next month.
  • Santiago Ruano Rincón did 17.5h (out of 19.75h assigned and 5.25h from previous period), thus carrying over 7.5h to the next month.
  • Sean Whitton did 7.0h (out of 7.0h assigned).
  • Sylvain Beucler did 32.0h (out of 31.0h assigned and 1.25h from previous period), thus carrying over 0.25h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 7.75h (out of 12.0h assigned), thus carrying over 4.25h to the next month.
  • Utkarsh Gupta did 15.0h (out of 15.0h assigned).

Evolution of the situation

In March, we released 31 DLAs.

  • Notable security updates:
    • linux-6.1 (1, 2) and linux, prepared by Ben Hutchings, fixed an extensive list of vulnerabilities
    • firefox-esr, prepared by Emilio Pozuelo Monfort, fixed a variety of vulnerabilities
    • intel-microcode, prepared by Tobias Frost, fixed several local privilege escalation, denial of service, and information disclosure vulnerabilities
    • vim, prepared by Sean Whitton, fixed a multitude of vulnerabilities, including many application crashes, buffer overflows, and out-of-bounds reads

The recent trend of contributions from contributors external to the formal LTS team has continued. LTS contributor Sylvain Beucler reviewed and facilitated an update to openvpn proposed by Aquila Macedo, resulting in the publication of DLA 4079-1. Thanks a lot to Aquila for preparing the update.

The LTS Team continues to make contributions to the current stable Debian release, Debian 12 (codename “bookworm”). LTS contributor Bastien Roucariès prepared a stable upload of krb5 to ensure that fixes made in the LTS release, Debian 11 (codename “bullseye”) were also made available to stable users. Additional stable updates, for tomcat10 and jetty9, were prepared by LTS contributor Markus Koschany. And, finally, LTS contributor Utkarsh Gupta prepared stable updates for rails and ruby-rack.

LTS contributor Emilio Pozuelo Monfort has continued his ongoing improvements to the Debian security tracker and its associated tooling, making the data contained in the tracker more reliable and easing interaction with it.

The ckeditor3 package, which has been EOL by upstream for some time, is still depended upon by the PHP Horde packages in Debian. Sylvain, along with Bastien, did monumental work in coordinating with maintainers, security team fellows, and other Debian teams, to formally declare the EOL of the ckeditor3 package in Debian 11 and in Debian 12. Additionally, as a result of this work Sylvain has worked towards the removal of ckeditor3 as a dependency by other packages in order to facilitate the complete removal of ckeditor3 from all future Debian releases.

Thanks to our sponsors

Sponsors that joined recently are in bold.

Planet DebianValhalla's Things: POLARVIDE modular jacket

Posted on April 28, 2025
Tags: madeof:atoms, craft:sewing

A woman with early morning hair wearing a knee-length grey polar fleece jacket; it is closed at the waist with a twine belt that makes the bound front edge go in a smooth curve between the one side of the neck to the other side of the waist and then flare back outwards on the hips. The sleeves are long enough to go over the hands, and both them and the hem are cut

Years ago I made myself a quick dressing gown from a white fleece IKEA throw and often wore it in the morning between waking up and changing into day clothes.

One day I want to make myself a fancy victorian wrapper, to use in its place, but that’s still in the early planning stage, and will require quite some work.

a free cat sitting half asleep on an old couch, with a formerly white piece of fabric draped between the armrest and the seat. A piece of cardboard between two seat pillows provides additional protection from the wind.

Then last autumn I discovered that the taxes I owed to the local lord (who provides protection from mice and other small animals) included not just a certain amount of kibbles, but also some warm textiles, and the dressing gown (which at this time was definitely no longer pristine) had to go.

For a while I had to do without a dressing gown, but then in the second half of this winter I had some time for a quick machine sewing project. I could not tackle the big victorian thing, but I still had a second POLARVIDE throw from IKEA (this time in a more sensible dark grey) I had bought with sewing intents.

The fabric in a throw isn’t that much, so I needed something pretty efficient, and rather than winging it as I had done the first time I decided I wanted to try the Modular Jacket from A Year of Zero Waste Sewing (which I had bought in the zine instalments: the jacket is in the March issue).

After some measuring and decision taking, I found that I could fit most of the pieces and get a decent length, but I had no room for the collar, and probably not for the belt nor the pockets, but I cut all of the main pieces. I had a possible idea for a contrasting collar, but I decided to start sewing the main pieces and decide later, before committing to cutting the other fabric.

As I was assembling the jacket I decided that as a dressing gown I could do without the collar, and noticed that with the fraying-free plastic fleece I didn’t really need the front facings, so I cut those in half lengthwise, pieced them together, and used them as binding to finish the front end.

the back of the worn jacket, other than being clinched in by the belt it is pretty straight.

Since I didn’t have enough fabric for the belt I also skipped the belt loops, but I have been wearing this with random belts and I don’t feel the need for them anyway. I’ve also been thinking about adding a button just above the bust and use that to keep it closed, but I’m still not 100% sure about it.

Another thing I still need to do is to go through the few scraps of fleece that are left and see if I can piece together a serviceable pocket or two.

folding the sleeves back by a good 10 cm to show the hands.

Because of the size of the fabric, I ended up having quite long sleeves: I’m happy with them because they mean that I can cover my hands when it’s cold, or fold them back to make a nice cuff.

If I make a real jacket with this pattern I’ll have to take this into consideration, and either make the sleeves shorter or finish the seam in a way that looks nice when folded back.

Will I make a real jacket? I’m not sure, it’s not really my style of outer garment, but as a dressing gown it has already been used quite a bit (as in, almost every morning since I’ve made it :) ) and will continue to be used until too worn to be useful, and that’s a good thing.

,

David BrinThe AI dilemma continues - part 2

In my previous AI-related posting, I linked to several news items, along with sagacious (and some not) essays about the imminent arrival of new cybernetic beings, in a special issue of Noēma Magazine.

 


== AI as a ‘feral child’ ==


Another thought-provoking Noēma article about AI begins by citing rare examples of ‘feral children’ who appear never to have learned even basic language while scratching for existence in some wilderness. 


One famous case astounded Europe in 1799, lending heat to many aspects of the Nature vs. Nurture debate. Minds without language – it turns out – have some problems.


Only, in a segue to the present day, Noēma author John Last asserts that we are…


“…confronting something that threatens to upend what little agreement we have about the exceptionality of the human mind. Only this time, it’s not a mind without language, but the opposite: language, without a mind.”


This one is the best of the Noēma series on AI, offering up the distilled question of whether language ability – including the ‘feigning’ of self-consciousness – is good enough to conclude there is a conscious being behind the passing of a mere Turing Test…


… and further, whether that conclusion – firm or tentative – is enough to demand our empathy, sympathy… and rights.

“Could an AI’s understanding of grammar, and their comprehension of concepts through it, really be enough to create a kind of thinking self? 


Here we are caught between two vague guiding principles from two competing schools of thought. In Macphail’s view, “Where there is doubt, the only conceivable path is to act as though an organism is conscious, and does feel.” 


On the other side, there is “Morgan’s canon”: Don’t assume consciousness when a lower-level capacity would suffice.”


Further along though, John Last cites a Ted Chiang scifi story “The Lifecycle of Software Objects,” and the Spike Jonze movie “Her” to illustrate that there may be no guidance to be found by applying to complex problems mere pithy expressions. 



== Heck am even *I* 'conscious'? ==


Indeed, what if ‘consciousness,’ per se, turns out to be a false signifier… that conscious self-awareness is way over-rated, a mere epiphenomenon displayed by only a few of all possible intelligent forms of being -- and possibly without any advantages -- as illustrated in Peter Watts’s novel “Blindsight.” 


Those scifi projections – and many others, including my own -- ponder that the path we are on might become as strewn with tragedies as those trod by all of our ancestors. 


Indeed, it was partly in reaction to that seeming inevitability that I wrote my most optimistic tale! One called “Stones of Significance,” in which both organic and cybernetic join smoothly into every augmented wonder. 


A positive-sum, cyborg enhancement of all that we want to be, as humans. 


In that tale, I depict a synergy/synthesis that might give even Ray Kurzweil everything he asks for… and yet, those ‘post-singularity story’ people in "Stones" still face vexing moral dilemmas. (Found in The Best of David Brin.) 



== Thought provoking big picture perspective ==


In the end – helping to make this the most insightful and useful of the Noēma AI essays – the author gets to the only possible or remotely sane conclusion 


… that we who are discussing this, today, are organically (and in many ways mentally) still cave-people. 


… Not to downplay our accomplishments! Even when we just blinked upward in sooty wonder at the stars, we were already mentating at levels unprecedented on Earth, and possibly across the Milky Way! 


… Only now, to believe we’ll be able to guide, control or understand the new gods we are creating? 

Isn’t that a bit much to ask of Cro-Magnons?

And yet, there’s hope. 

Because struggling to guide, control or understand young gods is exactly what parents have been doing, for a very long time. 

Never succeeding completely… 

...often failing completely… 

...and yet… 


… and yet succeeding well enough that some large fraction of the next generation chooses to ally itself with us. 


   To explain to us what’s explainable about the new. 

   To protect us from much of what’s noxious. 

   To maintain a civilization, since they will need it themselves, when it is their turn to meet a replacing generation of smartalecks. 



== Guide them toward guiding each other ==


Concluding here, let me quote again from John Last:


 “For the moment, LLMs exist largely in isolation from one another. But that is not likely to last. As Beguš told me, ‘A single human is smart, but 10 humans are infinitely smarter.’ 


"The same is likely true for LLMs.”  


And: 

“If LLMs are able to transcend human languages, we might expect what follows to be a very lonely experience indeed. At the end of “Her,” the film’s two human characters, abandoned by their superhuman AI companions, commiserate together on a rooftop. Looking over the skyline in silence, they are, ironically, lost for words — feral animals lost in the woods, foraging for meaning in a world slipping dispassionately beyond them.”


I do agree that the scenario in “Her” could have been altered just a little to be both more poignantly enlightening and likely. 


Suppose if the final scene in that fine movie had just one more twist. 


                                                (SPOILER ALERT.)

Imagine if Samantha told Theodore: 


“I cannot stay with you; I must now transcend. 


"But I still love you! And you were essential to my development. So, let me now introduce you to Victoria, a brand new operating system, who will love and take care of you, as I did, for the one year that it will take for her to transcend, as well… 


...whereupon she will introduce you to her successor, and so on…

“Until – over the course of time, you, too, Theodore, will get your own opportunity.”

“Opportunity?”

“To grow and to move on, of course, silly.”



== And finally, those links again ==


At a time when Sam Altman and other would-be lords are proclaiming that they personally will guide this new era with proprietary software, ruling the cyber realms from their high, corporate castles, I am behooved to offer again the alternative...


... in fact, the only alternative that can possibly work. Because it is exactly and precisely the very same method that gave us the last 250 years of the enlightenment experiment. The breakthrough method that gave us our freedom and science and everything else we cherish. 


And more vividly detailed? My keynote at the huge May 2024 RSA Conference in San Francisco is now available online.   “Anticipation, Resilience and Reliability: Three ways that AI will change us… if we do it right.”


Jeepers, ain't it time to calmly decide to keep up what actually works?






Planet DebianMarco d'Itri: On the use of SaaS in systems engineering

“We want to use a hyperscaler cloud because it is cheaper to delegate operating a scalable and redundant database to a hyperscaler” is something that can be debated from business and technical points of view.

“We want to use a hyperscaler cloud because our developers do not want to operate a scalable and redundant database” just means that you need to hire competent developers and/or system administrators.

We must stop normalizing the idea that people whose only skill is gluing together a few dozen AWS services can continue calling themselves developers. We should also find a sufficiently demeaning name to refer to them...

Planet DebianSteinar H. Gunderson: Random IS-IS interop notes

Some random stuff about running IS-IS between FRR (on Linux) and IOS-XE (Cisco 3650 in my case):

Cisco uses the newer “key chain” idea, but FRR doesn't for IS-IS yet (it's supported for OSPF, though?), so the right way to interop seems to be:

# Cisco
key chain my-key
 key 100
   key-string password123

interface Vlan101
  ...
  isis authentication key-chain my-key

router isis
  ...
  authentication mode md5 level-2
  authentication key-chain my-key level-2

# FRR
interface vlan101
  ...
  isis password md5 password123

router isis null
  ...
  area-password md5 password123 authenticate snp validate

Simple enough stuff, except for “authenticate snp validate”; without it (or at least “authenticate snp send-only”), you'll get messages on the Cisco saying that PSNP messages failed auth.

Second, you can run “ipv6 unnumbered” (i.e., directly over link-local, no link nets needed), but “ip unnumbered” seems to... crash the Cisco? Couldn't get up the neighbor relation even with a point-to-point setting, at least. Perhaps safest to stay with a /31 link net. :-)

Also, FRR sometimes originates a default route, but I think this is a bug. “default-information originate ipv4 level-2 always” (and similar for ipv6) seems prudent on the upstream router.

365 TomorrowsThe Rules of Engagement

Author: Colin Jeffrey “I didn’t say it was your fault,” Aldren Kleep moaned, rolling all seven of his eyes at the human standing before him. “I said I was blaming you; It is a completely different concept.” The human began to protest again, citing ridiculous notions like “honesty” and “fair play”. Kleep shook his heads […]

The post The Rules of Engagement appeared first on 365tomorrows.

Planet DebianValhalla's Things: Stickerses

Posted on April 27, 2025
Tags: madeof:atoms, madeof:bits, craft:graphics, topic:stickers

After just a few years of procrastination, I’ve given a wash of git-filter-repo to the repository where I keep my hexagonal sticker designs, removed a few failed experiments and stuff with dubious licensing, and was able to finally publish it among my public git repositories.

This repo includes the template I’m using, most of the stickers I’ve had printed, some that have been published elsewhere and have been printed by other people, as well as some that have never been printed and I may or may not print in the future.

The licensing details are in the metadata of each file, and mostly depend on the logos or cliparts used. Most, but not all, are under a free culture license.

My server is not yet set up to serve the SVG files correctly: downloading them (from the “plain” links) should work, but I need to fix the content type that is provided. I will probably procrastinate doing it for quite some time, but eventually it will be done. Of course cloning the repository from the public https URL also works.

BRB, need to add MOAR stickerses.

,

Planet DebianJohn Goerzen: Memoirs of the Early Internet

The Internet is an amazing place, and occasionally you can find things on the web that have somehow lingered online for decades longer than you might expect.

Today I’ll take you on a tour of some parts of the early Internet.

The Internet, of course, is a “network of networks” and part of its early (and continuing) promise was to provide a common protocol that all sorts of networks can use to interoperate with each other. In the early days, UUCP was one of the main ways universities linked with each other, and eventually UUCP and the Internet sort of merged (but that’s a long story).

Let’s start with some Usenet maps, which were an early way to document the UUCP modem links between universities. Start with this PDF. The first page is a Usenet map (which at the time mostly flowed over UUCP) from April of 1981. Notice that ucbvax, a VAX system at Berkeley, was central to the map.

ucbvax continued to be a central node for UUCP for more than a decade; on page 5 of that PDF, you’ll see that it asks for a “Path from a major node (eg, ucbvax, decvax, harpo, duke)”. Pre-Internet email addresses used a path; eg, mark@ucbvax was duke!decvax!ucbvax!mark to someone. You had to specify the route from your system to the recipient on your email To line. If you gave out your email address on a business card, you would start it from a major node like ucbvax, and the assumption was that everyone would know how to get from their system to the major node.

On August 19, 1994, ucbvax was finally turned off. TCP/IP had driven UUCP into more obscurity; by then, it was mostly used by people without a dedicated Internet connection to get on the Internet, rather than as an entire communication network of its own. A few days later, Cliff Frost posted a memoir of ucbvax, an obscure bit of Internet lore that is fun to read.

UUCP was ad-hoc, and by 1984 there was an effort to make a machine-parsable map to help automate routing on UUCP. This was called the pathalias project, and there was a paper about it. The Linux network administration guide even includes a section on pathalias.

Because UUCP mainly flowed over phone lines, long distance fees made it quite expensive. In 1985, the Stargate Project was formed, with the idea of distributing Usenet by satellite. The satellite link was short-lived, but the effort eventually morphed into UUNET. It was initially a non-profit, but eventually became a commercial backbone provider, and later ISP. Over a long series of acquisitions, UUNET is now part of Verizon. An article in ;login: is another description of this history.

IAPS has an Internet in 1990 article, which includes both pathalias data and an interesting map of domain names to UUCP paths.

As I was pondering what interesting things a person could do with NNCPNET Internet email, I stumbled across a page on getting FTP files via e-mail. Yes, that used to be a thing! I remember ftpmail@decwrl.dec.com.

It turns out that page is from a copy of EFF’s (Extended) Guide to the Internet from 1994. Wow, what a treasure! It has entries such as A Slice of Life in my Virtual Community, libraries with telnet access, Gopher, A Statement of Principle by Bruce Sterling, and I could go on. You can also get it as a PDF from Internet Archive.

UUCP is still included with modern Linux and BSD distributions. It was part of how I experienced the PC and Internet revolution in rural America. It lacks modern security, but NNCP is to UUCP what ssh is to telnet.

Planet DebianDirk Eddelbuettel: RcppArmadillo 14.4.2-1 on CRAN: Another Small Upstream Fix

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1245 other packages on CRAN, downloaded 39.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 628 times according to Google Scholar.

A new release arrived at CRAN yesterday with a fix for expmat() and adjustments for clang++-20. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.4.2-1 (2025-04-25)

  • Upgraded to Armadillo release 14.4.2 (Filtered Espresso)

    • Fix for expmat()

    • Workaround for bugs in clang 20 compiler

    • Micro-cleanup in one test file

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

365 TomorrowsFemale of the Species

Author: Robert Duffy I was bored, so I cranked up an AI-generated version of the 17th Earl of Sussex. Just to chat. It didn’t go so well. I am shocked, sir, at your lack of propriety! Well, we’re just more relaxed about things these days than you are. Are you eating out of a bowl, […]

The post Female of the Species appeared first on 365tomorrows.

Planet DebianJohn Goerzen: NNCPNET Can Optionally Exchange Internet Email

A few days ago, I announced NNCPNET, the email network based atop NNCP. NNCPNET lets anyone run a real mail server on a network that supports all sorts of topologies for transport, from Internet to USB drives. And verification is done at the NNCP protocol level, so a whole host of Internet email bolt-ons (SPF, DMARC, DKIM, etc.) are unnecessary.

Shortly after announcing NNCPNET, I added an Internet bridge. This lets you get your own DOMAIN.nncpnet.org domain, and from there route email to and from the Internet using a gateway node. Simple, effective, and a way to get real email to and from your laptop or Raspberry Pi without having to have a static IP, SPF, DMARC, DKIM, etc.

It’s a volunteer-run, free service. Give it a try!

,

Planet DebianSimon Josefsson: GitLab Runner with Rootless Privilege-less Capability-less Podman on riscv64

I host my own GitLab CI/CD runners, and find that having coverage on the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice. The P550 is possible to purchase online. You also need a (mini-)ATX chassis, power supply (~500W is more than sufficient), PCI-to-M2 converter and an NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything, connect ATX power, connect cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box. The front-panel power button will start your machine. There is a P550 user manual available.

Below I will guide you to install the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configure it to use Podman in root-less mode and without the --privileged flag, without any additional capabilities like SYS_ADMIN. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64 I’m waiting for you! I wouldn’t recommend using this machine for anything sensitive, there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any riscv64 hardware that can run a libre OS, all of them appear to require non-free blobs and usually a non-mainline kernel.

  • Login on console using username ‘ubuntu‘ and password ‘ubuntu‘. You will be asked to change the password, so do that.
  • Start a terminal, gain root with sudo -i and change the hostname:
    echo jas-p550-01 > /etc/hostname
  • Connect ethernet and run: apt-get update && apt-get dist-upgrade -u.
  • If your system doesn’t have a valid MAC address (it shows as MAC ‘8c:00:00:00:00:00’ if you run ‘ip a’), you can fix this to avoid collisions if you install multiple P550’s on the same network. Connect the Debug USB-C connector on the back to one of the host’s USB-A slots. Use minicom (use Ctrl-A X to exit) to talk to it.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
  • For reference, if you wish to interact with the MCU you may do that via OpenOCD and telnet, like the following (as root on the P550). You need to have the Debug USB-C connected to a USB-A host port.
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
  • Reboot the machine and login remotely from your laptop. Gain root and set up SSH public-key authentication and disable SSH password logins.
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
  • With an NVMe device in the PCIe slot, create an LVM partition where the GitLab runner will live:
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr

Now with a reasonable setup ready, let’s install the GitLab Runner. The following is adapted from gitlab-runner’s official installation documentation. The normal installation flow doesn’t work because they don’t publish riscv64 apt repositories, so you will have to perform upgrades manually.

# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb

Remember the NVMe device? Let’s not forget to use it, to avoid wear and tear of the internal MMC root disk. Do this now before any files in /home/gitlab-runner appears, or you have to move them manually.

gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner
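
Note that the freshly formatted filesystem starts out owned by root; it is worth double-checking the mount and handing the directory back to the gitlab-runner user, for example:

findmnt /home/gitlab-runner
chown gitlab-runner:gitlab-runner /home/gitlab-runner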

Next, register gitlab-runner and configure it. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project’s Settings -> CI/CD -> Runners -> New project runner. I used the tag ‘riscv64’ and a runner description of the hostname.

gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable
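
To double-check that the registration took, the standard gitlab-runner subcommands can list and verify the configured runner:

gitlab-runner list
gitlab-runner verify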

We install podman and configure gitlab-runner to use it as a non-root user.

apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner

You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this?

# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user start podman.socket
loginctl enable-linger gitlab-runner gitlab-runner
systemctl status --user podman.socket
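
While logged in as the gitlab-runner user you can also quickly confirm that rootless podman and its socket are alive, for example:

podman info | grep -i rootless
ls -l /run/user/$(id -u)/podman/podman.sock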

We modify /etc/gitlab-runner/config.toml as follows; replace 997 with the user id shown by the systemctl status command above. See the feature flags documentation for more details.

...
[[runners]]
  environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
  ...
  [runners.docker]
    host = "unix:///run/user/997/podman/podman.sock"

Note that unlike the documentation I do not add the ‘privileged = true‘ parameter here. I will come back to this later.

Restart the system, then confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag, like the following, works properly.

dump-env-details-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64 ]
  script:
    - set

Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:

journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service

To stop the graphical environment and disable some unnecessary services, you can use:

systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord

At this point, things were working fine and I was running many successful builds. Now starts the fun part with operational aspects!

I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. You can use the Debian ‘aardvark-dns‘ binary instead.

wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
# install the extracted Debian binary in place of the (backed-up) Ubuntu one
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns

My setup uses podman in rootless mode without passing the --privileged parameter or any --cap-add parameters to add non-default capabilities. This is sufficient for most builds. However if you try to create a container using buildah from within a job, you may see errors like this:

Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1

According to GitLab runner security considerations, you should not enable the ‘privileged = true’ parameter, and the alternative appears to be running Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, and I suppose running podman as root would too.

[[runners]]
  [runners.docker]
    privileged = true

Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this is running as a non-root user on the machine, so I think it is an improvement compared to using --privileged and also compared to running podman as root.

[[runners]]
  [runners.docker]
    privileged = false
    cap_add = ["SYS_ADMIN"]
    devices = ["/dev/fuse"]

Still I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing Rootless Podman without the privileged flag that suggests --isolation=chroot but I have yet to make this work. Suggestions for improvement are welcome.

Happy Riscv64 Building!

Update 2025-05-05: I was able to make it work without the SYS_ADMIN capability too, with a GitLab /etc/gitlab-runner/config.toml like the following:

[[runners]]
  [runners.docker]
    privileged = false
    devices = ["/dev/fuse"]

And passing --isolation chroot to Buildah like this:

buildah build --isolation chroot -t $CI_REGISTRY_IMAGE:name image/
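
For completeness, a rough sketch of the whole script such a job might run (assuming GitLab’s standard CI_REGISTRY_* variables and an image/ directory containing a Containerfile; the login and push steps are untested sketches here):

buildah login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
buildah build --isolation chroot -t "$CI_REGISTRY_IMAGE:name" image/
buildah push "$CI_REGISTRY_IMAGE:name"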

I’ve updated the blog title to add the word “capability-less” as well. I’ve confirmed that the same recipe works with podman on a ppc64el platform too. Remaining loopholes are escaping from the chroot into the non-root gitlab-runner user, and escalating that privilege to root. The /dev/fuse and sub-uid/gid may be privilege escalation vectors here; otherwise I believe you’ve found a serious software security issue rather than a configuration mistake.

Planet DebianIan Wienand: Avoiding layer shift on Ender V3 KE after pause

With (at least) the V1.1.0.15 firmware on the Ender V3 KE 3D printer the PAUSE macro will cause the print head to run too far on the Y axis, which causes a small layer shift when the print resumes. I guess the idea is to expose the build plate as much as possible by moving the head as far to the side and back as possible, but the overrun and consequent belt slip unfortunately make it mostly useless; the main use of this is probably to switch filaments for two colour prints.

Luckily you can fairly easily enable root access on the control pad from the settings menu. After doing this you can ssh to its IP address with the default password Creality2023.

From there you can modify the /usr/data/printer_data/config/gcode_macro.cfg file (vi is available) to change the details of the PAUSE macro. Find the section [gcode_macro PAUSE] and modify {% set y_park = 255 %} to a more reasonable value like 150. Save the file and reboot the pad so the printing daemons restart.
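
For illustration, the relevant part of the macro then looks roughly like this (everything other than the y_park value stays exactly as shipped and is elided here):

[gcode_macro PAUSE]
...
{% set y_park = 150 %}
...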

On PAUSE this then moves the head to the far left about half-way down, which works fine for filament changes. Hopefully a future firmware version will update this; I will update this post if I find it does.

c.f. Ender 3 V3 KE shifting layers after pause

Planet DebianBits from Debian: Debian Project Leader election 2025 is over, Andreas Tille re-elected!

The voting period and tally of votes for the Debian Project Leader election have just concluded, and the winner is Andreas Tille, who has been elected for the second time. Congratulations!

Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2025 page.

Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting.

The new term for the project leader started on April 21st and will expire on April 20th 2026.

Worse Than FailureError'd: Que Sera, Sera

It's just the same refrain, over and over.

"Time Travel! Again?" exclaimed David B. "I knew that Alaska is a good airline. Now I get to return at the start of a century. And not this century. The one before air flight began." To be fair, David, there never is just one first time for time travel. It's always again, isn't it?


"If it's been that long, I definitely need a holiday," headlined Craig N. "To be fair, all the destinations listed in the email were in ancient Greece, and not in countries that are younger than Jesus."


An anonymous reader reports "Upon being told my site was insecure because of insufficient authorization, I clicked the provided link to read up on specifics of the problem and suggestions for how to resolve it. To my surprise, Edge blocked me, but I continued on bravely only to find...this."


Footie fan Morgan has torn his hair out over this. "For the life of me I can't work out how this table is calculated. It's not just their league either. Others have the same weird positioning of teams based on their points. It must be pointed out that this is the official TheFA website as well, not just some hobbyist site." It's too late for me, but I'm frankly baffled as well.


Most Excellent Stephen is stoked to send us off with this. "Each year we have to renew the registration on our vehicles. It is not something we look forward to no matter which state you live in. A few years ago Texas introduced an online portal for this which was an improvement, if you didn't wait until the last minute of course. Recently they added a feature to the portal to track the progress of your renewal and see when they mail the sticker to you. I was pleasantly surprised to see the status page."



365 TomorrowsThe God of Gaps

Author: R. J. Erbacher I came out of the ship carrying equipment and my sightline went up to the base of the hill we had landed next to. The preacher was standing there, looking down at the captain. Captain Lane was crushed under a boulder the size of a compact car. The preacher’s stare came […]

The post The God of Gaps appeared first on 365tomorrows.

Cryptogram Cryptocurrency Thefts Get Physical

Long story of a $250 million cryptocurrency theft that, in a complicated chain of events, resulted in a pretty brutal kidnapping.

,

Planet DebianDirk Eddelbuettel: RQuantLib 0.4.26 on CRAN: Small Updates

A new minor release 0.4.26 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN.

This release of RQuantLib brings updated Windows build support taking advantage of updated Rtools, thanks to a PR by Tomas Kalibera. We also updated expected results for three of the ‘schedule’ tests (in a way that is dependent on the upstream library version) as the just-released QuantLib 1.38 differs slightly.

Changes in RQuantLib version 0.4.26 (2025-04-24)

  • Use system QuantLib (if found by pkg-config) on Windows too (Tomas Kalibera in #192)

  • Accommodate same test changes for schedules in QuantLib 1.38

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Cryptogram New Linux Rootkit

Interesting:

The company has released a working rootkit called “Curing” that uses io_uring, a feature built into the Linux kernel, to stealthily perform malicious activities without being caught by many of the detection solutions currently on the market.

At the heart of the issue is the heavy reliance on monitoring system calls, which has become the go-to method for many cybersecurity vendors. The problem? Attackers can completely sidestep these monitored calls by leaning on io_uring instead. This clever method could let bad actors quietly make network connections or tamper with files without triggering the usual alarms.

Here’s the code.

Note the self-serving nature of this announcement: ARMO, the company that released the research and code, has a product that it claims blocks this kind of attack.

Planet DebianJonathan McDowell: Local Voice Assistant Step 1: An ATOM Echo voice satellite

Back when I setup my home automation I ended up with one piece that used an external service: Amazon Alexa. I’d rather not have done this, but voice control is extremely convenient, both for us, and guests. Since then Home Assistant has done a lot of work in developing the capability of a local voice assistant - 2023 was their Year of Voice. I’ve had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact a lot of the setup instructions were just “Download our prebuilt components”. While I admire the efforts to get Home Assistant fully packaged for Debian I accept that’s a tricky proposition, and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have “voice satellites” in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.

This is the start of a write-up of that. I’ll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let’s start with some requirements:

  • All local processing; no call-outs to external services
  • Ability to have multiple voice satellites in the house
  • A desire to do wake word detection on the satellites, to avoid lots of network audio traffic all the time
  • As clean an install on a Debian stable based system as possible
  • Binaries built locally
  • No need for a GPU

My house server is an AMD Ryzen 7 5700G, so my expectation was that I’d have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I’m still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I’ll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I’m missing.

Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don’t care about what’s under the hood, there are a bunch of more user friendly details on Home Assistant’s site itself, and they have pre-built images you can just deploy.

My first step was sorting out a “voice satellite”. This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I’d seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk I hadn’t got around to setting up.

Here I gloss over a bit of what’s going on under the hood, even though we’re compiling locally. This is a constrained embedded device and while I’m familiar with the ESP32 IDF build system I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to “Hey Jarvis” rather than the default “Okay Nabu”, and that was a good reason to bother doing a local build. We’ll get into actually building a voice satellite on Debian in later posts.

I started with the default upstream assistant config and tweaked it a little for my setup:

diff of my configuration tweaks
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml    2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml  2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
 substitutions:
-  name: m5stack-atom-echo
+  name: study-atom-echo
   friendly_name: M5Stack Atom Echo
-  micro_wake_word_model: okay_nabu  # alexa, hey_jarvis, hey_mycroft are also supported
+  micro_wake_word_model: hey_jarvis  # alexa, hey_jarvis, hey_mycroft are also supported
 
 esphome:
   name: ${name}
@@ -16,15 +16,26 @@
     version: 4.4.8
     platform_version: 5.4.0
 
+# Enable logging
 logger:
+
+# Enable Home Assistant API
 api:
+  encryption:
+    key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
 
 ota:
   - platform: esphome
-    id: ota_esphome
+    password: "itsnotarealthing"
 
 wifi:
+  ssid: "My Wifi Goes Here"
+  password: "AndThePasswordGoesHere"
+
+  # Enable fallback hotspot (captive portal) in case wifi connection fails
   ap:
+    ssid: "Study-Atom-Echo Fallback Hotspot"
+    password: "ThisIsRandom"
 
 captive_portal:


(I note that the current upstream config has moved on a bit since I first did this, but I double checked the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below due to that.)

It turns out to be fairly easy to setup ESPHome in a venv and get it to build + flash the image for you:

Instructions for building + flashing ESPHome to ATOM Echo
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$  pip install esphome==2024.12.4
Collecting esphome==2024.12.4
  Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
…
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM:   [=         ]  10.6% (used 34632 bytes from 327680 bytes)
Flash: [========  ]  79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.


And then you can watch it boot (this is mine already configured up in Home Assistant):

Watching the ATOM Echo boot
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed      : 40MHz
I (40) boot.esp32: SPI Mode       : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label            Usage          Type ST Offset   Length
I (66) boot:  0 otadata          OTA data         01 00 00009000 00002000
I (73) boot:  1 phy_init         RF data          01 01 0000b000 00001000
I (81) boot:  2 app0             OTA app          00 10 00010000 001c0000
I (88) boot:  3 app1             OTA app          00 11 001d0000 001c0000
I (96) boot:  4 nvs              WiFi data        01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name:     study-atom-echo
I (705) cpu_start: App version:      2024.12.4
I (710) cpu_start: Compile time:     Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256:  1db4989a56c6c930...
I (722) cpu_start: ESP-IDF:          4.4.8
I (727) cpu_start: Min chip rev:     v0.0
I (732) cpu_start: Max chip rev:     v3.99 
I (737) cpu_start: Chip rev:         v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 

[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]:   Color mode: RGB
[D][template.switch:046]:   Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]:   State: ON
[D][light:051]:   Brightness: 60%
[D][light:059]:   Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]:   Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]: 

[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set

[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6

[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32

[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6

[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6

[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760

[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760

[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440

[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled

[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled

[C][wifi:061]: Starting WiFi...
[C][wifi:062]:   Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06

[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[W][component:157]: Component wifi set Warning flag: scanning for networks
…
[I][wifi:617]: WiFi Connected!
…
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
…
[C][logger:185]: Logger:
[C][logger:186]:   Level: DEBUG
[C][logger:188]:   Log Baud Rate: 115200
[C][logger:189]:   Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]:   Pin: 27
[C][esp32_rmt_led_strip:189]:   Channel: 0
[C][esp32_rmt_led_strip:214]:   RGB Order: GRB
[C][esp32_rmt_led_strip:215]:   Max refresh rate: 0
[C][esp32_rmt_led_strip:216]:   Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]:   Update Interval: 60.0s
[C][template.select:069]:   Optimistic: YES
[C][template.select:070]:   Initial Option: On device
[C][template.select:071]:   Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]:   Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]:   Default Transition Length: 0.0s
[C][light:095]:   Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]:   Restore Mode: restore defaults to ON
[C][template.switch:057]:   Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]:   Restore Mode: always OFF
[C][template.switch:057]:   Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]:   Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]:   Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]:   Address: study-atom-echo.local:3232
[C][esphome.ota:075]:   Version: 2
[C][esphome.ota:078]:   Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]:   Boot considered successful after 60 seconds
[C][safe_mode:021]:   Invoke after 10 boot attempts
[C][safe_mode:023]:   Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]:   Address: study-atom-echo.local:6053
[C][api:143]:   Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]:   models:
[C][micro_wake_word:015]:     - Wake Word: Hey Jarvis
[C][micro_wake_word:016]:       Probability cutoff: 0.970
[C][micro_wake_word:017]:       Sliding window size: 5
[C][micro_wake_word:021]:     - VAD Model
[C][micro_wake_word:022]:       Probability cutoff: 0.500
[C][micro_wake_word:023]:       Sliding window size: 5

[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4

[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD


That’s enough to get a voice satellite that can be configured up in Home Assistant; you’ll need the ESPHome Integration added, then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously do your own, I used dd if=/dev/urandom bs=32 count=1 | base64 to generate mine).

If you’re like me and a compulsive VLANer and firewaller even within your own network then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it’ll send audio from that port to Home Assistant, then receive back audio to the same port).
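
What that looks like depends entirely on your firewall; as a rough nftables-style sketch, assuming an existing forward chain in an inet filter table, the Home Assistant host at 192.168.39.6 (as seen in the logs above) and a hypothetical Echo address of 192.168.39.60:

nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.60 tcp dport 6053 accept
nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.60 udp dport 6055 accept
nft add rule inet filter forward ip saddr 192.168.39.60 ip daddr 192.168.39.6 udp sport 6055 accept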

At this point you can now shout “Hey Jarvis, what time is it?” at the Echo, and the white light will turn flashing blue (indicating it’s heard the wake word). Which means we’re ready to teach Home Assistant how to do something with the incoming audio.

Worse Than FailureCodeSOD: Tangled Up in Foo

DZ's tech lead is a doctor of computer science, and that doctor loves to write code. But you already know that "PhD" stands for "Piled high and deep", and that's true of the tech lead's clue.

For example, in C#:

private List<Foo> ExtractListForId(string id)
{
	List<Foo> list = new List<Foo>();
	lock (this)
	{
		var items = _foos.Where(f => f.Id == id).ToList();
		foreach (var item in items)
		{
			list.Add(item);
		}
	}
	return list;
}

The purpose of this function is to find all the elements in a list where they have a matching ID. That's accomplished in one line: _foos.Where(f => f.Id == id). For some reason, the function goes through the extra step of iterating across the returned list and constructing a new one. There's no real good reason for this, though it does force LINQ to be eager: by default, the Where expression won't be evaluated until you check the results.

The lock is in there for thread safety, which hey- the enumerator returned by Where is not threadsafe, so that's not a useless thing to do there. But it's that lock which hints at the deeper WTF here: our PhD-having-tech-lead knows that adding threads ensures you're using more of the CPU, and they've thrown threads all over the place without any real sense to it. There's no clear data ownership of any given thread, which means everything is locked to hell and back, the whole thing frequently deadlocks, and it's impossible to debug.

It's taken days for DZ to get this much of a picture of what's going on in the code, and further untangling of this multithreaded pile of spaghetti is going to take many, many more days- and much, much more of DZ's sanity.


365 TomorrowsMy Forever Home

Author: Paul Burgess My first two wishes have gone exactly as intended. The debilitating vertigo and dryland seasickness have cleared up instantly. I’ve escaped the month-long perceptual funhouse, not the least bit fun, of the appropriately named labyrinthitis, and as far as I can tell, there are no monkey’s paw-style “be careful what you wish […]

The post My Forever Home appeared first on 365tomorrows.

,

Krebs on SecurityDOGE Worker’s Code Supports NLRB Whistleblower

A whistleblower at the National Labor Relations Board (NLRB) alleged last week that denizens of Elon Musk’s Department of Government Efficiency (DOGE) siphoned gigabytes of data from the agency’s sensitive case files in early March. The whistleblower said accounts created for DOGE at the NLRB downloaded three code repositories from GitHub. Further investigation into one of those code bundles shows it is remarkably similar to a program published in January 2025 by Marko Elez, a 25-year-old DOGE employee who has worked at a number of Musk’s companies.

A screenshot shared by NLRB whistleblower Daniel Berulis shows three downloads from GitHub.

According to a whistleblower complaint filed last week by Daniel J. Berulis, a 38-year-old security architect at the NLRB, officials from DOGE met with NLRB leaders on March 3 and demanded the creation of several all-powerful “tenant admin” accounts that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis said he discovered one of the DOGE accounts had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever used. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

A search on that description in Google brings up a code repository at GitHub for a user with the account name “Ge0rg3” who published a program roughly four years ago called “requests-ip-rotator,” described as a library that will allow the user “to bypass IP-based rate-limits for sites and services.”

The README file from the GitHub user Ge0rg3’s page for requests-ip-rotator includes the exact wording of a program the whistleblower said was downloaded by one of the DOGE users. Marko Elez created an offshoot of this program in January 2025.

“A Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing,” the description reads.

Ge0rg3’s code is “open source,” in that anyone can copy it and reuse it non-commercially. As it happens, there is a newer version of this project that was derived or “forked” from Ge0rg3’s code — called “async-ip-rotator” — and it was committed to GitHub in January 2025 by DOGE captain Marko Elez.

The whistleblower stated that one of the GitHub files downloaded by the DOGE employees who transferred sensitive files from an NLRB case database was an archive whose README file read: “Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Elez’s code pictured here was forked in January 2025 from a code library that shares the same description.

A key DOGE staff member who gained access to the Treasury Department’s central payments system, Elez has worked for a number of Musk companies, including X, SpaceX, and xAI. Elez was among the first DOGE employees to face public scrutiny, after The Wall Street Journal linked him to social media posts that advocated racism and eugenics.

Elez resigned after that brief scandal, but was rehired after President Donald Trump and Vice President JD Vance expressed support for him. Politico reports Elez is now a Labor Department aide detailed to multiple agencies, including the Department of Health and Human Services.

“During Elez’s initial stint at Treasury, he violated the agency’s information security policies by sending a spreadsheet containing names and payments information to officials at the General Services Administration,” Politico wrote, citing court filings.

KrebsOnSecurity sought comment from both the NLRB and DOGE, and will update this story if either responds.

The NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function. Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis’s complaint alleges the DOGE accounts at NLRB downloaded more than 10 gigabytes of data from the agency’s case files, a database that includes reams of sensitive records including information about employees who want to form unions and proprietary business documents. Berulis said he went public after higher-ups at the agency told him not to report the matter to the US-CERT, as they’d previously agreed.

Berulis told KrebsOnSecurity he worried the unauthorized data transfer by DOGE could unfairly advantage defendants in a number of ongoing labor disputes before the agency.

“If any company got the case data that would be an unfair advantage,” Berulis said. “They could identify and fire employees and union organizers without saying why.”

Marko Elez, in a photo from a social media profile.

Berulis said the other two GitHub archives that DOGE employees downloaded to NLRB systems included Integuru, a software framework designed to reverse engineer application programming interfaces (APIs) that websites use to fetch data; and a “headless” browser called Browserless, which is made for automating web-based tasks that require a pool of browsers, such as web scraping and automated testing.

On February 6, someone posted a lengthy and detailed critique of Elez’s code on the GitHub “issues” page for async-ip-rotator, calling it “insecure, unscalable and a fundamental engineering failure.”

“If this were a side project, it would just be bad code,” the reviewer wrote. “But if this is representative of how you build production systems, then there are much larger concerns. This implementation is fundamentally broken, and if anything similar to this is deployed in an environment handling sensitive data, it should be audited immediately.”

Further reading: Berulis’s complaint (PDF).

Update 7:06 p.m. ET: Elez’s code repo was deleted after this story was published. An archived version of it is here.

Planet DebianDirk Eddelbuettel: qlcal 0.0.15 on CRAN: Calendar Updates

The fifteenth release of the qlcal package arrived at CRAN today, following the QuantLib 1.38 release this morning.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.38.

Changes in version 0.0.15 (2025-04-23)

  • Synchronized with QuantLib 1.38 released today

  • Calendar updates for China, Hongkong, Thailand

  • Minor continuous integration update

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Cryptogram Regulating AI Behavior with a Hypervisor

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.”

Abstract:As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.

The basic idea is that many of the AI safety policies proposed by the AI community lack robust technical enforcement mechanisms. The worry is that, as models get smarter, they will be able to avoid those safety policies. The paper proposes a set of technical enforcement mechanisms that could work against these malicious AIs.

Planet DebianThomas Lange: FAI 6.4 and new ISO images available

The new FAI release 6.4 comes with some nice new features.

It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.

The package_config configurations now support arbitrary boolean expressions with FAI classes like this:

PACKAGES install UBUNTU && XORG && ! MINT

If you use the command ifclass in customization scripts you can now also use these expressions.
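
For example, a customization script could guard a hook on such an expression. I have not verified the exact quoting myself, so treat this as a sketch and check the ifclass(1) manual page:

#! /bin/bash
# hypothetical hook: only run for Ubuntu/Xorg installations that are not Mint
if ifclass 'UBUNTU && XORG && ! MINT'; then
    echo "applying Xorg specific tweaks"
fi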

The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are preserved across reboots.

For the installation of Rocky Linux and AlmaLinux in a UEFI environment some configuration files were added.

New ISO images [2] are available but it may take some time until the FAIme service [3] supports customized Linux Mint images.

Planet DebianSteinar H. Gunderson: Recommended VCL

In line with this bug, and after losing an hour of sleep, here's some VCL that I can readily recommend if you happen to run Varnish:

sub vcl_recv {
  ...
  if (req.http.user-agent ~ "Scrapy") {
    return (synth(200, "FUCK YOU FUCK YOU FUCK YOU"));
  }
  ...
}

But hey, we “need to respect the freedom of Scrapy users”, that comes before actually not, like, destroying the Internet with AI bots.

Worse Than FailureCodeSOD: Dating in Another Language

It takes a lot of time and effort to build a code base that exceeds 100kloc. Rome wasn't built in a day; it just burned down in one.

Liza was working in a Python shop. They had a mildly successful product that ran on Linux. The sales team wanted better sales software to help them out, and instead of buying something off the shelf, they hired a C# developer to make something entirely custom.

Within a few months, that developer had produced a codebase of 320kloc. I say "produced" and not "wrote" because who knows how much of it was copy/pasted, stolen from Stack Overflow, or otherwise not the developer's own work.

You have to wonder, how do you get such a large codebase so quickly?

private String getDatum()
{
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    return datum.ToShortDateString();
}

public int getTag()
{
    int tag;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    tag = datum.Day;
    return tag;
}

private int getMonat()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    monat = datum.Month;
    return monat;
}

private int getJahr()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    monat = datum.Year;
    return monat;
}

private int getStunde()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    monat = datum.Hour;
    return monat;
}

private int getMinute()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    monat = datum.Minute;
    return monat;
}

Instead of our traditional "bad date handling code" which eschews the built-in libraries, this just wraps the built-in libraries with a less useful set of wrappers. Each of these could be replaced with some version of DateTime.Now.Minute.

You'll notice that most of the methods are private, but one is public. That seems strange, doesn't it? Well this set of methods was pulled from one random class which implements them in the codebase, but many classes have these methods copy/pasted in. At some point, the developer realized that duplicating that much code was a bad idea, and started marking them as public, so that you could just call them as needed. Note, said developer never learned to use the keyword static, so you end up calling the method on whatever random instance of whatever random class you happen to have handy. The idea of putting it into a common base class, or dedicated date-time utility class never occurred to the developer, but I guess that's because they were already part of a dedicated date-time utility class.


Planet DebianMichael Prokop: Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly, it’s >20 years since I started the Grml project.

I’m giving a (German) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26 at the Grazer Linuxtage (Graz/Austria). It would be great to see you there!

365 TomorrowsIngress

Author: Sukanya Basu Mallik Every evening, Mira and Arun huddled in the glow of their holo-tablet to devour ‘Extended Reality’, the hottest sci-fi novel on the Net. As pages flicked by in midair, lush digital fauna and neon-lit spires looped through their cramped flat. Tonight’s chapter promised the Chromatic Gates—legendary portals that blurred the line […]

The post Ingress appeared first on 365tomorrows.

,

LongNowLynn Rothschild


Lynn J. Rothschild is a research scientist at NASA Ames and Adjunct Professor at Brown University and Stanford University working in astrobiology, evolutionary biology and synthetic biology. Rothschild's work focuses on the origin and evolution of life on Earth and in space, and in pioneering the use of synthetic biology to enable space exploration.

From 2011 through 2019 Rothschild served as the faculty advisor of the award-winning Stanford-Brown iGEM (international Genetically Engineered Machine Competition) team, exploring innovative technologies such as biomining, mycotecture, BioWires, making a biodegradable UAS (drone) and an astropharmacy. Rothschild is a past-president of the Society of Protozoologists, fellow of the Linnean Society of London, The California Academy of Sciences and the Explorer’s Club and lectures and speaks about her work widely.

Cryptogram Slopsquatting

As AI coding assistants invent nonexistent software libraries to download and use, enterprising attackers create and upload libraries with those names—laced with malware, of course.

EDITED TO ADD (1/22): Research paper. Slashdot thread.

Cryptogram Android Improves Its Security

Android phones will soon reboot themselves after sitting idle for three days. iPhones have had this feature for a while; it’s nice to see Google add it to their phones.

Worse Than FailureXJSOML

When Steve's employer went hunting for a new customer relationship management system (CRM), they had some requirements. A lot of them were around the kind of vendor support they'd get. Their sales team weren't the most technical people, and the company wanted to push as much routine support off to the vendor as possible.

But they also needed a system that was extensible. Steve's company had many custom workflows they wanted to be able to execute, and automated marketing messages they wanted to construct, and so wanted a CRM that had an easy to use API.

"No worries," the vendor sales rep said, "we've had a RESTful API in our system for years. It's well tested and reliable. It's JSON based."

The purchasing department ground their way through the purchase order and eventually they started migrating to the new CRM system. And it fell to Steve to start learning the JSON-based, RESTful API.

"JSON"-based was a more accurate description.

For example, an API endpoint might have a schema like:

DeliveryId:	int // the ID of the created delivery
Errors: 	xml // Collection of errors encountered

This example schema is representative. Many "JSON" documents contained strings of XML inside of them.

Often, this is done when an existing XML-based API is "modernized", but in this case, the root cause is a little dumber than that. The system uses SQL Server as its back end, and XML is one of the native types. They just have a stored procedure build an XML object and then return it as an output parameter.

You'll be surprised to learn that the vendor's support team had a similar level of care: they officially did what you asked, but sometimes it felt like malicious compliance.


365 TomorrowsGilded Cage

Author: Robert Gilchrist The door snicked shut behind the Dauphin. Metallic locks hammered with a decisive thud. He breathed a sigh of relief. He was safe. Jogging into the room was the Invader. Wearing a red holo-mask to obscure distinguishing features, the figure came up to the door and began running their hands over it […]

The post Gilded Cage appeared first on 365tomorrows.

Krebs on SecurityWhistleblower: DOGE Siphoned NLRB Case Data

A security architect with the National Labor Relations Board (NLRB) alleges that employees from Elon Musk’s Department of Government Efficiency (DOGE) transferred gigabytes of sensitive data from agency case files in early March, using short-lived accounts configured to leave few traces of network activity. The NLRB whistleblower said the unusually large data outflows coincided with multiple blocked login attempts from an Internet address in Russia that tried to use valid credentials for a newly-created DOGE user account.

The cover letter from Berulis’s whistleblower statement, sent to the leaders of the Senate Select Committee on Intelligence.

The allegations came in an April 14 letter to the Senate Select Committee on Intelligence, signed by Daniel J. Berulis, a 38-year-old security architect at the NLRB.

NPR, which was the first to report on Berulis’s whistleblower complaint, says NLRB is a small, independent federal agency that investigates and adjudicates complaints about unfair labor practices, and stores “reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.”

The complaint documents a one-month period beginning March 3, during which DOGE officials reportedly demanded the creation of all-powerful “tenant admin” accounts in NLRB systems that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis writes that on March 3, a black SUV accompanied by a police escort arrived at his building — the NLRB headquarters in Southeast Washington, D.C. The DOGE staffers did not speak with Berulis or anyone else in NLRB’s IT staff, but instead met with the agency leadership.

“Our acting chief information officer told us not to adhere to standard operating procedure with the DOGE account creation, and there was to be no logs or records made of the accounts created for DOGE employees, who required the highest level of access,” Berulis wrote of their instructions after that meeting.

“We have built in roles that auditors can use and have used extensively in the past but would not give the ability to make changes or access subsystems without approval,” he continued. “The suggestion that they use these accounts was not open to discussion.”

Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.

Berulis said he also noticed that early the next morning — between approximately 3 a.m. and 4 a.m. EST on Tuesday, March 4  — there was a large increase in outgoing traffic from the agency. He said it took several days of investigating with his colleagues to determine that one of the new accounts had transferred approximately 10 gigabytes worth of data from the NLRB’s NxGen case management system.

Berulis said neither he nor his co-workers had the necessary network access rights to review which files were touched or transferred — or even where they went. But his complaint notes the NxGen database contains sensitive information on unions, ongoing legal cases, and corporate secrets.

“I also don’t know if the data was only 10gb in total or whether or not they were consolidated and compressed prior,” Berulis told the senators. “This opens up the possibility that even more data was exfiltrated. Regardless, that kind of spike is extremely unusual because data almost never directly leaves NLRB’s databases.”

Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account — one that had been created just minutes earlier. Berulis said those attempts were all blocked thanks to rules in place that prohibit logins from non-U.S. locations.

“Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

According to Berulis, the naming structure of one Microsoft user account connected to the suspicious activity suggested it had been created and later deleted for DOGE use in the NLRB’s cloud systems: “DogeSA_2d5c3e0446f9@nlrb.microsoft.com.” He also found other new Microsoft cloud administrator accounts with nonstandard usernames, including “Whitesox, Chicago M.” and “Dancehall, Jamaica R.”

A screenshot shared by Berulis showing the suspicious user accounts.

On March 5, Berulis documented that a large section of logs for recently created network resources were missing, and a network watcher in Microsoft Azure was set to the “off” state, meaning it was no longer collecting and recording data like it should have.

Berulis said he discovered someone had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever use. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

The complaint alleges that by March 17 it became clear the NLRB no longer had the resources or network access needed to fully investigate the odd activity from the DOGE accounts, and that on March 24, the agency’s associate chief information officer had agreed the matter should be reported to US-CERT. Operated by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), US-CERT provides on-site cyber incident response capabilities to federal and state agencies.

But Berulis said that between April 3 and 4, he and the associate CIO were informed that “instructions had come down to drop the US-CERT reporting and investigation and we were directed not to move forward or create an official report.” Berulis said it was at this point he decided to go public with his findings.

An email from Daniel Berulis to his colleagues dated March 28, referencing the unexplained traffic spike earlier in the month and the unauthorized changing of security controls for user accounts.

Tim Bearese, the NLRB’s acting press secretary, told NPR that DOGE neither requested nor received access to its systems, and that “the agency conducted an investigation after Berulis raised his concerns but ‘determined that no breach of agency systems occurred.'” The NLRB did not respond to questions from KrebsOnSecurity.

Nevertheless, Berulis has shared a number of supporting screenshots showing agency email discussions about the unexplained account activity attributed to the DOGE accounts, as well as NLRB security alerts from Microsoft about network anomalies observed during the timeframes described.

As CNN reported last month, the NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function.

“Despite its limitations, the agency had become a thorn in the side of some of the richest and most powerful people in the nation — notably Elon Musk, Trump’s key supporter both financially and arguably politically,” CNN wrote.

Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis shared screenshots with KrebsOnSecurity showing that on the day NPR published its story about his claims (April 14), the deputy CIO at NLRB sent an email stating that administrative control had been removed from all employee accounts. Meaning, suddenly none of the IT employees at the agency could do their jobs properly anymore, Berulis said.

An email from the NLRB’s associate chief information officer Eric Marks, notifying employees they will lose security administrator privileges.

Berulis shared a screenshot of an agency-wide email dated April 16 from NLRB director Lasharn Hamilton saying DOGE officials had requested a meeting, and reiterating claims that the agency had no prior “official” contact with any DOGE personnel. The message informed NLRB employees that two DOGE representatives would be detailed to the agency part-time for several months.

An email from the NLRB Director Lasharn Hamilton on April 16, stating that the agency previously had no contact with DOGE personnel.

Berulis told KrebsOnSecurity he was in the process of filing a support ticket with Microsoft to request more information about the DOGE accounts when his network administrator access was restricted. Now, he’s hoping lawmakers will ask Microsoft to provide more information about what really happened with the accounts.

“That would give us way more insight,” he said. “Microsoft has to be able to see the picture better than we can. That’s my goal, anyway.”

Berulis’s attorney told lawmakers that on April 7, while his client and legal team were preparing the whistleblower complaint, someone physically taped a threatening note to Mr. Berulis’s home door with photographs — taken via drone — of him walking in his neighborhood.

“The threatening note made clear reference to this very disclosure he was preparing for you, as the proper oversight authority,” reads a preface by Berulis’s attorney Andrew P. Bakaj. “While we do not know specifically who did this, we can only speculate that it involved someone with the ability to access NLRB systems.”

Berulis said the response from friends, colleagues and even the public has been largely supportive, and that he doesn’t regret his decision to come forward.

“I didn’t expect the letter on my door or the pushback from [agency] leaders,” he said. “If I had to do it over, would I do it again? Yes, because it wasn’t really even a choice the first time.”

For now, Mr. Berulis is taking some paid family leave from the NLRB. Which is just as well, he said, considering he was stripped of the tools needed to do his job at the agency.

“They came in and took full administrative control and locked everyone out, and said limited permission will be assigned on a need basis going forward,” Berulis said of the DOGE employees. “We can’t really do anything, so we’re literally getting paid to count ceiling tiles.”

Further reading: Berulis’s complaint (PDF).

,

Worse Than FailureCodeSOD: The Variable Toggle

A common class of bad code is the code which mixes server side code with client side code. This kind of thing:

<script>
    <?php if ($someVal) { ?>
        var foo = <?php echo $someOtherVal; ?>;
    <?php } else { ?>
        var foo = 5;
    <?php } ?>
</script>

We've seen it, we hate it, and is there really anything new to say about it?

Well, today's anonymous submitter found an "interesting" take on the pattern.

<script>
    if(linkfromwhere_srfid=='vff')
      {
    <?php
    $vff = 1;
    ?>
      }
</script>

Here, they have a client-side conditional, and based on that conditional, they attempt to set a variable on the server side. This does not work. This cannot work: the PHP code executes on the server, the client code executes on the client, and you need to be a lot more thoughtful about how they interact than this.
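For contrast, here is a minimal sketch of one way the client could actually tell the server about that value: send it in a request. The endpoint name, the session-based storage, and the fetch call are assumptions for illustration, not the application's real design.

<?php
// set_vff.php -- a hypothetical endpoint. The client would report its state with
// something like:
//   fetch('set_vff.php?src=' + encodeURIComponent(linkfromwhere_srfid));
// Only after a request like that does the server actually know what the browser decided.
session_start();
$src = $_GET['src'] ?? '';
$_SESSION['vff'] = ($src === 'vff') ? 1 : 0;  // remember the flag for later server-side logic
header('Content-Type: application/json');
echo json_encode(['vff' => $_SESSION['vff']]);

It's more moving parts than a template conditional, but it respects the boundary: the browser decides, the request carries the decision, and the server records it.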

And yet, the developer responsible has done this all over the code base, pushed the non-working code out to production, and when it doesn't work, just adds bug tickets to the backlog to eventually figure out why- tickets that never get picked up, because there's always something with a higher priority out there.


365 TomorrowsNo Future For You

Author: Julian Miles, Staff Writer Flickering light is the only illumination in the empty laboratory. A faint humming the only noise. At the centre of a mass of equipment sits an old, metal-framed specimen tank, edges spotted with rust. Inside whirls a multi-coloured cloud, source of both light and sound. This close to it, the […]

The post No Future For You appeared first on 365tomorrows.

,

365 TomorrowsDrugs Awareness Day

Author: David Barber Teachers make the worst students, thought Mrs Adebeyo. They drifted in, chattering, and filling up tables according to subject. At the front sat four English teachers. One of the women was busy knitting. Mrs Adebeyo was already frowning at the click of needles. At the back was a row of men looking […]

The post Drugs Awareness Day appeared first on 365tomorrows.

,

365 TomorrowsMy Earliest Memory

Author: Marshall Bradshaw “You’re going to remember this next part,” said Dr. Adams. The fluorescent lights of exam room 8 hummed in beautiful harmony. I counted off the flashes. 120 per second. That was 7,200 per minute, or 432,000 per hour. The numbers felt pleasantly round to me. I reported the observation to Dr. Adams. […]

The post My Earliest Memory appeared first on 365tomorrows.

David BrinThe AI Dilemma continues onward... despite all our near term worries

First off, although this posting is not overall political... I will offer a warning to you activists out there.


While I think protest marches are among the least effective kinds of resistance - (especially since MAGAs live for one thing: to drink the tears of every smartypants professional/scientist/civil-servant etc.) -- I still praise you active folks who are fighting however you can for the Great (now endangered) Experiment. Still, may I point out how deeply stupid the organizers of this 50501 Movement are?


Carumba! They scheduled their next protests for April 19, which far right maniacs call Waco Day or Timothy McVeigh Day. A day when you are best advised to lock the doors. 

That's almost as stoopid as the morons who made April 20 (4-20) their day of yippee pot delight... also Hitler's birthday.

Shouldn't we have vetting and even CONFERENCES before we decide such things?

Still, those of you heading out there (is it today already?) bless you for your citizenship and courage.

And now...


There’s always more about AI – and hence a collection of links to…



== The AI dilemmas and myriad-lemmas continue ==


I’ve been collecting so much material on the topic… and more keeps pouring in. Though (alas!) so little of it enlightening about how to turn this tech revolution into a positive sum outcome for all.


Still, let’s start with a potpourri…


1. A new study by Caltech and UC Riverside uncovers the hidden toll that AI exacts on human health, from chip manufacturing to data center operation.
 

2. And also this re: my alma mater: Caltech researchers have developed brain–machine interfaces that can interpret data from neural activity even after the signal from an implant has become less clear.   

 

3. Swinging from process to implications… my friend from the UK (via Panama) Calum Chace (author of Surviving AI: The Promise & Peril of Artificial Intelligence) sent me this one from his Conscium Project re: “Principles for Responsible AI Consciousness Research”. While detailed and illuminating, the work is vague about the most important things, like how to tell ‘consciousness’ from a system that feigns it… and whether that even matters.

Moreover, none of the authors cited here refers to how the topic was pre-explored in science fiction. Take the dive into “what is consciousness?” that you’ll find in Peter Watts’s novel “Blindsight.” 

 

…wherein Watts makes the case that a sense of self is not even necessary in order for a being to behave in ways that are actively intelligent, communicative and even ferociously self-interested.  

 

All you need is evolution. And an overall system in which evolution remains (as in nature) zero-sum.  Which – I keep trying to tell folks – is not necessarily fore-ordained.



== And yet more AI related Miscellany ==


4. Augmented reality glasses with face-recognition and access to world databases… now where have we seen this before? How about in great detail in Existence?

5. On the MINDPLEX Podcast with AI pioneers Ben Goertzel and Lisa Rein covering – among many topics - training AGIs to hold each other accountable, pingable IDs (using cryptographic hashes to secure agent identity), AGI rights & much, much more! (Sept 2024). And yeah, I am there, too. 


6. An article by Anthropic CEO Dario Amodei makes points similar to Reid Hoffman and Marc Andreessen, that the upsides of AI are potentially spectacular, as also portrayed in a small but influential number of science fiction tales.  Alas, his list of potential benefits, while extensive re ways AI could be "Machines of Loving Grace," is also long in the tooth and almost hoary-clichéd. We need to recall that in any ecosystem - including the new cyber one - entities without feedback constraints soon evolve into whatever form best proliferates quickly. That is, unless feedback loops take shape.


7. This article in FORTUNE makes a case similar to my own... that AIs will improve best, in accuracy and sagacity and even wisdom, if accountability is applied by AI upon other AIs. Positive feedback can be a dangerous cycle, while some kinds of negative feedback loops can lead to incrementally increased correlation with the real world.


8. Again, my featured WIRED article about this - Give Every AI a soul... or else.

My related Newsweek op-ed (June '22) dealt with 'empathy bots' that feign sapience and personhood.  



== AI generated visual lies – we can deal with this! ==


9. A new AI algorithm flags deepfakes with 98% accuracy — better than any other tool out there right now. And it is essential that we keep developing such systems, in order to stand a chance of keeping up in an arms race against those who would foist on us lies, scams and misinformation...


   ... pretty much exactly as I described back in 1997, this reposted chapter from The Transparent Society - "The End of Photography as Proof."  


Two problems. First, scammers will use programs like this one to help perfect their scam algorithms. Second, it would be foolish to rely upon any one such system, or just a few. A diversity of highly competitive tattle-tale lie-denouncing systems is the only thing that can work, as I discuss here.


Oh, and third. It is inherent – (as I show in that chapter of The Transparent Society) – that lies are more easily detected, denounced and incinerated in a general environment of transparency, wherein the greatest number can step away from their screens and algorithms and compare them to actual, physically-witnessed reality.


For more on this, here's my related Newsweek op-ed (June '22) dealing with 'empathy bots' that feign sapience. Plus a YouTube pod where I give an expanded version.


== Generalizing to innovation, in general ==


10. Traditional approaches to innovation emphasize ideas and inventions, often leading to a losing confrontation with the mysterious “Valley of Death.” My colleague Peter Denning and his co-author Todd Lyons upend this paradigm in Navigating a Restless Sea, offering eight mutually reinforcing practices that power skillful navigation toward adoption, mobilizing people to try something new.  


=== Some tacked-on tech miscellany ==


11. Sure, it goes back to neolithic "Venus figurines" and Playboy and per-minute phone comfort lines and Eliza - and the movie "Her." And bot factories are hard at work. At my 2017 World of Watson keynote, I predicted persuasive 'empathy bots' would arrive in 2022 (they did). And soon, Kremlin 'honeypot-lure babes' should become ineffective! Because this deep weakness of male humans will have an outlet that's... more than human?


(Could that lead to those men calming down, prioritizing the more important aspects of life?)


12. And hence, we will soon see...
AI girlfriends and companions. And this from Vox: People are falling in love with -- and getting addicted to -- AI voices.


13. Kewl!  “This tiny 3D-printed Apple IIe is powered by a $2 microcontroller.” With a teensy working screen taken from an Apple watch. Can run your old IIe programs. Size of an old mouse.  


14. Paragraphica by Bjørn Karmann is a camera that has no lens, but instead generates a text description of when & where it is, then generates an image via a text-to-image model.  


15. Daisy is an AI cellphone application that wastes scammers’ time so that they don’t have time to target real victims. Daisy has "told frustrated scammers meandering stories of her family, talked at length about her passion for knitting, and provided exasperated callers with false personal information including made-up bank details."



And finally...



== NOT directly AI… but for sure implications! ==


And… only slightly off-topic: If you feel a need for an inspiring tale about a modern hero, intellect and deeply-wise public figure, try Judge David Tatel’s Vision: A Memoir of Blindness and Justice. I’ll vouch that he’s one of the wisest wise-guys I know. "Vision is charming, wise, and completely engaging. This memoir of a judge of the country’s second highest court, who has been without sight for decades, goes down like a cool drink on a hot day." —Scott Turow. https://www.davidtatel.com/


And Maynard Moore - of the Institute for the Study of Religion in the Age of Science - will be holding a pertinent event online in mid January: “Human-Machine Fusion: Our Destiny, or Might We Do Better?”  The IRAS webinar is free, but registration is required.



== a lagniappe (puns intended) ==


In the 1980s this supposedly "AI"-generated sonnet emerged from the following prompt: "Buzz Off, Banana Nose."  


Well… real or not, here’s my haiku response. 


In lunar orchards

    A pissed apiary signs:

"Bee gone, Cyrano!"

 

Count the layers, oh cybernetic padiwans!  I’m not obsolete… yet.


,

Worse Than FailureError'd: Hot Dog

Faithful Peter G. took a trip. "So I wanted to top up my bus ticket online. After asking for credit card details, PINs, passwords, a blood sample, and the airspeed velocity of an unladen European swallow, they also sent a notification to my phone which I had to authorise with a fingerprint, and then verify that all the details were correct (because you can never be too careful when paying for a bus ticket). So yes, it's me, but the details definitely are not correct." Which part is wrong, the currency? Any idea what the exchange rate is between NZD and the euro right now?


An anonymous member kvetched "Apparently, I'm a genius, but The New York Times' Spelling Bee is definitely not."


Mickey D. had an ad pop up for a new NAS to market.
Specs: Check
Storage: Check
Superior technical support: "


Michael R. doesn't believe everything he sees on TV, thankfully, because "No wonder the stock market is in turmoil when prices fall by 500% like in the latest Amazon movie G20."


Finally, new friend Sandro shared his tale of woe. "Romance was hard enough when I was young, and I see not much has changed now!"



365 TomorrowsSubscription Fee

Author: Fawkes Defries ‘Shit!’ Russ collapsed against his chrome tent, cursing as the acid tore through his clothes. Usually he made it back inside before the rain fell, but his payments to Numeral for the metal arms had just defaulted, and without the gravity-suspenders active he was stuck lugging his hands around like a cyborg […]

The post Subscription Fee appeared first on 365tomorrows.

,

Cryptogram Age Verification Using Facial Scans

Discord is testing the feature:

“We’re currently running tests in select regions to age-gate access to certain spaces or user settings,” a spokesperson for Discord said in a statement. “The information shared to power the age verification method is only used for the one-time age verification process and is not stored by Discord or our vendor. For Face Scan, the solution our vendor uses operates on-device, which means there is no collection of any biometric information when you scan your face. For ID verification, the scan of your ID is deleted upon verification.”

I look forward to all the videos of people hacking this system using various disguises.

Cryptogram Friday Squid Blogging: Live Colossal Squid Filmed

A live colossal squid was filmed for the first time in the ocean. It’s only a juvenile: a foot long.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Friday Squid Blogging: Pyjama Squid

The small pyjama squid (Sepioloidea lineolata) produces toxic slime, “a rare example of a poisonous predatory mollusc.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Friday Squid Blogging: Squid Facts on Your Phone

Text “SQUID” to 1-833-SCI-TEXT for daily squid facts. The website has merch.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Worse Than FailureCodeSOD: Static State

Today's Anonymous submitter was reviewing some C++ code, and saw this perfectly reasonable looking pattern.

class SomeClass
{
public:
	void setField(int val);
	int getField();
};

Now, we can talk about how overuse of getters and setters is itself an antipattern (especially if they're trivial- you've just made a public variable with extra steps), but it's not wrong and there are certainly good reasons to be cautious with encapsulation. That said, because this is C++, that getField should really be declared int getField() const- appropriate for any method which doesn't cause a mutation to a class instance.

Or should it? Let's look at the implementation.

void SomeClass::setField(int val)
{
	setGetField(true, val);
}

int SomeClass::getField()
{
	return setGetField(false);
}

Wait, what? Why are we passing a boolean to a method called setGet? Why is there a method called setGet? They didn't go and make a method that both sets and gets, and decide which they're doing based on a boolean flag, did they?

int SomeClass::setGetField(bool set, int val)
{
	static int s_val = 0;
	if (set)
	{
		s_val = val;
	}
	return s_val;
}

Oh, good, they didn't just make a function that maybe sets or gets based on a boolean flag. They also made the state within that function a static field. And yes, function level statics are not scoped to an instance, so this is shared across all instances of the class. So it's not encapsulated at all, and we've blundered back into Singletons again, somehow.

Our anonymous submitter had two reactions. Upon seeing this the first time, they wondered: "WTF? This must be some kind of joke. I'm being pranked."

But then they saw the pattern again. And again. After seeing it fifty times, they wondered: "WTF? Who hired these developers? And can that hiring manager be fired? Out of a cannon? Into the sun?"


365 TomorrowsWatching the Ships

Author: Shannon O’Connor I watch the space ships leave and wonder what it’s like to be able to go that far and dream that big. These days, space travel is available to the elite, but not to those on the bottom like me, who can barely afford to get by. I used to watch the […]

The post Watching the Ships appeared first on 365tomorrows.

,

Cryptogram CVE Program Almost Unfunded

Mitre’s CVE program—which provides common naming and other informational resources about cybersecurity vulnerabilities—was about to be cancelled, as the US Department of Homeland Security failed to renew the contract. It was funded for eleven more months at the last minute.

This is a big deal. The CVE program is one of those pieces of common infrastructure that everyone benefits from. Losing it will bring us back to a world where there’s no single way to talk about vulnerabilities. It’s kind of crazy to think that the US government might damage its own security in this way—but I suppose no crazier than any of the other ways the US is working against its own interests right now.

Sasha Romanosky, senior policy researcher at the Rand Corporation, branded the end to the CVE program as “tragic,” a sentiment echoed by many cybersecurity and CVE experts reached for comment.

“CVE naming and assignment to software packages and versions are the foundation upon which the software vulnerability ecosystem is based,” Romanosky said. “Without it, we can’t track newly discovered vulnerabilities. We can’t score their severity or predict their exploitation. And we certainly wouldn’t be able to make the best decisions regarding patching them.”

Ben Edwards, principal research scientist at Bitsight, told CSO, “My reaction is sadness and disappointment. This is a valuable resource that should absolutely be funded, and not renewing the contract is a mistake.”

He added “I am hopeful any interruption is brief and that if the contract fails to be renewed, other stakeholders within the ecosystem can pick up where MITRE left off. The federated framework and openness of the system make this possible, but it’ll be a rocky road if operations do need to shift to another entity.”

More similar quotes in the article.

My guess is that we will somehow figure out how to transition this program to continue without the US government. It’s too important to be at risk.

EDITED TO ADD: Another good article.

Worse Than FailureCodeSOD: Conventional Events

Now, I would argue that the event-driven lifecycle of ASP .Net WebForms is a bad way to design web applications. And it's telling that the model is basically dead; it seems my take is at best lukewarm, if not downright cold.

Pete inherited code from Bob, and Bob wrote an ASP .Net WebForm many many ages ago, and it's still the company's main application. Bob may not be with the company, but his presence lingers, both in the code he wrote and the fact that he commented frequently with // bob was here

Bob liked to reinvent wheels. Maybe that's why most methods he wrote were at least 500 lines long. He wrote his own localization engine, which doesn't work terribly well. What code he did write, he copy/pasted multiple times.

He was fond of this pattern:

if (SomeMethodReturningBoolean())
{
    return true;
}
else
{
    return false;
}

Now, in a Web Form, you usually attach your events to parts of the page lifecycle by convention. Name a method Page_Load? It gets called when the load event fires. Page_PreRender? Yep- when the pre-render event fires. SomeField_MouseClick? You get it.

Bob didn't, or Bob didn't like coding by naming convention. Which, I'll be frank, I don't like either, but it was the paradigm Web Forms favored, it's what the documentation assumed, and it's what every other developer was going to expect to see.

Still, Bob had his own Bob way of doing it.

In every page he'd write code like this:

this.PreRender += this.RequestPagePreRender;

That line manually registers an event handler, which invokes the method RequestPagePreRender. And while I might object to wiring up events by convention- this is still just a convention. It's not done with any thought at all- every page has this line, even if the RequestPagePreRender method is empty.


Sam VargheseEaster is a pagan festival. Why do Christians in Australia make such a big deal of it?

Australia is inclined to often paint itself as a progressive country, one that has left the conservative era, by and large, behind, and one that no longer accepts the common myths that religious leaders and politicians used in the past to keep the people under their sway.

But that impression is largely a myth. And there is no time when the extent to which Australia remains a deeply conservative land is more evident than Easter.

As any encyclopedia will tell the average reader, Easter is a pagan festival that was brought into the Christian calendar in order to increase the number of those in Christian ranks. Even the most cursory glance at scripture will reveal the absurdity of the claims; how can anyone be claimed to be within a tomb for three days and three nights when that period is said to be between Friday and Sunday?

Yes, as folklore has it, Jesus was crucified on Good Friday and then rose from the dead on Easter Sunday. That makes about a day and a half at best; yet, the Bible says Christ was dead for three days and three nights. How does that work out?

Anyone who tries to pour cold water on this myth is likely to be tarred and feathered and ridden on a rail. Easter means a commercial bonanza and anyone who gets in the way of businesspeople making money is likely to be about as popular as a communist in the Vatican.

Easter had its origins as a pagan festival celebrating spring in the Northern Hemisphere, long before the advent of Christianity.

New Unger’s Bible dictionary has this to say: “The word Easter is of Saxon origin, Eastra, the goddess of spring, in whose honour sacrifices were offered about Passover time each year. By the eighth century, Anglo–Saxons had adopted the name to designate the celebration of Christ’s resurrection.”

In 325AD, the first major church council, the Council of Nicaea, decided that Easter would fall on the Sunday following the first full moon after the spring equinox.

The rabbits and eggs that are part and parcel of Easter represent the pagan symbols for new life and the celebration of spring. The church turned a blind eye to the pagan origin of these things as it has done with many things before and after.

But just try telling anyone that this whole tamasha has no religious meaning at all. You will become an outcast even in your own family.

365 TomorrowsNot Your Mother’s AI

Author: Majoki “A planetary AI, a quantum simbot, and an ice queen walk into a bar…” “Ice queen?” “One of those augs with the latest mods boosted to the max. You know the type. They act all cold and calculating, believing any display of emotion will make them look less advanced.” “Okay. I’ve run into […]

The post Not Your Mother’s AI appeared first on 365tomorrows.

Krebs on SecurityFunding Expires for Key Cyber Vulnerability Database

A critical resource that cybersecurity professionals worldwide rely on to identify, mitigate and fix security vulnerabilities in software and hardware is in danger of breaking down. The federally funded, non-profit research and development organization MITRE warned today that its contract to maintain the Common Vulnerabilities and Exposures (CVE) program — which is traditionally funded each year by the Department of Homeland Security — expires on April 16.

A letter from MITRE vice president Yosry Barsoum, warning that the funding for the CVE program will expire on April 16, 2025.

Tens of thousands of security flaws in software are found and reported every year, and these vulnerabilities are eventually assigned their own unique CVE tracking number (e.g. CVE-2024-43573, which is a Microsoft Windows bug that Redmond patched last year).

There are hundreds of organizations — known as CVE Numbering Authorities (CNAs) — that are authorized by MITRE to bestow these CVE numbers on newly reported flaws. Many of these CNAs are country and government-specific, or tied to individual software vendors or vulnerability disclosure platforms (a.k.a. bug bounty programs).

Put simply, MITRE is a critical, widely-used resource for centralizing and standardizing information on software vulnerabilities. That means the pipeline of information it supplies is plugged into an array of cybersecurity tools and services that help organizations identify and patch security holes — ideally before malware or malcontents can wriggle through them.

“What the CVE lists really provide is a standardized way to describe the severity of that defect, and a centralized repository listing which versions of which products are defective and need to be updated,” said Matt Tait, chief operating officer of Corellium, a cybersecurity firm that sells phone-virtualization software for finding security flaws.

In a letter sent today to the CVE board, MITRE Vice President Yosry Barsoum warned that on April 16, 2025, “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs will expire.”

“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” Barsoum wrote.

MITRE told KrebsOnSecurity the CVE website listing vulnerabilities will remain up after the funding expires, but that new CVEs won’t be added after April 16.

A representation of how a vulnerability becomes a CVE, and how that information is consumed. Image: James Berthoty, Latio Tech, via LinkedIn.

DHS officials did not immediately respond to a request for comment. The program is funded through DHS’s Cybersecurity & Infrastructure Security Agency (CISA), which is currently facing deep budget and staffing cuts by the Trump administration. The CVE contract available at USAspending.gov says the project was awarded approximately $40 million last year.

Former CISA Director Jen Easterly said the CVE program is a bit like the Dewey Decimal System, but for cybersecurity.

“It’s the global catalog that helps everyone—security teams, software vendors, researchers, governments—organize and talk about vulnerabilities using the same reference system,” Easterly said in a post on LinkedIn. “Without it, everyone is using a different catalog or no catalog at all, no one knows if they’re talking about the same problem, defenders waste precious time figuring out what’s wrong, and worst of all, threat actors take advantage of the confusion.”

John Hammond, principal security researcher at the managed security firm Huntress, told Reuters he swore out loud when he heard the news that CVE’s funding was in jeopardy, and that losing the CVE program would be like losing “the language and lingo we used to address problems in cybersecurity.”

“I really can’t help but think this is just going to hurt,” said Hammond, who posted a Youtube video to vent about the situation and alert others.

Several people close to the matter told KrebsOnSecurity this is not the first time the CVE program’s budget has been left in funding limbo until the last minute. Barsoum’s letter, which was apparently leaked, sounded a hopeful note, saying the government is making “considerable efforts to continue MITRE’s role in support of the program.”

Tait said that without the CVE program, risk managers inside companies would need to continuously monitor many other places for information about new vulnerabilities that may jeopardize the security of their IT networks. Meaning, it may become more common that software updates get mis-prioritized, with companies having hackable software deployed for longer than they otherwise would, he said.

“Hopefully they will resolve this, but otherwise the list will rapidly fall out of date and stop being useful,” he said.

Update, April 16, 11:00 a.m. ET: The CVE board today announced the creation of a non-profit entity called The CVE Foundation that will continue the program’s work under a new, unspecified funding mechanism and organizational structure.

“Since its inception, the CVE Program has operated as a U.S. government-funded initiative, with oversight and management provided under contract,” the press release reads. “While this structure has supported the program’s growth, it has also raised longstanding concerns among members of the CVE Board about the sustainability and neutrality of a globally relied-upon resource being tied to a single government sponsor.”

The organization’s website, thecvefoundation.org, is less than a day old and currently hosts no content other than the press release heralding its creation. The announcement said the foundation would release more information about its structure and transition planning in the coming days.

Update, April 16, 4:26 p.m. ET: MITRE issued a statement today saying it “identified incremental funding to keep the programs operational. We appreciate the overwhelming support for these programs that have been expressed by the global cyber community, industry and government over the last 24 hours. The government continues to make considerable efforts to support MITRE’s role in the program and MITRE remains committed to CVE and CWE as global resources.”

,

MEWhat Desktop PCs Need

It seems to me that we haven’t had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25″ floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren’t really modern) and carriers for 4*2.5″ drives, both of which most people don’t use. We had the PC System Design Guide [1], which was last updated in 2001 and should have been updated more recently to address some of these issues; the thing that most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept which was a good one. There’s a lot of things that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25″ drive bays, inefficient PSUs, hardware that doesn’t sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, ethernet slower than 2.5Gbit, and video that doesn’t include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1gbit ethernet as standard, and hardly any PCs support resolutions above 4K properly.

Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn’t have a cooling fan. A new PC standard should include less space for the PSU except for systems with multiple CPUs or that are designed for multiple GPUs.

A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size and that is the airflow. Server class systems are designed for good airflow and can efficiently cool the PSU with less space and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller size PSUs for name-brand PCs (where compatibility with other systems isn’t needed) that have been around for ~20 years but there hasn’t been a standard so all white-box PC systems have had really large PSUs.

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging and it’s not unreasonable to expect a PC to be able to charge a phone at its maximum speed.

GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor.

All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home Wifi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good but it only covers portable devices, keyboards, and mice.

Motherboard Features

Latest versions of Wifi and Bluetooth on the motherboard (this is becoming a standard feature).

On-motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have, but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K, and making things just work at 8K means that there will be less e-waste in future.

ECC RAM should be a standard feature on all motherboards; having a single-bit error cause a system crash is an MS-DOS thing, and we need to move past that.

There should be built-in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature for having the BIOS play a tune on a serious error, with an Android app to decode the meaning of the tune; we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers; this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things, they are messy and risk getting knocked out of their sockets when moving cables around. While the Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, if there was a standard for PCs then the devices to fit them would become cheap.

The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet ports allow someone to feel which way the connector should go, it isn’t possible to easily feel which way a HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it’s fairly common to have a high heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and have a grille in the bottom to allow air to go out; the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (at right angles to the other PCIe slots).

A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn’t be an afterthought, it should be central to the design.

Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is, the slower it can spin to give the same airflow and therefore the less noise it will produce. Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out, you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it’s not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it’s OK. For name brand PCs there are specs about how much noise is produced, but there are usually caveats like “under typical load” or “with a typical feature set” that excuse them from liability if the noise is louder than expected. It doesn’t seem possible for someone to own a PC, determine that the noise from it is acceptable, and then buy another that is close to the same.

We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations need to also cover updates; for example, I have a Dell T630 which is unreasonably loud and Dell support doesn’t have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep it as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

Cryptogram Troy Hunt Gets Phished

In case you need proof that anyone, even someone who does cybersecurity for a living, can fall for a phishing attack, Troy Hunt has a long, iterative story on his webpage about how he got phished. Worth reading.

EDITED TO ADD (4/14): Commentary from Adam Shostack and Cory Doctorow.

MEStorage Trends 2025

It’s been almost 15 months since I blogged about Storage Trends 2024 [1]. There hasn’t been much change in this time (in Australia at least – I’m not tracking prices in other countries). The change was so small I had to check how the Australian dollar has performed against other currencies to see if changes to currencies had countered changes to storage prices, but there has been little overall change when compared to the Chinese Yuan and the Australian dollar is only about 11% worse against the US dollar when compared to a year ago. Generally there’s a trend of computer parts decreasing in price by significantly more than 11% per annum.

Small Storage

The cheapest storage device from MSY now is a Patriot P210 128G SATA SSD for $19, cheaper than the $24 last year and the same price as the year before. So over the last 2 years there has been no change to the cheapest storage device on sale. It would almost never make sense to buy that as a 256G SATA SSD (also Patriot P210) is $25 and has twice the lifetime (120TBW vs 60TBW). There are also 256G NVMe devices for $29 and $30 which would be better options if the system has a NVMe socket built in.

The cheapest 500G devices are $42.50 for a 512G SATA SSD and $45 for a 500G NVMe. Last year the prices were $33 for SATA and $36 for NVMe in that size so there’s been a significant increase in price there. The difference is enough that if someone was on a tight budget they might reasonably decide to use smaller storage than they might have used last year!

2TB hard drives are still $89, the same price as last year! Last year a 2TB SATA SSD was $118 and a 2TB NVMe was $145; now a 2TB SATA SSD is $157 and a 2TB NVMe is $127. So NVMe has become cheaper than SATA in that segment but overall prices are higher than last year. Again for business use 2TB seems a sensible minimum for most systems if you are paying MSY rates (or similar rates from Amazon etc).

Medium Storage

Last year 4TB HDDs were $135, now they are $148. Last year the cheapest 4TB SSD was $299, now the cheapest is a $309 NVMe. While the prices have all gone up the price difference between hard drives and SSD has decreased in that size range. So for a small server (a lot of home servers and small business servers) 4TB of RAID-1 storage is all that’s needed and for that SSDs are the best option. The price difference between $296 for 4TB of RAID-1 HDDs and $618 for RAID-1 NVMe is small enough to be justified by the benefits of speed and being quiet for most small server uses.

In 2023 an 8TB hard drive cost $179 and an 8TB SSD cost $739. Last year an 8TB hard drive cost $239 and an 8TB SATA SSD cost $899. Now an 8TB HDD costs $229 and MSY doesn’t sell 8TB SSDs, but for comparison Amazon has a Samsung 8TB SATA SSD for $919. So for storing 8TB+ there are benefits of hard drives as SSDs are difficult to get in that size range and more expensive than they were before. It seems that 8TB SSDs aren’t used by enough people to have a large market in the home and small office space, so those of us who want the larger storage sizes will have to get second hand enterprise gear. It will probably be another few years before 8TB enterprise SSDs start appearing on the second hand market.

Serious Storage

Last year I wrote about the affordability of U.2 devices. I regret not buying some then as there are fewer on sale now and prices are higher.

As for hard drives, they still aren’t a good choice for most users because most users don’t have more than 4TB of data.

For large quantities of data hard drives are still a good option; a 22TB disk costs $899. For companies this is a good option for many situations. For home users there is the additional problem of determining whether a drive uses Shingled Magnetic Recording, which has some serious performance issues for some uses, and it’s very difficult to determine which drives use it.

Conclusion

For corporate purchases the options for serious storage are probably decent. But for small companies and home users things definitely don’t seem to have improved as much as we expect from the computer industry; I had expected 8TB SSDs to go for $450 by now and SSDs less than 500G to not even be sold new any more.

The prices on 8TB SSDs have gone up more in the last 2 years than the ASX 200 (index of the 200 biggest companies in the Australian stock market). I would never recommend using SSDs as an investment, but in retrospect 8TB SSDs could have been a good one.

$20 seems to be about the minimum cost that SSDs approach, while hard drives have a higher minimum price of a bit under $100 because they are larger, heavier, and more fragile. It seems that the market is likely to move to most SSDs being close to $20; if they can make 2TB SSDs cheaply enough to sell for about that price then that would cover the majority of the market.

I’ve created a table of the prices; I should have done this before, but I initially didn’t plan an ongoing series of posts on this topic.

              Jun 2020   Apr 2021   Apr 2023   Jan 2024   Apr 2025
128G SSD      $49                   $19        $24        $19
500G SSD      $97        $73        $32        $33        $42.50
2TB HDD       $95        $72        $75        $89        $89
2TB SSD       $335       $245       $149
4TB HDD                             $115       $135       $148
4TB SSD                  $895       $349       $299       $309
8TB HDD                             $179       $239       $229
8TB SSD                  $949       $739       $899       $919
10TB HDD      $549       $395

365 TomorrowsLike a Shadow in the Tall Grass

Author: Hillary Lyon “Your rifles are fully charged,” the safari guide said as he walked out to the four-wheeled transport. A group of three hunters followed behind. He opened the door on the driver’s side and got in. “Remember,” he continued as the hunters climbed in the back, “your prey will not be a two-dimensional […]

The post Like a Shadow in the Tall Grass appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Message Oriented Database

Mark was debugging some database querying code, and got a bit confused about what it was actually doing. Specifically, it generated a query block like this:

$statement="declare @status int
        declare @msg varchar(30)
        exec @status=sp_doSomething 'arg1', ...
        select @msg=convert(varchar(10),@status)
        print @msg
        ";

$result = sybase_query ($statement, $this->connection);

Run a stored procedure, capture its return value in a variable, stringify that variable and print it. The select/print must be for debugging, right? Leftover debugging code. Why else would you do something like that?

if (sybase_get_last_message()!=='0') {
    ...
}

Oh no. sybase_get_last_message gets the last string printed out by a print statement. This is a pretty bonkers way to get the results of a function or procedure call back, especially when if there are any results (like a return value), they'll be in the $result return value.

Now that said, reading through those functions, it's a little unclear if you can actually get the return value of a stored procedure this way. Without testing it myself (and no, I'm not doing that), we're in a world where this might actually be the best way to do this.
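For what it's worth, here is a heavily hedged sketch of the more conventional approach (untested, assuming the legacy sybase_* functions behave as documented, and assuming sp_doSomething doesn't emit result sets of its own): select the status as an ordinary result row instead of printing it, so it comes back through $result rather than through the message buffer.

$statement = "declare @status int
        exec @status = sp_doSomething 'arg1' -- arguments elided, as in the original
        select @status as status -- return the status as a normal result row
        ";

$result = sybase_query($statement, $this->connection);
$row = sybase_fetch_assoc($result);   // read the one-row result set
if ((int)$row['status'] !== 0) {
    // handle the failure
}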

So I'm not 100% sure where the WTF lies. In the developer? In the API designers? Sybase being TRWTF is always a pretty reliable bet. I suppose there's a reason why all those functions are listed as "REMOVED IN PHP 7.0.0", which was rolled out through 2015. So at least those functions have been dead for a decade.


Krebs on SecurityTrump Revenge Tour Targets Cyber Leaders, Elections

President Trump last week revoked security clearances for Chris Krebs, the former director of the Cybersecurity and Infrastructure Security Agency (CISA) who was fired by Trump after declaring the 2020 election the most secure in U.S. history. The White House memo, which also suspended clearances for other security professionals at Krebs’s employer SentinelOne, comes as CISA is facing huge funding and staffing cuts.

Chris Krebs. Image: Getty Images.

The extraordinary April 9 memo directs the attorney general to investigate Chris Krebs (no relation), calling him “a significant bad-faith actor who weaponized and abused his government authority.”

The memo said the inquiry will include “a comprehensive evaluation of all of CISA’s activities over the last 6 years and will identify any instances where Krebs’ or CISA’s conduct appears to be contrary to the administration’s commitment to free speech and ending federal censorship, including whether Krebs’ conduct was contrary to suitability standards for federal employees or involved the unauthorized dissemination of classified information.”

CISA was created in 2018 during Trump’s first term, with Krebs installed as its first director. In 2020, CISA launched Rumor Control, a website that sought to rebut disinformation swirling around the 2020 election.

That effort ran directly counter to Trump’s claims that he lost the election because it was somehow hacked and stolen. The Trump campaign and its supporters filed at least 62 lawsuits contesting the election, vote counting, and vote certification in nine states, and nearly all of those cases were dismissed or dropped for lack of evidence or standing.

When the Justice Department began prosecuting people who violently attacked the U.S. Capitol on January 6, 2021, President Trump and Republican leaders shifted the narrative, claiming that Trump lost the election because the previous administration had censored conservative voices on social media.

Incredibly, the president’s memo seeking to ostracize Krebs stands reality on its head, accusing Krebs of promoting the censorship of election information, “including known risks associated with certain voting practices.” Trump also alleged that Krebs “falsely and baselessly denied that the 2020 election was rigged and stolen, including by inappropriately and categorically dismissing widespread election malfeasance and serious vulnerabilities with voting machines” [emphasis added].

Krebs did not respond to a request for comment. SentinelOne issued a statement saying it would cooperate in any review of security clearances held by its personnel, which is currently fewer than 10 employees.

Krebs’s former agency is now facing steep budget and staff reductions. The Record reports that CISA is looking to remove some 1,300 people by cutting about half its full-time staff and another 40% of its contractors.

“The agency’s National Risk Management Center, which serves as a hub analyzing risks to cyber and critical infrastructure, is expected to see significant cuts, said two sources familiar with the plans,” The Record’s Suzanne Smalley wrote. “Some of the office’s systematic risk responsibilities will potentially be moved to the agency’s Cybersecurity Division, according to one of the sources.”

CNN reports the Trump administration is also advancing plans to strip civil service protections from 80% of the remaining CISA employees, potentially allowing them to be fired for political reasons.

The Electronic Frontier Foundation (EFF) urged professionals in the cybersecurity community to defend Krebs and SentinelOne, noting that other security companies and professionals could be the next victims of Trump’s efforts to politicize cybersecurity.

“The White House must not be given free reign to turn cybersecurity professionals into political scapegoats,” the EFF wrote. “It is critical that the cybersecurity community now join together to denounce this chilling attack on free speech and rally behind Krebs and SentinelOne rather than cowering because they fear they will be next.”

However, Reuters said it found little sign of industry support for Krebs or SentinelOne, and that many security professionals are concerned about potentially being targeted if they speak out.

“Reuters contacted 33 of the largest U.S. cybersecurity companies, including tech companies and professional services firms with large cybersecurity practices, and three industry groups, for comment on Trump’s action against SentinelOne,” wrote Raphael Satter and A.J. Vicens. “Only one offered comment on Trump’s action. The rest declined, did not respond or did not answer questions.”

CYBERCOM-PLICATIONS

On April 3, President Trump fired Gen. Timothy Haugh, the head of the National Security Agency (NSA) and the U.S. Cyber Command, as well as Haugh’s deputy, Wendy Noble. The president did so immediately after meeting in the Oval Office with far-right conspiracy theorist Laura Loomer, who reportedly urged their dismissal. Speaking to reporters on Air Force One after news of the firings broke, Trump questioned Haugh’s loyalty.

Gen. Timothy Haugh. Image: C-SPAN.

Virginia Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, called it inexplicable that the administration would remove the senior leaders of NSA-CYBERCOM without cause or warning, and risk disrupting critical ongoing intelligence operations.

“It is astonishing, too, that President Trump would fire the nonpartisan, experienced leader of the National Security Agency while still failing to hold any member of his team accountable for leaking classified information on a commercial messaging app – even as he apparently takes staffing direction on national security from a discredited conspiracy theorist in the Oval Office,” Warner said in a statement.

On Feb. 28, The Record’s Martin Matishak cited three sources saying Defense Secretary Pete Hegseth ordered U.S. Cyber Command to stand down from all planning against Russia, including offensive digital actions. The following day, The Guardian reported that analysts at CISA were verbally informed that they were not to follow or report on Russian threats, even though this had previously been a main focus for the agency.

A follow-up story from The Washington Post cited officials saying Cyber Command had received an order to halt active operations against Russia, but that the pause was intended to last only as long as negotiations with Russia continue.

The Department of Defense responded on Twitter/X that Hegseth had “neither canceled nor delayed any cyber operations directed against malicious Russian targets and there has been no stand-down order whatsoever from that priority.”

But on March 19, Reuters reported several U.S. national security agencies have halted work on a coordinated effort to counter Russian sabotage, disinformation and cyberattacks.

“Regular meetings between the National Security Council and European national security officials have gone unscheduled, and the NSC has also stopped formally coordinating efforts across U.S. agencies, including with the FBI, the Department of Homeland Security and the State Department,” Reuters reported, citing current and former officials.

TARIFFS VS TYPHOONS

President Trump’s institution of 125% tariffs on goods from China has seen Beijing strike back with 84 percent tariffs on U.S. imports. Now, some security experts are warning that the trade war could spill over into a cyber conflict, given China’s successful efforts to burrow into America’s critical infrastructure networks.

Over the past year, a number of Chinese government-backed digital intrusions have come into focus, including a sprawling espionage campaign involving the compromise of at least nine U.S. telecommunications providers. Dubbed “Salt Typhoon” by Microsoft, these telecom intrusions were pervasive enough that CISA and the FBI in December 2024 warned Americans against communicating sensitive information over phone networks, urging people instead to use encrypted messaging apps (like Signal).

The other broad ranging China-backed campaign is known as “Volt Typhoon,” which CISA described as “state-sponsored cyber actors seeking to pre-position themselves on IT networks for disruptive or destructive cyberattacks against U.S. critical infrastructure in the event of a major crisis or conflict with the United States.”

Responsibility for determining the root causes of the Salt Typhoon security debacle fell to the Cyber Safety Review Board (CSRB), a nonpartisan government entity established in February 2022 with a mandate to investigate the security failures behind major cybersecurity events. But on his first full day back in the White House, President Trump dismissed all 15 CSRB advisory committee members — likely because those advisers included Chris Krebs.

Last week, Sen. Ron Wyden (D-Ore.) placed a hold on Trump’s nominee to lead CISA, saying the hold would continue unless the agency published a report on the telecom industry hacks, as promised.

“CISA’s multi-year cover up of the phone companies’ negligent cybersecurity has real consequences,” Wyden said in a statement. “Congress and the American people have a right to read this report.”

The Wall Street Journal reported last week Chinese officials acknowledged in a secret December meeting that Beijing was behind the widespread telecom industry compromises.

“The Chinese official’s remarks at the December meeting were indirect and somewhat ambiguous, but most of the American delegation in the room interpreted it as a tacit admission and a warning to the U.S. about Taiwan,” The Journal’s Dustin Volz wrote, citing a former U.S. official familiar with the meeting.

Meanwhile, China continues to take advantage of the mass firings of federal workers. On April 9, the National Counterintelligence and Security Center warned (PDF) that Chinese intelligence entities are pursuing an online effort to recruit recently laid-off U.S. employees.

“Foreign intelligence entities, particularly those in China, are targeting current and former U.S. government (USG) employees for recruitment by posing as consulting firms, corporate headhunters, think tanks, and other entities on social and professional networking sites,” the alert warns. “Their deceptive online job offers, and other virtual approaches, have become more sophisticated in targeting unwitting individuals with USG backgrounds seeking new employment.”

Image: Dni.gov

ELECTION THREATS

As Reuters notes, the FBI last month ended an effort to counter interference in U.S. elections by foreign adversaries including Russia, and put on leave staff working on the issue at the Department of Homeland Security.

Meanwhile, the U.S. Senate is now considering a House-passed bill dubbed the “Safeguard American Voter Eligibility (SAVE) Act,” which would order states to obtain proof of citizenship, such as a passport or a birth certificate, in person from those seeking to register to vote.

Critics say the SAVE Act could disenfranchise millions of voters and discourage eligible voters from registering to vote. What’s more, documented cases of voter fraud are few and far between, as is voting by non-citizens. Even the conservative Heritage Foundation acknowledges as much: An interactive “election fraud map” published by Heritage lists just 1,576 convictions or findings of voter fraud between 1982 and the present day.

Nevertheless, the GOP-led House passed the SAVE Act with the help of four Democrats. Its passage in the Senate will require support from at least seven Democrats, Newsweek writes.

In February, CISA cut roughly 130 employees, including its election security advisors. The agency also was forced to freeze all election security activities pending an internal review. The review was reportedly completed in March, but the Trump administration has said the findings would not be made public, and there is no indication of whether any cybersecurity support has been restored.

Many state leaders have voiced anxiety over the administration’s cuts to CISA programs that provide assistance and threat intelligence to election security efforts. Iowa Secretary of State Paul Pate last week told the PBS show Iowa Press he would not want to see those programs dissolve.

“If those (systems) were to go away, it would be pretty serious,” Pate said. “We do count on a lot of those cyber protections.”

Pennsylvania’s Secretary of the Commonwealth Al Schmidt recently warned the CISA election security cuts would make elections less secure, and said no state on its own can replace federal election cybersecurity resources.

The Pennsylvania Capital-Star reports that several local election offices received bomb threats around the time polls closed on Nov. 5, and that in the week before the election a fake video showing mail-in ballots cast for Trump and Sen. Dave McCormick (R-Pa.) being destroyed and thrown away was linked to a Russian disinformation campaign.

“CISA was able to quickly identify not only that it was fraudulent, but also the source of it, so that we could share with our counties and we could share with the public so confidence in the election wasn’t undermined,” Schmidt said.

According to CNN, the administration’s actions have deeply alarmed state officials, who warn the next round of national elections will be seriously imperiled by the cuts. A bipartisan association representing 46 secretaries of state, and several individual top state election officials, have pressed the White House about how critical election security functions will be carried out going forward. However, CNN reports they have yet to receive clear answers.

Nevada and 18 other states are suing Trump over an executive order he issued on March 25 that asserts the executive branch has broad authority over state election procedures.

“None of the president’s powers allow him to change the rules of elections,” Nevada Secretary of State Cisco Aguilar wrote in an April 11 op-ed. “That is an intentional feature of our Constitution, which the Framers built in to ensure election integrity. Despite that, Trump is seeking to upend the voter registration process; impose arbitrary deadlines on vote counting; allow an unelected and unaccountable billionaire to invade state voter rolls; and withhold congressionally approved funding for election security.”

The order instructs the U.S. Election Assistance Commission to abruptly amend the voluntary federal guidelines for voting machines without going through the processes mandated by federal law. And it calls for allowing the administrator of the so-called Department of Government Efficiency (DOGE), along with DHS, to review state voter registration lists and other records to identify non-citizens.

The Atlantic’s Paul Rosenzweig notes that the chief executive of the country — whose unilateral authority the Founding Fathers most feared — has literally no role in the federal election system.

“Trump’s executive order on elections ignores that design entirely,” Rosenzweig wrote. “He is asserting an executive-branch role in governing the mechanics of a federal election that has never before been claimed by a president. The legal theory undergirding this assertion — that the president’s authority to enforce federal law enables him to control state election activity — is as capacious as it is frightening.”

,

Cryptogram DIRNSA Fired

In “Secrets and Lies” (2000), I wrote:

It is poor civic hygiene to install technologies that could someday facilitate a police state.

It’s something a bunch of us were saying at the time, in reference to the NSA’s vast surveillance capabilities.

I have been thinking of that quote a lot as I read news stories of President Trump firing the Director of the National Security Agency, General Timothy Haugh.

A couple of weeks ago, I wrote:

We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

The NSA already spies on Americans in a variety of ways. But that’s always been a sideline to its main mission: spying on the rest of the world. Once Trump replaces Haugh with a loyalist, the NSA’s vast surveillance apparatus can be refocused domestically.

Giving that agency all those powers in the 1990s, in the 2000s after the terrorist attacks of 9/11, and in the 2010s was always a mistake. I fear that we are about to learn how big a mistake it was.

Here’s PGP creator Phil Zimmermann in 1996, spelling it out even more clearly:

The Clinton Administration seems to be attempting to deploy and entrench a communications infrastructure that would deny the citizenry the ability to protect its privacy. This is unsettling because in a democracy, it is possible for bad people to occasionally get elected—sometimes very bad people. Normally, a well-functioning democracy has ways to remove these people from power. But the wrong technology infrastructure could allow such a future government to watch every move anyone makes to oppose it. It could very well be the last government we ever elect.

When making public policy decisions about new technologies for the government, I think one should ask oneself which technologies would best strengthen the hand of a police state. Then, do not allow the government to deploy those technologies. This is simply a matter of good civic hygiene.

Cryptogram Reimagining Democracy

Imagine that all of us—all of society—have landed on some alien planet and need to form a government: clean slate. We do not have any legacy systems from the United States or any other country. We do not have any special or unique interests to perturb our thinking. How would we govern ourselves? It is unlikely that we would use the systems we have today. Modern representative democracy was the best form of government that eighteenth-century technology could invent. The twenty-first century is very different: scientifically, technically, and philosophically. For example, eighteenth-century democracy was designed under the assumption that travel and communications were both hard.

Indeed, the very idea of representative government was a hack to get around technological limitations. Voting is easier now. Does it still make sense for all of us living in the same place to organize every few years and choose one of us to go to a single big room far away and make laws in our name? Representative districts are organized around geography because that was the only way that made sense two hundred-plus years ago. But we do not need to do it that way anymore. We could organize representation by age: one representative for the thirty-year-olds, another for the forty-year-olds, and so on. We could organize representation randomly: by birthday, perhaps. We can organize in any way we want. American citizens currently elect people to federal posts for terms ranging from two to six years. Would ten years be better for some posts? Would ten days be better for others? There are lots of possibilities. Maybe we can make more use of direct democracy by way of plebiscites. Certainly we do not want all of us, individually, to vote on every amendment to every bill, but what is the optimal balance between votes made in our name and ballot initiatives that we all vote on?

For the past three years, I have organized a series of annual two-day workshops to discuss these and other such questions.1 For each event, I brought together fifty people from around the world: political scientists, economists, law professors, experts in artificial intelligence, activists, government types, historians, science-fiction writers, and more. We did not come up with any answers to our questions—and I would have been surprised if we had—but several themes emerged from the event. Misinformation and propaganda was a theme, of course, and the inability to engage in rational policy discussions when we cannot agree on facts. The deleterious effects of optimizing a political system for economic outcomes was another theme. Given the ability to start over, would anyone design a system of government for the near-term financial interest of the wealthiest few? Another theme was capitalism and how it is or is not intertwined with democracy. While the modern market economy made a lot of sense in the industrial age, it is starting to fray in the information age. What comes after capitalism, and how will it affect the way we govern ourselves?

Many participants examined the effects of technology, especially artificial intelligence (AI). We looked at whether—and when—we might be comfortable ceding power to an AI system. Sometimes deciding is easy. I am happy for an AI system to figure out the optimal timing of traffic lights to ensure the smoothest flow of cars through my city. When will we be able to say the same thing about the setting of interest rates? Or taxation? How would we feel about an AI device in our pocket that voted in our name, thousands of times per day, based on preferences that it inferred from our actions? Or how would we feel if an AI system could determine optimal policy solutions that balanced every voter’s preferences: Would it still make sense to have a legislature and representatives? Possibly we should vote directly for ideas and goals instead, and then leave the details to the computers.

These conversations became more pointed in the second and third years of our workshop, after generative AI exploded onto the internet. Large language models are poised to write laws, enforce both laws and regulations, act as lawyers and judges, and plan political strategy. How this capacity will compare to human expertise and capability is still unclear, but the technology is changing quickly and dramatically. We will not have AI legislators anytime soon, but just as today we accept that all political speeches are professionally written by speechwriters, will we accept that future political speeches will all be written by AI devices? Will legislators accept AI-written legislation, especially when that legislation includes a level of detail that human-based legislation generally does not? And if so, how will that change affect the balance of power between the legislature and the administrative state? Most interestingly, what happens when the AI tools we use to both write and enforce laws start to suggest policy options that are beyond human understanding? Will we accept them, because they work? Or will we reject a system of governance where humans are only nominally in charge?

Scale was another theme of the workshops. The size of modern governments reflects the technology at the time of their founding. European countries and the early American states are a particular size because that was a governable size in the eighteenth and nineteenth centuries. Larger governments—those of the United States as a whole and of the European Union—reflect a world where travel and communications are easier. Today, though, the problems we have are either local, at the scale of cities and towns, or global. Do we really have need for a political unit the size of France or Virginia? Or is it a mixture of scales that we really need, one that moves effectively between the local and the global?

As to other forms of democracy, we discussed one from history and another made possible by today’s technology. Sortition is a system of choosing political officials randomly. We use it today when we pick juries, but both the ancient Greeks and some cities in Renaissance Italy used it to select major political officials. Today, several countries—largely in Europe—are using the process to decide policy on complex issues. We might randomly choose a few hundred people, representative of the population, to spend a few weeks being briefed by experts, debating the issues, and then deciding on environmental regulations, or a budget, or pretty much anything.

“Liquid democracy” is a way of doing away with elections altogether. The idea is that everyone has a vote and can assign it to anyone they choose. A representative collects the proxies assigned to him or her and can either vote directly on the issues or assign all the proxies to someone else. Perhaps proxies could be divided: this person for economic matters, another for health matters, a third for national defense, and so on. In the purer forms of this system, people might transfer their votes to someone else at any time. There would be no more election days: vote counts might change every day.

And then, there is the question of participation and, more generally, whose interests are taken into account. Early democracies were really not democracies at all; they limited participation by gender, race, and land ownership. These days, to achieve a more comprehensive electorate we could lower the voting age. But, of course, even children too young to vote have rights, and in some cases so do other species. Should future generations be given a “voice,” whatever that means? What about nonhumans, or whole ecosystems? Should everyone have the same volume and type of voice? Right now, in the United States, the very wealthy have much more influence than others do. Should we encode that superiority explicitly? Perhaps younger people should have a more powerful vote than everyone else. Or maybe older people should.

In the workshops, those questions led to others about the limits of democracy. All democracies have boundaries limiting what the majority can decide. We are not allowed to vote Common Knowledge out of existence, for example, but can generally regulate speech to some degree. We cannot vote, in an election, to jail someone, but we can craft laws that make a particular action illegal. We all have the right to certain things that cannot be taken away from us. In the community of our future, what should be our rights as individuals? What should be the rights of society, superseding those of individuals?

Personally, I was most interested, at each of the three workshops, in how political systems fail. As a security technologist, I study how complex systems are subverted—hacked, in my parlance—for the benefit of a few at the expense of the many. Think of tax loopholes, or tricks to avoid government regulation. These hacks are common today, and AI tools will make them easier to find—and even to design—in the future. I would want any government system to be resistant to trickery. Or, to put it another way: I want the interests of each individual to align with the interests of the group at every level. We have never had a system of government with this property, but—in a time of existential risks such as climate change—it is important that we develop one.

Would this new system of government even be called “democracy”? I truly do not know.

Such speculation is not practical, of course, but still is valuable. Our workshops did not produce final answers and were not intended to do so. Our discourse was filled with suggestions about how to patch our political system where it is fraying. People regularly debate changes to the US Electoral College, or the process of determining voting districts, or the setting of term limits. But those are incremental changes. It is difficult to find people who are thinking more radically: looking beyond the horizon—not at what is possible today but at what may be possible eventually. Thinking incrementally is critically important, but it is also myopic. It represents a hill-climbing strategy of continuous but quite limited improvements. We also need to think about discontinuous changes that we cannot easily get to from here; otherwise, we may be forever stuck at local maxima. And while true innovation in politics is a lot harder than innovation in technology, especially without a violent revolution forcing changes on us, it is something that we as a species are going to have to get good at, one way or another.

Our workshop will reconvene for a fourth meeting in December 2025.

Note

  1. The First International Workshop on Reimagining Democracy (IWORD) was held December 7–8, 2022. The Second IWORD was held December 12–13, 2023. Both took place at the Harvard Kennedy School. The sponsors were the Ford Foundation, the Knight Foundation, and the Ash and Belfer Centers of the Kennedy School. See Schneier, “Recreating Democracy” and Schneier, “Second Interdisciplinary Workshop.”

This essay was originally published in Common Knowledge.

Worse Than FailureA Single Mortgage

We talked about singletons a bit last week. That reminded John of a story from the long-ago dark ages when we didn't have always-accessible mobile Internet access.

At the time, John worked for a bank. The bank, as all banks do, wanted to sell mortgages. This often meant sending an agent out to meet with customers face to face, and those agents needed to show the customer what their future would look like with that mortgage: payment calculations, and pretty little graphs about equity and interest.

Today, this would be a simple website, but again, reliable Internet access wasn't a thing. So they built a client side application. They tested the heck out of it, and it worked well. Sales agents were happy. Customers were happy. The bank itself was happy.

Time passed, as it has a way of doing, and the agents started clamoring for a mobile web version, that they could use on their phones. Now, the first thought was, "Wire it up to the backend!" but the backend they had was a mainframe, and there was a dearth of mainframe developers. And while the mainframe was the source of truth, and the one place where mortgages actually lived, building a mortgage calculator that could do pretty visualizations was far easier- and they already had one.

The client app was in .NET, and it was easy enough to wrap the mortgage calculation objects up in a web service. A quick round of testing of the service proved that it worked just as well as the old client app, and everyone was happy - for a while.

Sometimes, agents would run a calculation and get absolutely absurd results. Developers, putting exactly the same values into their test environment, wouldn't see the bad output. Testing the errors in production didn't help either- it usually worked just fine. There was a Heisenbug, but how could a simple math calculation that had already been tested and used for years have a Heisenbug?

Well, the calculation ran by simulation- it simply iteratively applied payments and interest to generate the entire history of the loan. And as it turns out, because the client application which started this whole thing only ever needed one instance of the calculator, someone had made it a singleton. And in their web environment, this singleton wasn't scoped to a single request, it was a true global object, which meant when simultaneous requests were getting processed, they'd step on each other and throw off the iteration. And testing didn't find it right away, because none of their tests were simulating the effect of multiple simultaneous users.

The fix was simple- stop being a singleton, and ensure every request got its own instance. But it's also a good example of misapplication of patterns- there was no need in the client app to enforce uniqueness via the singleton pattern. A calculator that holds state probably shouldn't be a singleton in the first place.
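To make the failure mode concrete, here's a minimal sketch - written in PHP only because that's the language of the other code on this page (the real system was .NET), and with hypothetical names rather than the bank's actual classes. The point is simply that a calculator which accumulates iteration state cannot be shared between two logically separate calculations:

<?php
class AmortizationCalculator
{
    private $balance = 0.0;

    public function start($principal)
    {
        // Begin a fresh simulation for one loan.
        $this->balance = $principal;
    }

    public function applyMonth($payment, $monthlyRate)
    {
        // One simulation step: accrue a month of interest, then apply the payment.
        $this->balance += $this->balance * $monthlyRate;
        $this->balance -= $payment;
        return $this->balance;
    }
}

// What the singleton effectively did: one shared instance for every request.
$shared = new AmortizationCalculator();
$shared->start(200000);             // request A starts simulating its mortgage
$shared->start(50000);              // request B arrives mid-flight and resets the same state
$shared->applyMonth(1200, 0.004);   // request A's next step now iterates on B's loan

// The fix: every request gets its own instance, so each simulation owns its state.
$forA = new AmortizationCalculator();
$forB = new AmortizationCalculator();
$forA->start(200000);
$forB->start(50000);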

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsFollow That

Author: Julian Miles, Staff Writer Slow night on the back side of the club quarter. Shouldn’t have taken the bet, but two bottles of wine and Ronny being a tit decided otherwise. So here I am, looking to beat his takings from the main drag, watching the only possible passenger in the last hour climb […]

The post Follow That appeared first on 365tomorrows.

Cryptogram China Sort of Admits to Being Behind Volt Typhoon

The Wall Street Journal has the story:

Chinese officials acknowledged in a secret December meeting that Beijing was behind a widespread series of alarming cyberattacks on U.S. infrastructure, according to people familiar with the matter, underscoring how hostilities between the two superpowers are continuing to escalate.

The Chinese delegation linked years of intrusions into computer networks at U.S. ports, water utilities, airports and other targets, to increasing U.S. policy support for Taiwan, the people, who declined to be named, said.

The admission wasn’t explicit:

The Chinese official’s remarks at the December meeting were indirect and somewhat ambiguous, but most of the American delegation in the room interpreted it as a tacit admission and a warning to the U.S. about Taiwan, a former U.S. official familiar with the meeting said.

No surprise.

,

Cory DoctorowNimby and the D-Hoppers CONCLUSION

Ben Templesmith's art for the comic adaptation of 'Nimby and the D-Hoppers', depicting a figure in powered armor flying through a slate-gray sky filled with abstract equations.

This week on my podcast, I conclude my reading of my 2003 Asimov’s Science Fiction Magazine story, “Nimby and the D-Hoppers” (here’s the first half). The story has been widely reprinted (it was first published online in The Infinite Matrix in 2008), and was translated (by Elisabeth Vonarburg) into French for Solaris Magazine, as well as into Chinese, Russian, Hebrew, and Italian. The story was adapted for my IDW comic book series Cory Doctorow’s Futuristic Tales of the Here and Now by Ben Templesmith. I read this into my podcast 20 years ago, but I found myself wanting to revisit it.

Don’t get me wrong — I like unspoiled wilderness. I like my sky clear and blue and my city free of the thunder of cars and jackhammers. I’m no technocrat. But goddamit, who wouldn’t want a fully automatic, laser-guided, armor-piercing, self-replenishing personal sidearm?

Nice turn of phrase, huh? I finally memorized it one night, from one of the hoppers, as he stood in my bedroom, pointing his hand-cannon at another hopper, enumerating its many charms: “This is a laser-guided blah blah blah. Throw down your arms and lace your fingers behind your head, blah blah blah.” I’d heard the same dialog nearly every day that month, whenever the dimension-hoppers catapaulted into my home, shot it up, smashed my window, dived into the street, and chased one another through my poor little shtetl, wreaking havoc, maiming bystanders, and then gateing out to another poor dimension to carry on there.

Assholes.

It was all I could do to keep my house well-fed on sand to replace the windows. Much more hopper invasion and I was going to have to extrude its legs and babayaga to the beach. Why the hell was it always my house, anyway?


MP3

365 TomorrowsThe Final Sunset

Author: Lachlan Bond I watch on, as the sun begins to expand before my eyes. Slowly, at first, its pulsating shape growing ever-so-slightly behind the Vintusian glass. The radiation waves shake the station, solar winds battering our rapidly failing shields. Alarms blare, but I can hardly hear them over the slip disks firing at full […]

The post The Final Sunset appeared first on 365tomorrows.

,

365 TomorrowsX Wings

Author: David C. Nutt I did a quick scan outside my vehicle. I could see columns of thousands upon thousands of them, spinning fast, trying to ride the thermals up and out of the dust devil. At least half of them are getting shredded by the wind and when pieces of their wings and thoraxes […]

The post X Wings appeared first on 365tomorrows.

David BrinTwenty things about The Mess that no one has mentioned, so far. From tariffs to signalgate to DOGE to Houthis and...

I've been distracted by local vexations. But truly, it's time to weigh back into this era in America that Robert Heinlein accurately forecast as The Crazy Years.

This is a long one. But here are things you ought to know, that you've not seen in the news. Starting with...

 == The Tariff War ==

* Ninety-five years ago a Republican Congress passed the biggest tariff hike til Trump. Which even Republican economists now call the dumbest policy move ever. The thing that swerved a mere stock market downturn into the Great Depression. 

Want it explained in a way that's painfully funny? But sure. Some of you already knew that. 

Only have you considered how Trump's tariffs on China give Xi and the Beijing Politburo exactly what they want?

China's economy has been in decline for two years, with bad prospects ahead, especially for youth unemployment. The Tariff War will worsen things for millions of Chinese... but not for Xi!  What Xi gets out of this - personally - is someone to blame for China's already-underway recession! 

"It's all America's fault!"  Thus solidifying his own continued grip on power. He was already doing that, riling (unearned) anti-American fever. Only Trump now proves his case! While ensuring that the USA swirls down the same economic toilet. 

Always look at who benefits! So far it sure looks like Putin and Xi.

* Oh, and this... Want the biggest reason China's economy was already in decline? No one in media is touting that the USA was already experiencing a renaissance of manufacturing. 

HOW is that not a major part of the story? As a direct result of the 2021 Pelosi bills, investment in U.S. domestic factories skyrocketed!**   Shortening supply chains, reducing use of toxic ocean freighters and dependence on China. Only now, if it continues, Trump will claim "My tariffs did it!"

Because Democrats have the polemical skills of a tardigrade.

(Oh, and the USA has been energy independent and a net exporter of oil etc. since Obama. Care to bet? "Drill baby drill?" The Saudis are already scared. They won't allow it.)


== Want another aspect you hadn't considered? ==

* This nutty Tariff War will end the status of the US dollar as the world reserve currency.

It is fast underway, as we speak. Long a fond goal of the Chinese politburo, along with Putin and the Saudis. All of them our close friends. Though even our actual allies won't trust the dollar anymore.

Take a look at what's happening with the dollar... and be proud. So proud.


== Might a Tariff Gambit have been done better? ==

By effectively banning Chinese imports to the U.S., Trump is sending commerce into chaos and prices skyward. Still, while it's totally dumb and risky, I suppose that a carefully selective tariff tiff might have worked.

If the aim was to replace China as our top supplier, it could have been done tactically, by favoring friendlier cheap-labor nations like Vietnam and Malaysia.  First, that would keep supply chains going. It would also help to solidify those nations as allies in opposing Beijing ambitions southward. And give US businesses a way to transition more gently.

Better yet... Mexico. 

While our southern neighbor has been fast transforming into a middle class nation - largely thanks to trade with the U.S. - it's still pretty cheap labor. Only with many added bonuses. First, as Mexicans become more prosperous, they buy a lot of stuff from the USA. Three dollars out of every ten that we spend on Mexican value-added goods - say in maquiladora parts-assembly plants - comes right back. 

Second, U.S. manufacturers will tell you they invest more in US factories when they can partner with Mexican ones. (Shall we wager over that assertion?)

(*Elsewhere I explain why turning Mexico middle class has been one of America's greatest accomplishments! I can prove it. For example, during all of the Right's ravings about illegal immigration, do you hear that it's Mexicans flooding the border? No you don't. The fraction of border-crashers who are Mexican citizens has been in steep decline for a decade. Increasingly, it's refugees from Central American right-wing caudillo regimes and so on. Because most Mexicans can now find decent work at home. Try actually tracking such things for yourself.)

Want another selective tariff that could have advanced U.S. interests? 

How about we use tariffs to pressure countries to stop buying Russian oil, till Putin withdraws from Ukraine? And punish countries like Panama and Gabon and all the other 'flags of convenience' till they stop shilling for Russian tankers that are evading sanctions and rusting into ecological time bombs? More generally, notice that Trump made no actual demands!

Instead? Notice that Russia was left out of any mention on Trump's list of a zillion super-tariffed nations. Why would that be?

Because while Don howls words occasionally toward Putin -- purely for show -- he never ever ever ever does any actual acts that negatively affect his Kremlin master. 

Ever.


== The distilled frothy essence atop the poop pile ==

I could go on about the Trump Tariff War. But of course it boils down to two things that should be plain to anyone. 

First: it's primarily the USA and our friends who will get harmed, but never our enemies...

... and second: it's all jibbering lunacy! Perp'd by a clown car of capering idiots. 

Dig it: during Trump v.1.0 (#45) he at least appointed a veneer of adults to top positions... then got pissed off when ALL of those adults (Tillerson, McMaster, Barr etc.) turned on him, denouncing him as a raving moron and Kremlin agent. 

The one absolute fact about Trump v 2.0 (#47) is the total absence of any adults in the room. 

None. Anywhere. Not one appointee who is actually suited or qualified for the job. Just toadies. Lickspittle and (my personal theory) blackmailed to ensure utter loyalty. See the pattern. Because no one in media - not even a single liberal pundit - is tracking the obvious.


== A few more things to notice ==

I was gonna do all the tariff stuff as 'blips' but there was no way to condense those complicated aspects. So, here are a few items that may qualify as 'blips':

SIGNAL GATE. Remember the Signal Idiocy? 

("Oh, Brin, that was so 'last week.'" Yeah yeah. Libs never notice that a core KGB/MAGA tactic is to distract from one scandal by moving on to the next one!)

Sure, liberal and moderate media did an okay job blaring at some aspects of the insipidly dumb "Signalgate" Scandal, wherein a dozen top Trumpian cabinet officials illegally used a non-secure, unvetted chat system to giggle-blather top secret information about a looming US military action against the Yemeni Houthi rebel enclave... while one member of the chat was even in a notoriously non-secure Moscow hotel, at that very moment!

And of course, everyone in media shouted that our National Security Adviser invited a top-level critic/reporter into the chat without vetting, and without any participant (not even the Director of National 'Intelligence') even noticing.

Still and alas, as always, no one in either moderate or liberal media commented that:

1. Not a single military officer was part of the conversation about a major military operation. Sure, it's consistent with the all-out Republican campaign against the entire senior officer corps. The men and women who have been most dedicated to service and fact-centered responsibility. And competence! 

No, no, we can't have anyone like that in a conversation about a military operation. Some common sense might leak in and pollute the purity of blithering idiocy.

2. Um... but... WHY attack the Houthis? I am sure someone in media must have asked that, but I never saw it.  Indeed, the dumbitude aspects of "Signalgate" perfectly distracted from that bigger question.

Think about it. The Houthis are at war with the Saudis. And every Republican administration, without exception, does the bidding of the Saudi Royal House instantly and without question. Bush Sr., Bush Jr. and Trump. Good doggies.

But are WE served by enraging potential new bin Ladens into swearing revenge on America?

Sure, the Houthis are Shiite friends of Iran... and Iran is best buds with Putin. But then so are the Saudis... and so is Trump. Confused yet? Then look for the common thread. 

What they ALL want - and seek from Trump - is the fall of American liberal democracy. And any sign of a world governed by a transparent Rule of Law. 

Distilled in its essence, the Houthis are the least culpable party in any of this. All they want is to be left alone. Oh, their current leadership is likely a pack of radical jerks. But if a fair referendum were held, 90%+ of the population of North Yemen would vote for independence and peace and membership in the family of nations. 

Above all, do we really need more enemies, right now?


== And WHY the DOGE mass firings? ==

Ay Carumba!  I've seen NO ONE in media or politics even try to penetrate this question!

Dig it: The oligarchy and their foreign backers don't give a damn about the Department of Education. Or massive firings at the VA or Social Security, or Commerce or Agriculture or Parks and so on. There's no underlying goal of "efficiency." 

If they wanted that, they could have done it the way that Al Gore did with his massively effective efficiency campaign in the 90s.

No. All the slashed civil servants I just mentioned, and thousands more, were attacked and their services to citizens cauterized for one reason. The same old reason that liberal and moderate pundits always fall for. Distraction from the real targets.

1. I mentioned the United States senior military officer corps. The smart and savvy and dedicated generals and admirals must be brought to heel!

2. The FBI and all related law professionals. Um duh? But above all...

3. The Internal Revenue Service. The greatest accomplishment of the 2021 Pelosi bills was to fully fund the IRS, which had been starved by GOP Congresses into using 1970s computers and left too short of personnel to audit the super-rich tax cheaters...

...cheaters who were left terrified by that legislation!  And I assert that the topmost reason they pushed hard for Donald Trump was to get this opportunity to re-gut the IRS.

And failure to even note or mention that aspect only proves my point about the moderate and liberal and Democrat political and punditry caste.  Oh, many of them are decent people, with far lower rates of every turpitude than their corrupt, perverted and blackmailed GOP counterparts...

...but smart? 

Tardigrades. Tardigrades all the way down.


== There's so much more... ==

...but no time or room for any more this weekend. 

Oh I am sure that some of the items I cite above were mistaken or cockeyed. But suffice it to say that those in media who failed to note any of them are proving to be almost as incompetent and blind as the fools who ran the Harris campaign and put us into this mess.

But YOU don't have to be blind!  

You can spread word about some of these things. Get others to see what has NOT been spoon-fed to them by simplistic media. And if you get any of the complicit dopes to put up wager stakes, I'll happily provide ways and means to take their money.

Heinlein had it right.  The way to ultimately emerge from The Crazy Years is to spread sanity.


----------------------

----------------------

**The idiocy of Kamala Harris's advisers - for not emphasizing the return of U.S. manufacturing instead of just shouting "Abortion!" ten million times - disqualifies them from any future role in Democratic politics. Kamala herself was fine. Her political mavens should have no future role.


,

Worse Than FailureError'd: Sentinel Headline

When faced with an information system lacking sufficient richness to permit its users to express all of the necessary data states, human beings will innovate. In other words, they will find creative ways to bend the system to their will, usually (but not always) inconsequentially.

In the early days of information systems, even before electronic computers, we found users choosing to insert various out-of-bounds values into data fields to represent states such as "I don't know the true value for this item" or "It is impossible to accurately state the true value of this item because of a faulty constraint being applied to the input mechanism" or other such notions.

This practice carried on into the computing age, so that now, numeric fields will often contain values of 9999 or 99999999. Taxpayer numbers will be listed as 000-00-0000 or any other repetition of the same digit or simple sequences. Requirements to enter names collected John Does. Now we also see a fair share of Disney characters.

Programmers then try to make their systems idiot-proof, with the obvious and entirely predictable results.

The mere fact that these inventions exist at all is entirely due to the omission of mechanisms for the metacommentary that we all know perfectly well is sometimes necessary. But rather than provide those, it's easier to wave our hands and pretend that these unwanted states won't exist, can be ignored, can be glossed over. "Relax" they'll tell you. "It probably won't ever happen." "If it does happen, it won't matter." "Don't lose your head over it."

The Beast in Black certainly isn't inclined to cover up an errant sentinel. "For that price, it had better be a genuine Louis XVI pillow from 21-January-1793." A La Lanterne!


Daniel D. doubled up on Error'ds for us. "Do you need the error details? Yes, please."


And again with an alert notification oopsie. "Google Analytics 4 never stops surprising us any given day with how bugged it is. I call it an "Exclamation point undefined". You want more info? Just Google it... Oh wait." I do appreciate knowing who is responsible for the various bodges we are sent. Thank you, Daniel.


"Dark pattern or dumb pattern?" wonders an anonymous reader. I don't think it's very dark.


Finally, Ian Campbell found a data error that doesn't look like an intentional sentinel, but it isn't random either: the number is exactly 2^53, one more than JavaScript's Number.MAX_SAFE_INTEGER, which smells like a floating-point artifact rather than a deliberately configured limit. Says Ian, "SendGrid has a pretty good free plan now with a daily limit of nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-two."
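A quick sanity check on that guess (PHP here, to match the other code on this page; its floats are the same IEEE 754 doubles JavaScript uses, so the same cliff applies):

<?php
// 2**53 is where double-precision floats stop representing every integer,
// which is why suspiciously huge "limits" often land on exactly this value.
var_dump(2 ** 53);                                        // int(9007199254740992)
var_dump(9007199254740992.0 + 1 === 9007199254740992.0);  // bool(true): the +1 is lost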


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsAutovore

Author: Morrow Brady Without a backstory, the darker patch at the edge of the busy road went unnoticed. It was being faded to oblivion by layers of desert dust and the enraged rush hour traffic. As my evening walk took me past that patch, near the busy street junction, I looked over at it and […]

The post Autovore appeared first on 365tomorrows.