Planet Russell


365 Tomorrows: Liberating Homer

Author: Laura Jarosz “Whaddya mean, gone? Like, dead?” Dante shrugged. “The safehouse was totally empty. Door hanging open, no Homer inside. No stories, either.” I pressed my hand against my pocket and felt the reassuring crinkle of paper. At least I still had last week’s story. As I walked numbly away, I let my eyes […]

The post Liberating Homer appeared first on 365tomorrows.


Worse Than Failure: CodeSOD: Idtoic Mistakes

Working at a company where the leadership started as technical people has its advantages, but it can also carry costs. Arthur is in one such environment, and while it means that management and labor have a common vocabulary, the company leadership forgets that they're not in a technical role anymore. So they still like to commit code to the project. And that's how things like this happen:

if( this.idtoservice != null )
{
     sOwner = this.idtoservice.Common.Security.Owner;
}
else if( this.idtoservice != null )
{
     sOwner = this.idtoservice.Common.Security.Owner;
}
else if( this.idtoservice != null )
{
     sOwner = this.idtoservice.Common.Security.Owner;
}

This isn't one commit from the CEO; it's four different commits. It seems like the CEO, perhaps, doesn't understand merge conflicts?

This particular bit of bad code is at least absolutely harmless and likely gets compiled out, but that doesn't mean Arthur doesn't feel the urge to drink every time his CEO makes a new commit.


365 Tomorrows: Energy Credits

Author: Bridger Cummings Scanning the reels of family videos gave LF495 some odd sensation of warmth. Was it like eating? LF495 didn’t eat, but it did need power. It was connected to a multi-layer variate array of servers across the entire planet. It didn’t really matter where you were because one was everywhere. Earth had […]

The post Energy Credits appeared first on 365tomorrows.

Planet Debian: Gunnar Wolf: Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end in Kosovo, while talking with Eeveelweezel they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I’m not really that much of a Python guy… but that I would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea, and the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it’s the language I find my students can best cope with. But delivering a talk to ChiPy?

On the other hand, I have long used a very simplistic and limited filesystem I’ve designed as an implementation project at class: FIUnamFS (for “Facultad de Ingeniería, Universidad Nacional Autónoma de México”: the Engineering Faculty of Mexico’s National University, where I teach. Sorry, the link is in Spanish — but you will find several implementations of it from the students 😉). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk.

As I give this filesystem as a project to my students (and not as a mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: Use FUSE.

Python FUSE

But, in the six semesters I’ve used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation.

Maybe this is because it’s not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I’ve seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros’ passthrough filesystem, Dave’s filesystem based upon, and further explaining, Stavros’, and several others) explaining how to provide basic functionality. I found a particularly useful presentation given by Matteo Bertozzi ~15 years ago at PyCon4… But none of those is, IMO, followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?).

And of course, there isn’t a single interface to work from. In Python only, we can find python-fuse, Pyfuse, Fusepy… Where to start from?

…So I set out to try and help.

Over the past couple of weeks, I have been slowly working on my own version, and presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but… maybe my documentation ends up obfuscating the intent? I hope not — and, read on, I’ve provided some remediation).

I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to add to this also filesystems backed on pseudo-block-devices (for implementations such as my FIUnamFS).

So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
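
To give a concrete flavour of what such a first step looks like, here is a minimal sketch of the “barest possible empty filesystem”, written against the fusepy bindings (one of the several interfaces mentioned above); this is an illustrative example under that assumption, not code taken from the guide itself:

#!/usr/bin/python3
# Minimal sketch: an empty, read-only FUSE filesystem using fusepy.
# It exposes a single empty root directory and answers ENOENT for anything else.
import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations


class EmptyFS(Operations):
    def getattr(self, path, fh=None):
        if path == '/':
            # The root is a plain directory with mode 0755 and two links.
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        # Only the mandatory entries; the filesystem is otherwise empty.
        return ['.', '..']


if __name__ == '__main__':
    # Usage: ./emptyfs.py /some/mountpoint
    FUSE(EmptyFS(), sys.argv[1], foreground=True)

Each later step in a guide of this kind consists mostly of filling in more of the Operations callbacks (read, write, mkdir, and so on) until the behaviour you want emerges.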

I think providing fun or useful examples is also a good way to get students to use what I’m teaching, so I’ve added some ideas I’ve had: DNS Filesystem, on-the-fly markdown compiling filesystem, unzip filesystem and uncomment filesystem.

They all provide something that could be seen as useful, in a way that’s easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! 😉

So… I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week (virtually as well). And in November I will present it in person at nerdear.la, which will be held in Mexico City for the first time.

Of course, I will also share this project with my students in the next couple of weeks… And hope it manages to lure them into implementing FUSE in Python. At some point, I shall report!

Planet Debian: Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson

Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck.
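
For readers who have not used it, a minimal sketch of the kind of type-hint driven validation Pydantic 2.x provides is shown below; the model and field names are invented for illustration and have nothing to do with Debusine’s actual schemas:

# Minimal sketch of Pydantic 2.x validation; the Upload model is made up.
from pydantic import BaseModel, ValidationError


class Upload(BaseModel):
    package: str
    version: str
    binary: bool = False


try:
    # model_validate() is the 2.x spelling of what 1.x called parse_obj().
    upload = Upload.model_validate({"package": "pydantic-core", "version": "2.0"})
    print(upload.package, upload.version, upload.binary)
except ValidationError as exc:
    # Invalid or missing fields produce a structured, readable error report.
    print(exc)

Renamings like parse_obj() to model_validate() are part of the 2.x breaking changes that the rest of the archive had to be checked against.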

Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne

Simon McVittie (not affiliated with Freexian) earlier restructured the libglib2.0-dev package such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also made people unhappy when attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic, as it tends not to be ready at the time bootstrapping is needed.

As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0.

Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection.

This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne

As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an explorative analysis. Some results include:

Guillem also performed a cursory analysis and reported other problem categories such as mismatching directory permissions for directories installed by multiple packages and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera

Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python.

During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were expecting it to still be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake).

A new feature in Python 3.12 was to have unittest’s test runner exit with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are not being discovered by mistake. We are ignoring this failure, as we wouldn’t want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this.
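
As a rough illustration of the behaviour being relied on here (assuming Python 3.12 or later), a rebuild can observe the new non-zero exit status like this; the test directory name is hypothetical:

# Sketch: on Python 3.12+ the unittest CLI exits non-zero when no tests run,
# so a rebuild can flag packages whose test suites are silently empty.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-s", "tests_without_any_tests"],
)
# 0 means tests ran and passed; a non-zero status means failures, errors, or
# (on 3.12 and later) that nothing was discovered at all.
print("unittest exited with", result.returncode)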

As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests. So transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.

Miscellaneous contributions

  • Carles prepared the update of the python-pyaarlo package to a new upstream release.

  • Carles worked on updating python-ring-doorbell to a new upstream release. It is unfinished, pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its dependency python3-http-ece (RFP #1083020).

  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. He is aiming to have it functional end to end for a lightning talk at MiniDebConf Toulouse (November), to get feedback from the wider public on this proof of concept.

  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.

  • Colin upgraded the OpenSSH packaging to 9.9p1.

  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.

  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.

  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.

  • Raphaël removed spam entries in the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) to auto-reject packages having the “too-many-contacts” lintian error, and he raised the severity of #1076048 to serious to actually have that 4-year-old bug fixed.

  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.

  • Helmut sent seven patches and sponsored one upload for cross build failures.

  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.

  • Helmut collaborated on sbuild reviewing and improving a MR for refactoring the unshare backend.

  • Helmut sent a patch fixing coinstallability of gcc-defaults.

  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt or fuse3 fixed, we’re moving into the boring long tail of the transition.

  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.

  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap moves a lot further, but still fails for any architecture.

  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto, to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed, to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu.

    Santiago also prepared !545, which will make the reprotest job more consistent with the result seen on reproducible-builds.

  • Santiago worked on different tasks related to DebConf 25. Especially he drafted the fundraising brochure (which is almost ready).

  • Thorsten Alteholz uploaded the package libcupsfilters to fix the autopkgtest and a dependency problem of this package. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.

  • Anupa published posts in the Debian Administrators group on LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.

  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.


Planet Debian: Ben Hutchings: FOSS activity in September 2024

Krebs on Security: Lamborghini Carjackers Lured by $243M Cyberheist

The parents of a 19-year-old Connecticut honors student accused of taking part in a $243 million cryptocurrency heist in August were carjacked a week later — while out house-hunting in a brand new Lamborghini. Prosecutors say the couple was beaten and briefly kidnapped by six young men who traveled from Florida as part of a botched plan to hold the parents for ransom.

Image: ABC7NY.  youtube.com/watch?v=xoiaGzwrunY

Late in the afternoon of Aug. 25, 2024 in Danbury, Ct., a married couple in their 50s pulled up to a gated community in a new Lamborghini Urus (investigators say the sports car still had temporary tags) when they were intentionally rear-ended by a Honda Civic.

A witness told police they saw three men exit a van that was following the Honda, and said the men began assaulting the couple and forcing them into the van. Local police officers spotted the van speeding from the scene and pursued it, only to find the vehicle crashed and abandoned a short distance away.

Inside the disabled van the police found the couple with their hands and feet bound in duct tape, the man visibly bruised after being assaulted with a baseball bat. Danbury police soon reported arresting six suspects in the kidnapping, all men aged 18-26 from Florida. They also recovered the abandoned Lamborghini from a wooded area.

A criminal complaint (PDF) filed on Sept. 24 against the six men does not name the victims, referring to them only as a married couple from Danbury with the initials R.C. and S.C. But prosecutors in Connecticut said they were targeted “because the co-conspirators believed the victims’ son had access to significant amounts of digital currency.”

What made the Miami men so convinced R.C. and S.C.’s son was loaded with cryptocurrency? Approximately one week earlier, on Aug. 19, a group of cybercriminals that allegedly included the couple’s son executed a sophisticated phone-based social engineering attack in which they stole $243 million worth of cryptocurrency from a victim in Washington, D.C.

That’s according to ZachXBT, a frequently cited crypto crime investigator who published a lengthy thread that broke down how the theft was carried out and ultimately exposed by the perpetrators themselves.

ZachXBT’s post included a screen recording of a Discord chat session made by one of the participants to the $243 million robbery, noting that two of the people involved managed to leak the username of the Microsoft Windows PCs they were using to participate in the chat.

One of the usernames leaked during the chat was Veer Chetal. According to ZachXBT, that name corresponds to a 19-year-old from Danbury who allegedly goes by the nickname “Wiz,” although in the leaked video footage he allegedly used the handle “Swag.”  Swag was reportedly involved in executing the early stages of the crypto heist — gaining access to the victim’s Gmail and iCloud accounts.

A still shot from a video screenshare in which one of the participants on the Discord voice chat used the Windows username Veer Chetal. Image: x.com/zachxbt

The same day ZachXBT published his findings, a criminal indictment was issued in Washington D.C. charging two of the men he named as involved in the heist. Prosecutors allege Malone “Greavys” Lam, 20, of Miami and Los Angeles, and Jeandiel “Box” Serrano, 21, of Los Angeles conspired to steal and launder over $230 million in cryptocurrency from a victim in Washington, D.C. The indictment alleges Lam and Serrano were helped by other unnamed co-conspirators.

“Lam and Serrano then allegedly spent the laundered cryptocurrency proceeds on international travel, nightclubs, luxury automobiles, watches, jewelry, designer handbags, and rental homes in Los Angeles and Miami,” reads a press release from the U.S. Department of Justice.

By tracing the flow of funds stolen in the heist, ZachXBT concluded that Wiz received a large percentage from the theft, noting that “additional comfort [in naming him as involved] was gained as throughout multiple recordings accomplices refer to him as ‘Veer’ on audio and in chats.”

“A cluster of [cryptocurrency] addresses tied to both Box/Wiz received $41M+ from two exchanges over the past few weeks primarily flowing to luxury goods brokers to purchase cars, watches, jewelry, and designer clothes,” ZachXBT wrote.

KrebsOnSecurity sought comment from Veer Chetal, and from his parents — Radhika Chetal and Suchil Chetal. This story will be updated in the event that anyone representing the Chetal family responds. Veer Chetal has not been publicly charged with any crime.

According to a news brief published by a private Catholic high school in Danbury that Veer Chetal attended, in 2022 he successfully completed Harvard’s Future Lawyers Program, a “unique pre-professional program where students, guided by qualified Harvard undergraduate instructors, learn how to read and build a case, how to write position papers, and how to navigate a path to law school.” A November 2022 story at patch.com quoted Veer Chetal (class of 2024) crediting the Harvard program with his decision to pursue a career in law.

It remains unclear which Chetal family member acquired the 2023 Lamborghini Urus, which has a starting price of around $233,000. Sushil Chetal’s LinkedIn profile says he is a vice president at the investment bank Morgan Stanley.

It is clear that other alleged co-conspirators to the $243 million heist displayed a conspicuous consumption of wealth following the date of the heist. ZachXBT’s post chronicled Malone’s flashy lifestyle, in which he allegedly used the stolen money to purchase more than 10 vehicles, rent palatial properties, travel with friends on chartered jets, and spend between $250,000 and $500,000 a night at clubs in Los Angeles and Miami.

In the photo on the bottom right, Greavys/Lam is the individual on the left wearing shades. They are pictured leaving a luxury goods store. Image: x.com/zachxbt

WSVN-TV in Miami covered an FBI raid of a large rented waterfront home around the time Malone and Serrano were arrested. The news station interviewed a neighbor of the home’s occupants, who reported a recent large party at the residence wherein the street was lined with high-end luxury vehicles — all of them with temporary paper tags.

ZachXBT unearthed a video showing a person identified as Wiz at a Miami nightclub earlier this year, wherein they could be seen dancing to the crowd’s chants while holding an illuminated sign with the message, “I win it all.”

It appears that all of the suspects in the cyber heist (and at least some of the alleged carjackers) are members of The Com, an archipelago of crime-focused chat communities which collectively functions as a kind of distributed cybercriminal social network that facilitates instant collaboration.

As documented in last month’s deep dive on top Com members,  The Com is also a place where cybercriminals go to boast about their exploits and standing within the community, or to knock others down a peg or two. Prominent Com members are endlessly sniping over who pulled off the most impressive heists, or who has accumulated the biggest pile of stolen virtual currencies.

And as often as they extort and rob victims for financial gain, members of The Com are trying to wrest stolen money from their cybercriminal rivals — often in ways that spill over into physical violence in the real world.

One of the six Miami-area men arrested in the carjacking and extortion plot gone awry — Reynaldo “Rey” Diaz — was shot twice while parked in his bright yellow Corvette in Miami’s design district in 2022. In an interview with a local NBC television station, Diaz said he was probably targeted for the jewelry he was wearing, which he described as “pretty expensive.”

KrebsOnSecurity has learned Diaz also went by the alias “Pantic” on Telegram chat channels dedicated to stealing cryptocurrencies. Pantic was known for participating in several much smaller cyber heists in the past, and spending most of his cut on designer clothes and jewelry.

The Corvette that Diaz was sitting in when he was shot in 2022. Image: NBC 6, South Florida.

Earlier this year, Diaz was “doxed,” or publicly outed as Pantic, with his personal and family information posted on a harassment and extortion channel frequented by members of The Com. The reason cited for Pantic’s doxing was widely corroborated by multiple Com members: Pantic had inexplicably robbed two close friends at gunpoint, one of whom recently died of a drug overdose.

Government prosecutors say the brazen daylight carjacking was paid for and organized by 23-year-old Miami resident Angel “Chi Chi” Borrero. In 2022, Borrero was arrested in Miami for aggravated assault with a deadly weapon.

The six Miami men face charges including first-degree assault, kidnapping and reckless endangerment, and five of them are being held on a $1 million bond. One suspect is also charged with reckless driving, engaging police in pursuit and evading responsibility; his bond was set at $2 million. Lam and Serrano are each charged with conspiracy to commit wire fraud and conspiracy to launder money.

Cybercriminals hail from all walks of life and income levels, but some of the more accomplished cryptocurrency thieves also tend to be among the more privileged, and from relatively well-off families. In other words, these individuals aren’t stealing to put food on the table: They’re doing it so they can amass all the trappings of instant wealth, and so they can boast about their crimes to others on The Com.

There is also a penchant among this crowd to call attention to their activities in conspicuous ways that hasten their arrest and criminal charging. In many ways, the story arc of the young men allegedly involved in the $243 million heist tracks closely to that of Joel Ortiz, a valedictorian who was sentenced in 2019 to 10 years in prison for stealing more than $5 million in cryptocurrencies.

Ortiz famously posted videos of himself and co-conspirators chartering flights and partying it up at LA nightclubs, with scantily clad women waving giant placards bearing their “OG” usernames — highly-prized, single-letter social media accounts that they’d stolen or purchased stolen from others.

Ortiz earned the distinction of being the first person convicted of SIM-swapping, a crime that involves using mobile phone company insiders or compromised employee accounts to transfer a target’s phone number to a mobile device controlled by the attackers. From there, the attacker can intercept any password reset links, and any one-time passcodes sent via SMS or automated voice calls.

But as the mobile carriers seek to make their networks less hospitable to SIM-swappers, and as more financial platforms seek to harden user account security, today’s crypto thieves are finding they don’t need SIM-swaps to steal obscene amounts of cryptocurrency. Not when tricking people over the phone remains such an effective approach.

According to ZachXBT, the crooks responsible for the $243 million theft initially compromised the target’s personal accounts after calling them as Google Support and using a spoofed number. The attackers also spoofed a call from account support representatives at the cryptocurrency exchange Gemini, claiming the target’s account had been hacked.

From there the target was social engineered over the phone into resetting multi-factor authentication and sending Gemini funds to a compromised wallet. ZachXBT says the attackers also convinced the victim to use AnyDesk to share their screen, and in doing so the victim leaked their private keys.

Cryptogram: IronNet Has Shut Down

After retiring in 2014 from an uncharacteristically long tenure running the NSA (and US CyberCommand), Keith Alexander founded a cybersecurity company called IronNet. At the time, he claimed that it was based on IP he developed on his own time while still in the military. That always troubled me. Whatever ideas he had, they were developed on public time using public resources: he shouldn’t have been able to leave military service with them in his back pocket.

In any case, it was never clear what those ideas were. IronNet never seemed to have any special technology going for it. Near as I could tell, its success was entirely based on Alexander’s name.

Turns out there was nothing there. After some crazy VC investments and an IPO with a $3 billion “unicorn” valuation, the company has shut its doors. It went bankrupt a year ago—ceasing operations and firing everybody—and reemerged as a private company. It now seems to be gone for good, not having found anyone willing to buy it.

And—wow—the recriminations are just starting.

Last September the never-profitable company announced it was shutting down and firing its employees after running out of money, providing yet another example of a tech firm that faltered after failing to deliver on overhyped promises.

The firm’s crash has left behind a trail of bitter investors and former employees who remain angry at the company and believe it misled them about its financial health.

IronNet’s rise and fall also raises questions about the judgment of its well-credentialed leaders, a who’s who of the national security establishment. National security experts, former employees and analysts told The Associated Press that the firm collapsed, in part, because it engaged in questionable business practices, produced subpar products and services, and entered into associations that could have left the firm vulnerable to meddling by the Kremlin.

“I’m honestly ashamed that I was ever an executive at that company,” said Mark Berly, a former IronNet vice president. He said the company’s top leaders cultivated a culture of deceit “just like Theranos,” the once highly touted blood-testing firm that became a symbol of corporate fraud.

There has been one lawsuit. Presumably there will be more. I’m sure Alexander got plenty rich off his NSA career.

Worse Than Failure: CodeSOD: JaphpaScript

Let's say you have a web application, and you need to transfer some data that exists in your backend, server-side, down to the front-end, client-side. If you're a normal person, you have the client do an HTTP request and return the data in something like a JSON format.

You could certainly do that. Or, you could do what Alicia's predecessor did.

<script>
    var i;
    var j;
    var grpID;
    var group_Arr_<?php echo $varNamePrefix;?>= new Array();
    var user_Arr_<?php echo $varNamePrefix;?>= new Array();
    <?php
    $i = 0;
    if(is_array($groupArr)) {
        foreach($groupArr as $groupData) {
            $t_groupID = $groupData[0];
            if(is_array($userArr[$t_groupID] ?? null)) { ?>
                i = '<?php echo $i; ?>';
                grpID = '<?php echo $t_groupID; ?>';
                group_Arr_<?php echo $varNamePrefix;?>[i] = '<?php echo $t_groupID; ?>';
                user_Arr_<?php echo $varNamePrefix;?>[grpID] = new Array();
                <?php for($j = 0,$jMax = count($userArr[$t_groupID]); $j < $jMax; $j++) { ?>
                    j = '<?php echo $j; ?>';
                    user_Arr_<?php echo $varNamePrefix;?>[grpID][j] = '<?php echo $userArr[$t_groupID][$j][0]; ?>';
                    <?php
                }
                $i++;
            }
        }
    }
    ?>
</script>

Here, we have PHP and JavaScript mixed together, like chocolate and peanut butter, except neither is chocolate or peanut butter and neither represents something you'd want to be eating.

Here we have loop unrolling taken to a new, ridiculous extent. The loop is executed in PHP, and "rendered" in JavaScript, outputting a huge pile of array assignments.

Worse than that, even the name of the variable is generated in PHP: group_Arr_<?php echo $varNamePrefix;?>.

This pattern was used everywhere, and sometimes I wouldn't even call it a pattern- huge blocks of code were copy/pasted with minor modifications.

This pile of spaghetti was, as you can imagine, difficult to understand or modify. But here's the scary part: it was remarkably bug free. The developer responsible for this had managed to do this everywhere, and it worked. Reliably. Any other developer who tried to change it ended up causing a cascade of failures that meant weeks of debugging to make changes that felt like they should be minor, but in the state in which Alicia inherited it, everything worked. Somehow.


365 Tomorrows: Alien Laughs Last

Author: Susan Jensen Sweeting Pelcretuche searched for his Xanax, grateful for all six of his tentacles, since he couldn’t for the life of him, remember in which pouch he had put it. Finally, his twelfth suction cup latched on to the shaky little bottle in the pouch just below his left belly button. Thank God. […]

The post Alien Laughs Last appeared first on 365tomorrows.


Krebs on Security: Patch Tuesday, October 2024 Edition

Microsoft today released security updates to fix at least 117 security holes in Windows computers and other software, including two vulnerabilities that are already seeing active attacks. Also, Adobe plugged 52 security holes across a range of products, and Apple has addressed a bug in its new macOS 15 “Sequoia” update that broke many cybersecurity tools.

One of the zero-day flaws — CVE-2024-43573 — stems from a security weakness in MSHTML, the proprietary engine of Microsoft’s Internet Explorer web browser. If that sounds familiar it’s because this is the fourth MSHTML vulnerability found to be exploited in the wild so far in 2024.

Nikolas Cemerikic, a cybersecurity engineer at Immersive Labs, said the vulnerability allows an attacker to trick users into viewing malicious web content, which could appear legitimate thanks to the way Windows handles certain web elements.

“Once a user is deceived into interacting with this content (typically through phishing attacks), the attacker can potentially gain unauthorized access to sensitive information or manipulate web-based services,” he said.

Cemerikic noted that while Internet Explorer is being retired on many platforms, its underlying MSHTML technology remains active and vulnerable.

“This creates a risk for employees using these older systems as part of their everyday work, especially if they are accessing sensitive data or performing financial transactions online,” he said.

Probably the more serious zero-day this month is CVE-2024-43572, a code execution bug in the Microsoft Management Console, a component of Windows that gives system administrators a way to configure and monitor the system.

Satnam Narang, senior staff research engineer at Tenable, observed that the patch for CVE-2024-43572 arrived a few months after researchers at Elastic Security Labs disclosed an attack technique called GrimResource that leveraged an old cross-site scripting (XSS) vulnerability combined with a specially crafted Microsoft Saved Console (MSC) file to gain code execution privileges.

“Although Microsoft patched a different MMC vulnerability in September (CVE-2024-38259) that was neither exploited in the wild nor publicly disclosed,” Narang said. “Since the discovery of CVE-2024-43572, Microsoft now prevents untrusted MSC files from being opened on a system.”

Microsoft also patched Office, Azure, .NET, OpenSSH for Windows, Power BI, Windows Hyper-V, Windows Mobile Broadband, and Visual Studio. As usual, the SANS Internet Storm Center has a list of all Microsoft patches released today, indexed by severity and exploitability.

Late last month, Apple rolled out macOS 15, an operating system update called Sequoia that broke the functionality of security tools made by a number of vendors, including CrowdStrike, SentinelOne and Microsoft. On Oct. 7, Apple pushed an update to Sequoia users that addresses these compatibility issues.

Finally, Adobe has released security updates to plug a total of 52 vulnerabilities in a range of software, including Adobe Substance 3D Painter, Commerce, Dimension, Animate, Lightroom, InCopy, InDesign, Substance 3D Stager, and Adobe FrameMaker.

Please consider backing up important data before applying any updates. Zero-days aside, there’s generally little harm in waiting a few days to apply any pending patches, because not infrequently a security update introduces stability or compatibility issues. AskWoody.com usually has the skinny on any problematic patches.

And as always, if you run into any glitches after installing patches, leave a note in the comments; chances are someone else is stuck with the same issue and may have even found a solution.

Planet Debian: Thorsten Alteholz: My Debian Activities in September 2024

FTP master

This month I accepted 441 and rejected 29 packages. The overall number of packages that got accepted was 448.

I couldn’t believe my eyes, but this month I really accepted the same number of packages as last month.

Debian LTS

This was my hundred-twenty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [unstable] libcupsfilters security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [unstable] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [unstable] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [DSA 5778-1] prepared package for cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [DSA 5779-1] prepared package for cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [DLA 3905-1] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [DLA 3904-1] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers

Despite the announcement, the package libppd in Debian is not affected by the CVEs related to CUPS. By pure chance there is an unrelated package with the same name in Debian. I also answered some questions about the CUPS-related uploads. Due to the CUPS issues, I postponed my work on other packages to October.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fourth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1186-1] cups-filters security update for two CVEs in Stretch and Buster to fix the IPP attribute related CVEs.
  • [ELA-1187-1] cups-filters security update for one CVE in Jessie to fix the IPP attribute related CVEs (the version in Jessie was not affected by the other CVE).

I also started to work on updates for cups in Buster, Stretch and Jessie, but their uploads will happen only in October.

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded …

  • libcupsfilters to also fix a dependency and autopkgtest issue besides the security fix mentioned above.
  • splix for a new upstream version. This package is managed now by OpenPrinting.

Last but not least I tried to prepare an update for hplip. Unfortunately this is a nerve-stretching task and I need some more time.

This work is generously funded by Freexian!

Debian Matomo

This month I even found some time to upload packages that are dependencies of Matomo …

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Most of the uploads were related to package migration to testing. As some of them are in non-free or contrib, one has to build all binary versions. From my point of view handling packages in non-free or contrib could be very much improved, but well, they are not part of Debian …

Anyway, starting in December there is an Outreachy project that takes care of automatic updates of these packages. So hopefully it will be much easier to keep those packages up to date. I will keep you informed.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I did source uploads of all the packages that were prepared last month by Nathan and started the transition. It went rather smoothly except for a few packages where the new version did not propagate to the tracker and they got stuck on old failing autopkgtests. Anyway, in the end all packages migrated to testing.

I also uploaded new upstream releases or fixed bugs in:

misc

This month I uploaded new upstream or bugfix versions of:

Most of those uploads were needed to help packages to migrate to testing.

LongNow: Nils Gilman

Nils Gilman

Disrupting traditional western understandings of time that separate human history from natural history, Planetary Temporality recognizes that these two modes of history are now inseparable, and that meeting planetary challenges will require that we go beyond our lived experience of time, to think instead in terms of our deep-time embeddedness in the Earth system.

In contrast to anthropocentric “global” issues, “planetary” issues such as climate change and biodiversity operate on vastly different and ultimately ahuman timescales - encompassing geological epochs, evolutionary processes, and the deep history of life on Earth. How can we incorporate these larger, longer systems into our human experience of the planet, and make wiser choices now that support the flourishing of all life for millennia to come?

Planet Debian: Steinar H. Gunderson: Pimp my SV08

The Sovol SV08 is a 3D printer which is a semi-assembled clone of Voron 2.4, an open-source design. It's not the cheapest of printers, but for what you get, it's extremely good value for money—as long as you can deal with certain, err, quality issues.

Anyway, I have one, and one of the fun things about an open design is that you can switch out things to your liking. (If you just want a tool, buy something else. Bambu P1S, for instance, if you can live with a rather closed ecosystem. It's a bit like an iPhone in that aspect, really.) So I've put together a spreadsheet with some of the more common choices:

Pimp my SV08

It doesn't contain any of the really difficult mods, and it also doesn't cover pure printables. And none of the dreaded macro stuff that people seem to be obsessing over (it's really like being in the 90s with people's mIRC scripts all over again sometimes :-/), except where needed to make hardware work.

Cryptogram: Deebot Robot Vacuums Are Using Photos and Audio to Train Their AI

An Australian news agency is reporting that robot vacuum cleaners from the Chinese company Deebot are surreptitiously taking photos and recording audio, and sending that data back to the vendor to train their AIs.

Ecovacs’s privacy policy—available elsewhere in the app—allows for blanket collection of user data for research purposes, including:

  • The 2D or 3D map of the user’s house generated by the device
  • Voice recordings from the device’s microphone
  • Photos or videos recorded by the device’s camera

It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs.

No word on whether the recorded audio is being used to train the vacuum in some way, or whether it is being used to train an LLM.

Slashdot thread.

Planet Debian: Debian Brasil: Testing feed in English

Testing the feed in English and checking if it's going to Planet Debian.

Sorry for the noise :-)

365 Tomorrows: O Death, Where is thy Sting?

Author: Bill Cox I know you came here to be entertained, to read a slice of sci-fi, but I’ve no choice. What you’re about to read is the horrifying truth. I’ve tried posting it elsewhere, on message boards and forums across the internet, but they get me every time. You might think that the internet […]

The post O Death, Where is thy Sting? appeared first on 365tomorrows.


Planet Debian: Reproducible Builds: Reproducible Builds in September 2024

Welcome to the September 2024 report from the Reproducible Builds project!

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. New binsider tool to analyse ELF binaries
  2. Unreproducibility of GHC Haskell compiler “95% fixed”
  3. Mailing list summary
  4. Towards a 100% bit-for-bit reproducible OS…
  5. Two new reproducibility-related academic papers
  6. Distribution work
  7. diffoscope
  8. Other software development
  9. Android toolchain core count issue reported
  10. New Gradle plugin for reproducibility
  11. Website updates
  12. Upstream patches
  13. Reproducibility testing framework

New binsider tool to analyse ELF binaries

Reproducible Builds developer Orhun Parmaksız has announced a fantastic new tool to analyse the contents of ELF binaries. According to the project’s README page:

Binsider can perform static and dynamic analysis, inspect strings, examine linked libraries, and perform hexdumps, all within a user-friendly terminal user interface!

More information about Binsider’s features and how it works can be found within Binsider’s documentation pages.


Unreproducibility of GHC Haskell compiler “95% fixed”

A seven-year-old bug about the nondeterminism of object code generated by the Glasgow Haskell Compiler (GHC) received a recent update, consisting of Rodrigo Mesquita noting that the issue is:

95% fixed by [merge request] !12680 when -fobject-determinism is enabled. []

The linked merge request has since been merged, and Rodrigo goes on to say that:

After that patch is merged, there are some rarer bugs in both interface file determinism (eg. #25170) and in object determinism (eg. #25269) that need to be taken care of, but the great majority of the work needed to get there should have been merged already. When merged, I think we should close this one in favour of the more specific determinism issues like the two linked above.


Mailing list summary

On our mailing list this month:

  • Fay Stegerman let everyone know that she started a thread on the Fediverse about the problems caused by unreproducible zlib/deflate compression in .zip and .apk files and later followed up with the results of her subsequent investigation.

  • Long-time developer kpcyrd wrote that “there has been a recent public discussion on the Arch Linux GitLab [instance] about the challenges and possible opportunities for making the Linux kernel package reproducible”, all relating to the CONFIG_MODULE_SIG flag. []

  • Bernhard M. Wiedemann followed-up to an in-person conversation at our recent Hamburg 2024 summit on the potential presence for Reproducible Builds in recognised standards. []

  • Fay Stegerman also wrote about her worry about the “possible repercussions for RB tooling of Debian migrating from zlib to zlib-ng” as reproducibility requires identical compressed data streams. []

  • Martin Monperrus wrote the list announcing the latest release of maven-lockfile, which is designed to aid “building Maven projects with integrity”. []

  • Lastly, Bernhard M. Wiedemann wrote about potential role of reproducible builds in combatting silent data corruption, as detailed in a recent Tweet and scholarly paper on faulty CPU cores. []


Towards a 100% bit-for-bit reproducible OS…

Bernhard M. Wiedemann began writing about his journey towards a 100% bit-for-bit reproducible operating system on the openSUSE wiki:

This is a report of Part 1 of my journey: building 100% bit-reproducible packages for every package that makes up [openSUSE’s] minimalVM image. This target was chosen as the smallest useful result/artifact. The larger package-sets get, the more disk-space and build-power is required to build/verify all of them.

This work was sponsored by NLnet’s NGI Zero fund.


Marvin Strangfeld published his bachelor thesis, “Reproducibility of Computational Environments for Software Development”, from RWTH Aachen University. The author offers a more precise theoretical definition of computational environments compared to previous definitions, which can be applied to describe real-world computational environments. Additionally, Marvin provides a definition of reproducibility in computational environments, enabling discussions about the extent to which an environment can be made reproducible. The thesis is available to browse or download in PDF format.

In addition, Shenyu Zheng, Bram Adams and Ahmed E. Hassan of Queen’s University, ON, Canada have published an article on “hermeticity” in Bazel-based build systems:

A hermetic build system manages its own build dependencies, isolated from the host file system, thereby securing the build process. Although, in recent years, new artifact-based build technologies like Bazel offer build hermeticity as a core functionality, no empirical study has evaluated how effectively these new build technologies achieve build hermeticity. This paper studies 2,439 non-hermetic build dependency packages of 70 Bazel-using open-source projects by analyzing 150 million Linux system file calls collected in their build processes. We found that none of the studied projects has a completely hermetic build process, largely due to the use of non-hermetic top-level toolchains. []


Distribution work

In Debian this month, 14 reviews of Debian packages were added, 12 were updated and 20 were removed, all adding to our knowledge about identified issues. A number of issue types were updated as well. [][]

In addition, Holger opened 4 bugs against the debrebuild component of the devscripts suite of tools. In particular:

  • #1081047: Fails to download .dsc file.
  • #1081048: Does not work with a proxy.
  • #1081050: Fails to create a debrebuild.tar.
  • #1081839: Fails with E: mmdebstrap failed to run error.

Last month, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest’s build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org. This month, this issue was closed by Santiago R. R., nicely explaining that build path variation is no longer the default, and, if desired, how developers may enable it again.

In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading version 278 to Debian:

  • New features:

    • Add a helpful contextual message to the output if comparing Debian .orig tarballs within .dsc files without the ability to “fuzzy-match” away the leading directory.  []
  • Bug fixes:

    • Drop removal of calculated os.path.basename from GNU readelf output. []
    • Correctly invert “X% similar” value and do not emit “100% similar”. []
  • Misc:

    • Temporarily remove procyon-decompiler from Build-Depends as it was removed from testing (via #1057532). (#1082636)
    • Update copyright years. []

For trydiffoscope, the command-line client for the web-based version of diffoscope, Chris Lamb also:

  • Added an explicit python3-setuptools dependency. (#1080825)
  • Bumped the Standards-Version to 4.7.0. []


Other software development

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues. This month, version 0.5.11-4 was uploaded to Debian unstable by Holger Levsen making the following changes:

  • Replace build-dependency on the obsolete pkg-config package with one on pkgconf, following a Lintian check. []
  • Bump Standards-Version field to 4.7.0, with no related changes needed. []


In addition, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, version 0.7.28 was uploaded to Debian unstable by Holger Levsen including a change by Jelle van der Waa to move away from the pipes Python module to shlex, as the former will be removed in Python version 3.13 [].
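
For context, the change involved is essentially a one-liner; a minimal sketch of the old and new spellings (not reprotest's actual diff) is:

# pipes.quote was effectively an alias for shlex.quote, and the pipes module
# is removed in Python 3.13, so shlex is the drop-in replacement.
import shlex

# Old code (breaks on Python 3.13):
#   from pipes import quote
#   cmd = "ls " + quote(path)

path = "a file with spaces; and $chars"
cmd = "ls " + shlex.quote(path)
print(cmd)  # ls 'a file with spaces; and $chars'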


Android toolchain core count issue reported

Fay Stegerman reported an issue with the Android toolchain where a part of the build system generates a different classes.dex file (and thus a different .apk) depending on the number of cores available during the build, thereby breaking Reproducible Builds:

We’ve rebuilt [tag v3.6.1] multiple times (each time in a fresh container): with 2, 4, 6, 8, and 16 cores available, respectively:

  • With 2 and 4 cores we always get an unsigned APK with SHA-256 14763d682c9286ef….
  • With 6, 8, and 16 cores we get an unsigned APK with SHA-256 35324ba4c492760… instead.
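
As an aside, a minimal sketch (not part of Fay's report or tooling) of how this kind of divergence shows up in practice is simply to hash the unsigned APKs from two otherwise identical builds and compare; the file names below are made up:

# Compare the SHA-256 of two build artifacts; differing digests mean the
# builds were not bit-for-bit reproducible.
import hashlib
import sys


def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    first, second = sys.argv[1], sys.argv[2]  # e.g. build-2cores.apk build-8cores.apk
    digest_a, digest_b = sha256_of(first), sha256_of(second)
    print(first, digest_a)
    print(second, digest_b)
    print("reproducible" if digest_a == digest_b else "NOT reproducible")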


New Gradle plugin for reproducibility

A new plugin for the Gradle build tool for Java has been released. This easily-enabled plugin results in:

reproducibility settings [being] applied to some of Gradle’s built-in tasks that should really be the default. Compatible with Java 8 and Gradle 8.3 or later.


Website updates

There were a rather substantial number of improvements made to our website this month, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In September, a number of changes were made by Holger Levsen, including:

  • Debian-related changes:

    • Upgrade the osuosl4 node to Debian trixie in anticipation of running debrebuild and rebuilderd there. [][][]
    • Temporarily mark the osuosl4 node as offline due to ongoing xfs_repair filesystem maintenance. [][]
    • Do not warn about (very old) broken nodes. []
    • Add the riscv64 architecture to the multiarch version skew tests for Debian trixie and sid. [][][]
    • Mark the virt{32,64}b nodes as down. []
  • Misc changes:

    • Add support for powercycling OpenStack instances. []
    • Update the fail2ban to ban hosts for 4 weeks in total [][] and take care to never ban our own Jenkins instance. []

In addition, Vagrant Cascadian recorded a disk failure for the virt32b and virt64b nodes [], performed some maintenance of the cbxi4a node [][] and marked most armhf architecture systems as being back online.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Cryptogram China Possibly Hacking US “Lawful Access” Backdoor

The Wall Street Journal is reporting that Chinese hackers (Salt Typhoon) penetrated the networks of US broadband providers, and might have accessed the backdoors that the federal government uses to execute court-authorized wiretap requests. Those backdoors have been mandated by law—CALEA—since 1994.

It’s a weird story. The first line of the article is: “A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers.” This implies that the attack wasn’t against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers.

For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys. And here is one more example of a backdoor access mechanism being targeted by the “wrong” eavesdroppers.

Other news stories.

Cryptogram Auto-Identification Smart Glasses

Two students have created a demo of a smart-glasses app that performs automatic facial recognition and then information lookups. Kind of obvious, but the sort of creepy demo that gets attention.

News article.

Cryptogram Largest Recorded DDoS Attack is 3.8 Tbps

Cloudflare just blocked the current record DDoS attack: 3.8 terabits per second. (Lots of good information on the attack, and DDoS in general, at the link.)

News article.

365 TomorrowsTolerate

Author: Julian Miles, Staff Writer George is waving his arms about again: never a good sign. Neela catches my eye and nods towards him, raising her eyebrows and frowning. Receiving the ‘sort it out’ message loud and clear, I take a last drag, then stub out my smoke. His voice fades in as I approach. […]

The post Tolerate appeared first on 365tomorrows.

Cory DoctorowSpill, part one (a Little Brother story)

Will Staehle's cover for 'Spill': a white star on an aqua background; a black stylized fist rises out of the star with a red X over its center.

This week on my podcast, I read part one of “Spill“, a new Little Brother story commissioned by Clay F Carlson and published on Reactor, the online publication of Tor Books. Also available in DRM-free ebook form as a Tor Original.

Doctors smoke. Driving instructors text and drive. Dentists eat sugary snacks before bed. And hackers? Well, we’re no better at taking our own advice than anyone else.

Take “There is no security in obscurity”—if a security system only works when your enemies don’t understand it, then your security system doesn’t work.

A couple of years ago, I decided I wanted to move off the cloud. “There’s no such thing as the cloud, there’s only other people’s computers.” If you trust Google (or Apple, or, God help you, Amazon) to host your stuff, well, let’s just say I don’t think you’ve thought this one through, pal.

I Am Good at Nerd, and managing a server for my own email and file transfers and streaming media didn’t seem that hard. I’d been building PCs since I was fifteen. I even went through a phase where I built my own laptops, so why couldn’t I just build myself a monster-ass PC with stupid amounts of hard drives and RAM and find a data center somewhere that would host it?


MP3

Planet DebianReproducible Builds (diffoscope): diffoscope 279 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 279. This version includes the following changes:

[ Chris Lamb ]
* Drop removal of calculated basename from readelf output.
  (Closes: reproducible-builds/diffoscope#394)

You can find out more by visiting the project homepage.

,

Planet DebianBits from Debian: Bits from the DPL

Dear Debian community,

these are my bits from the DPL for September.

New lintian maintainer

I'm pleased to welcome Louis-Philippe Véronneau as a new Lintian maintainer. He humorously acknowledged his new role, stating, "Apparently I'm a Lintian maintainer now". I remain confident that we can, and should, continue modernizing our policy checker, and I see this as one important step toward that goal.

SPDX name / license tools

There was a discussion about deprecating the unique names for DEP-5 and migrating to fully compliant SPDX names.

Simon McVittie wrote: "Perhaps our Debian-specific names are better, but the relevant question is whether they are sufficiently better to outweigh the benefit of sharing effort and specifications with the rest of the world (and I don't think they are)." Also Charles Plessy sees the value of deprecating the Debian ones and align on SPDX.

The thread on debian-devel list contains several practical hints for writing debian/copyright files.

proposal: Hybrid network stack for Trixie

There was a very long discussion on the debian-devel list about the network stack on Trixie that started in July and continued at the end of August / beginning of September. The discussion was also covered on LWN. It continued in a "proposal: Hybrid network stack for Trixie" by Lukas Märdian.

Contacting teams

I continued reaching out to teams in September. One common pattern I've noticed is that most teams lack a clear strategy for attracting new contributors. Here's an example snippet from one of my outreach emails, which is representative of the typical approach:

Q: Do you have some strategy to gather new contributors for your team?
A: No.
Q: Can I do anything for you?
A: Everything that can help to have more than 3 guys :-D

Well, only the first answer, "No," is typical. To help the JavaScript team, I'd like to invite anyone with JavaScript experience to join the team's mailing list and offer to learn and contribute. While I've only built a JavaScript package once, I know this team has developed excellent tools that are widely adopted by others. It's an active and efficient team, making it a great starting point for those looking to get involved in Debian. You might also want to check out the "Little tutorial for JS-Team beginners".

Given the lack of a strategy to actively recruit new contributors--a common theme in the responses I've received--I recommend reviewing my talk from DebConf23 about teams. The Debian Med team would have struggled significantly in my absence (I've paused almost all work with the team since becoming DPL) if I hadn't consistently focused on bringing in new members. I'm genuinely proud of how the team has managed to keep up with the workload (thank you, Debian Med team!). Of course, onboarding newcomers takes time, and there's no guarantee of long-term success, but if you don't make the effort, you'll never find out.

OS underpaid

The Register, in its article titled "Open Source Maintainers Underpaid, Swamped by Security, Going Gray", summarizes the 2024 State of the Open Source Maintainer Report. I find this to be an interesting read, both in general and in connection with the challenges mentioned in the previous paragraph about finding new team members.

Kind regards Andreas.

365 TomorrowsOn the Plane

Author: Joann Yu A woman sat on a plane next to a man. He had blond hair tied in a tiny bud, wore a blue sweatshirt, and a black mask covered half of his face. She didn’t know if he had blue eyes. She didn’t dare to look at his face. On the plane, she […]

The post On the Plane appeared first on 365tomorrows.

,

David BrinThe dangerous chimera called 'scientism'

The crusade to discredit all fact-using professions is an existential threat to us all -- a deliberate effort to lobotomize-away any influence by folks who actually know stuff.  

One of the core elements of this campaign is to deride modern science as a 'mere religion'. A religion called 'scientism'. That cult incantation - aiming to cancel out all nerds and every kind of 'expert' - is promoted in this article.

One raver, denouncing Scientific American's endorsement of pro-fact candidates, said: 

"...worshippers at this new altar seem determined to usher in a new post-modern utopia in which science and religion are fused once again. In that light, they cannot help but endorse Kamala Harris because their consciences won’t allow them to do otherwise. It’s not a choice dictated by science, but by theology."

Parse it. The fundamental goal is to demean fact-professions by their own standards, by calling them (without any hint of evidence, or irony) mere boffin-lemmings, yelping in unison as they worship the current paradigm and repress dissenting views.

Of course this is the masturbation-incantation of morons who know nothing about how science works, but desperately seek to justify their war against it. To which I routinely reply:

"Step up now with $$$ wager stakes.
Let's start by forming an eclectic group to visit the research university nearest to you. 
There we'll knock on twenty random doors, to see if even one person matches your egregiously dumb and insulting slander toward those who strive to advance understanding of the universe.*

"If you knew any scientists at all, you'd know we are the most COMPETITIVE beings that this species - that this planet - ever produced. A young scientist only gets anywhere by finding some corner of a standard model and poking at it until something gives. And thus the model improves... or else gets replaced.

"In fact RIGHT NOW I demand that you name a fact-based profession that is not warred upon by Fox n' pals. Go on, name one. One fact profession whose members aren't mass-fleeing your mad cult. (I can name one, but can you?)

"Not just science but also medicine and law and civil service, ranging all the way to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. *

"The latter, mostly lifelong republicans, can now see that the Republican Party has become a Kremlin-serving treason cult. Few of those current and former officers have become committed Democrats! But almost all have left the GOP madness in disgust and taken long showers."


*(And yes, as we enter the danger days of October surprises, even possible Reichstag fires and McVeighsions, we need to pray for the skilled dedication and good work of 80,000 fine men and women heroes in the F... B...I...)


== The underlying narrative of the 'scientism' schtick ==

The entire premise of this campaign to discredit every center of influence other than oligarchy is simple. It boils down to smart people are stupid. 

Parse it. Sure, we all know that:

"High intelligence and knowledge don't automatically make you wise." 

That's a truth we all understand. 

But today's Kremlin-led, foxite cult to sabotage the West has converted that truth into the following lie: 

"High intelligence and knowledge automatically make you unwise."

When it is parsed that purely, they always shrink back and deny it. But they also know that psychotically rephrased version is exactly the campaign pushed by the entire Fox-o-Sphere cult, in their war vs. all fact using professions. 

In their relentless yammer campaigns against universities! The flawed jewels built by the GI Bill generation that have been responsible for most of the wonders that - for 80 years - truly made America great.


== The current U.S. struggle ==

 

Again, want super-strong evidence that the Republican Party has changed and been hijacked? Well, there are the Cheneys… and hundreds of former GOP officials, including almost every high ‘adult in the room’ during Trump’s Presidency, who have issued public denunciations with signature pages ten or more pages long. 


(There will be NO such 'adults in the room,' during any Trump II.)


And then there’s this…

…showing how likely it is that Ronald Reagan would despise today’s Republican love affair with the same ‘evil empire’ - (slightly relabeled) - that he fought against. I show jpegs of Reagan’s own 1970 re-election flyer. Only jeepers, look at how progressive and liberal he was on so many issues, compared to today’s open Confederates. Golly.



== And finally, pictures are more persuasive! ==

 

I worked hard on this image, which encapsulates in one montage a partial panoply of deplorables who are best buds with Donald Trump. It has so much content, you may need to copy and expand, before you share it around. But in this case the sheer number makes it hard for residually sane Republicans (and we all know a couple) to shrug off. 


And peeling away just half a million such residual decents is really all we need. So use this!



Get them to look at the gloating faces of Trump & Lavrov & Kisliak in January 2017, when they were DT's first and most-beloved guests in the Oval Office, long before any ally, giggling that the USA had fallen to them so perfectly.


 Look at the faces up close and read the caption. And remember Trump raving that he "fell in love" with Kim Jong Un. 


 This is why almost all of DT's former national security folks, from Defense and State to intel agencies to serving officers, have called him a direct threat to the nation. But that won't last if he's elected. Those folks will all be arrested and silenced. There will be no further 'adults in the room.'

 But we proved resilient.  And we will, yet again. This renaissance is just beginning.

365 TomorrowsEternal Dissolution

Author: Liv The need to write has become urgent. My thoughts, once manageable, are now turbulent, like the incessant ticking of a clock, warning of something terrible. I haven’t slept in days, and I bite my nails to the flesh. The cause of my horror is real. My name is Carmélia, 26, and though there’s […]

The post Eternal Dissolution appeared first on 365tomorrows.

,

Planet DebianJonathan Dowland: synths

Although I've never written about them, I've been interested in music synthesisers for ages. My colleagues know this. Whilst I've been off sick, they had a whip-round and bought me a voucher for Andertons, a UK-based music store, to cheer me up.

I'm absolutely floored by this generosity. And so, I'm now on a quest to buy a synthesizer! Although, not my first one.

Alesis Micron on my desk, taunting me

I bought my first synth, an Alesis Micron, from a colleague at $oldjob, 16 years ago. For various reasons, I've struggled to engage with it, and it's mostly been gathering dust on my desk in all that time. (I might write more about the Micron in a later blog post). "Bad Gear" sums it up better than I could:

So, I'm not truly buying my "first" synth, but for all intents and purposes I'm on a similar journey to if I was, and I thought it might be fun to write about it.

Goals

I want something which has as many of its parameters presented physically, as knobs or sliders etc., as possible. One reason I've failed to engage with the Micron (so far) is it's at the other end of this spectrum, with hundreds of tunable parameters but a small handful of knobs. To change parameters you have to go diving into menus presented on a really old-fashioned, small LCD display. If you know what you are looking for, you can probably find it; but if you just want to experiment and play around, it's off-putting.

Secondly, I want something I can use away from a computer, as much as possible. Computers are my day-job, largely dominate my existing hobbies, and are unavoidable even in some of the others (like 3d printing). Most of the computers I interact with run Linux. And for all its strengths, audio management is not one of them. If I'm going to carve out some of my extremely limited leisure time to explore this stuff, I don't want to spend any of it (at least for now) fighting Pulseaudio/ALSA/Pipewire/JACK/OSS/whatever, or any of the other foibles that might crop up1.

Thirdly, I'd like something which, in its soul, is an instrument. You can get some amazing little synth boxes with a huge number of features in them. Something with a limited number of features but which really feels well put together would suit me better.

So… next time, I'll write about the 2-3 top candidates on my list. Can you guess what they might be?


  1. To give another example. The other day I sat down to try and use the Micron, which had its audio out wired into an external audio interface, in turn plugged into my laptop's Thunderbolt dock. For a while I couldn't figure out why I couldn't hear anything, until I realised the Thunderbolt dock was having "a moment" and not presenting its USB devices to the laptop. Hobby time window gone!

365 TomorrowsInto the fold

Author: R. J. Erbacher Scott heard the girl’s scream and thought, ‘Oh crap, not again.’ He was taking out the boxes from the new gadgets he had bought for his recently acquired apartment, bringing them into the alley to be disposed of properly. Scott figured it was still too early in the night for the […]

The post Into the fold appeared first on 365tomorrows.

Planet DebianBits from Debian: Debian welcomes Freexian as our newest partner!

Freexian logo

We are excited to announce and welcome Freexian into Debian Partners.

Freexian specializes in Free Software with a particular focus on Debian GNU/Linux. Freexian can assist with consulting, training, technical support, packaging, or software development on projects involving use or development of Free software.

All of Freexian's employees and partners are well-known contributors in the Free Software community, a choice that is integral to Freexian's business model.

About the Debian Partners Program

The Debian Partners Program was created to recognize companies and organizations that help and provide continuous support to the project with services, finances, equipment, vendor support, and a slew of other technical and non-technical services.

Partners provide critical assistance, help, and support which has advanced and continues to further our work in providing the 'Universal Operating System' to the world.

Thank you Freexian!

,

Cryptogram AI and the 2024 US Elections

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painstakingly slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would resolve the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted the ball, very likely until after the election.

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video. The FEC also said that it lacks the statutory authority to make rules about misrepresentations using deepfaked audio or video. And it lamented that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of what technology it is conducted with. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped its authorities. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the Biden ad in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.

This important public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe with each other about what territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into and governance of the algorithmic amplification of misinformation on social-media platforms. That requires that the pervasive influence of tech companies and their billionaire investors should be limited through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceiving videos harm our democratic process, whether they are created by AI or actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

This essay was written with Nathan Sanders, and originally appeared in The Atlantic.

Krebs on SecurityA Single Cloud Compromise Can Feed an Army of AI Sex Bots

Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.

Image: Shutterstock.

Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.

Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled full logging of LLM activity (by default, logs don’t include model prompts and outputs), and thus they lacked any visibility into what attackers were doing with that access.

So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.

Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.

“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.

“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”

Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.

“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”

Ahl said much of the AI-powered chat conversations initiated by the users of their honeypot AWS key were harmless roleplaying of sexual behavior.

“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”

AWS’s Bedrock uses large language models from Anthropic, which incorporates a number of technical restrictions aimed at placing certain ethical guardrails on the use of their LLMs. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.

“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.

In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”

Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.

Some of the AI chat bot characters offered by Chub. Some of these characters include the tags “rape” and “incest.”

Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”

Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.

Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:

Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.

Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.

KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.

AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.

Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.

Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.

Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.

The user stats page at Chub shows more than 2,113 people have subscribed to its AI conversation bots with the “Not Safe for Life” designation.

Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Most of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key.

Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.

“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”

In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:

“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”

“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”

AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.
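As an illustration of what that configuration might look like, here is a hedged Python sketch using boto3. It assumes the bedrock client's put_model_invocation_logging_configuration call and its parameter names; the region, log group, bucket and role ARN are placeholders, not values from the article.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        # Deliver prompts and outputs to CloudWatch Logs...
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocations",
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        # ...and/or to S3 for long-term retention.
        "s3Config": {
            "bucketName": "example-bedrock-invocation-logs",
            "keyPrefix": "invocations/",
        },
        "textDataDeliveryEnabled": True,   # include model prompts and outputs
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)

# Verify what is currently configured for the account.
print(bedrock.get_model_invocation_logging_configuration())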

The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.

Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to jailbreaks.

“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”

Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.

Update: 5:01 p.m. ET: Chub has issued a statement saying they are only hosting the role-playing characters, and that the LLMs they use run on their own infrastructure.

“Our own LLMs run on our own infrastructure,” Chub wrote in an emailed statement. “Any individuals participating in such attacks can use any number of UIs that allow user-supplied keys to connect to third-party APIs. We do not participate in, enable or condone any illegal activity whatsoever.”

365 TomorrowsNew Mutant

Author: Mark Renney The moment is almost here. At last, after all the speculation and rumour, the grand reveal. A cage has been wheeled onto the stage, sitting at its centre, covered by a white sheet, pristine and perfect. Everyone is certain that, when the cover is pulled away, it will be intricate and ornate […]

The post New Mutant appeared first on 365tomorrows.

Planet DebianMike Gabriel: Creating (a) new frontend(s) for Polis

After (quite) a summer break, here comes the 4th article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH.

Have fun with the read on Guido's work on Polis,
Mike

Table of Contents of the Blog Post Series

  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis (this article)
  5. Current status and roadmap

4. Creating (a) new frontend(s) for Polis

Why a new frontend was needed...

Our initial experiences of working with Polis, the effort required to implement more invasive changes, and the desire to iterate on changes more rapidly ultimately led to the decision to create a new foundation for frontend development that would be independent of, but compatible with, the upstream project.

Our primary objective was thus not to develop another frontend but rather to make frontend development more flexible and to facilitate experimentation and rapid prototyping of different frontends by providing abstraction layers and building blocks.

This also implied developing a corresponding backend, since the Polis backend is tightly coupled to the frontend: it is neither intended to be used by third-party projects nor does it support cross-domain requests, due to the expectation of being embedded as an iframe on third-party websites.

The long-term plan for achieving our objectives is to provide three abstraction layers for building frontends:

  • a stable cross-domain HTTP API
  • a low-level JavaScript library for interacting with the HTTP API
  • a high-level library of WebComponents as a framework-neutral way of rapidly building frontends

The Particiapp Project

Under the umbrella of the Particiapp project we have so far developed two new components:

  • the Particiapi server which provides the HTTP API
  • the example frontend project which currently contains both the client library and an experimental example frontend built with it

Both the participation frontend and backend are fully compatible with and require an existing Polis installation, and can be run alongside the upstream frontend. More specifically, the administration frontend and common backend are required to administer conversations and send out notifications, and the statistics processing server is required for processing the voting results.

Particiapi server

For the backend the Python language and the Flask framework were chosen as a technological basis mainly due to developer mindshare, a large community and ecosystem and the smaller dependency chain and maintenance overhead compared to Node.js/npm. Instead of integrating specific identity providers we adopted the OpenID Connect standard as an abstraction layer for authentication which allows delegating authentication either to a self-hosted identity provider or a large number of existing external identity providers.
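To make the OpenID Connect abstraction concrete, here is a minimal sketch of how a Flask backend can delegate authentication to any OIDC-compliant provider. This is not the Particiapi server's actual code; it assumes the Authlib library, and the provider URL, client credentials and routes are placeholders.

from flask import Flask, session, url_for
from authlib.integrations.flask_client import OAuth

app = Flask(__name__)
app.secret_key = "change-me"

oauth = OAuth(app)
oauth.register(
    name="idp",
    client_id="particiapi",
    client_secret="example-secret",
    # Any OIDC provider works here; only the metadata URL changes.
    server_metadata_url="https://idp.example.org/.well-known/openid-configuration",
    client_kwargs={"scope": "openid email profile"},
)

@app.route("/login")
def login():
    return oauth.idp.authorize_redirect(url_for("auth_callback", _external=True))

@app.route("/auth/callback")
def auth_callback():
    token = oauth.idp.authorize_access_token()
    session["user"] = token.get("userinfo")  # identity claims from the IdP
    return {"logged_in_as": session["user"].get("preferred_username")}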

Particiapp Example Frontend

The experimental example frontend serves both as a test bed for the client library and as a tool for better understanding the needs of frontend designers. It also features a completely redesigned user interface and results visualization in line with our goals. Branded variants are currently used for evaluation and testing by the stakeholders.

In order to simplify evaluation, development, testing and deployment a Docker Compose configuration is made available which contains all necessary components for running Polis with our experimental example frontend. In addition, a development environment is provided which includes a preconfigured OpenID Connect identity provider (KeyCloak), SMTP-Server with web interface (MailDev), and a database frontend (PgAdmin). The new frontend can also be tested using our public demo server.

Cryptogram Weird Zimbra Vulnerability

Hackers can execute commands on a remote computer by sending malformed emails to a Zimbra mail server. It’s critical, but difficult to exploit.

In an email sent Wednesday afternoon, Proofpoint researcher Greg Lesnewich seemed to largely concur that the attacks weren’t likely to lead to mass infections that could install ransomware or espionage malware. The researcher provided the following details:

  • While the exploitation attempts we have observed were indiscriminate in targeting, we haven’t seen a large volume of exploitation attempts
  • Based on what we have researched and observed, exploitation of this vulnerability is very easy, but we do not have any information about how reliable the exploitation is
  • Exploitation has remained about the same since we first spotted it on Sept. 28th
  • There is a PoC available, and the exploit attempts appear opportunistic
  • Exploitation is geographically diverse and appears indiscriminate
  • The fact that the attacker is using the same server to send the exploit emails and host second-stage payloads indicates the actor does not have a distributed set of infrastructure to send exploit emails and handle infections after successful exploitation. We would expect the email server and payload servers to be different entities in a more mature operation.
  • Defenders protecting Zimbra appliances should look out for odd CC or To addresses that look malformed or contain suspicious strings, as well as logs from the Zimbra server indicating outbound connections to remote IP addresses.

,

Worse Than FailureCodeSOD: Join or Die

Seuf sends us some old code, which entered production in 2011. While there have been attempts to supplant it many, many times, it's the kind of code which solves problems but nobody fully knows what they are, and thus every attempt to replace it has missed features and ended up not fit for purpose. That the tool is unmaintainable, buggy, and slow? Well, so it goes.

Today's snippet is Perl:

my $query = "SELECT id FROM admin_networks WHERE id='8' or id='13' or id='14' or id='16' or id='22' or id='26' or id='27' or id='23' or id='40' or id='39' or id='33' or id='31'";
my $sth = $dbh->prepare($query);
$sth->execute or die "Error : $DBI::errstr\n";
while(my $id_network=$sth->fetchrow_array()){
    my $query2 = "SELECT name FROM admin_routeurs where networkid='$id_network'";
    my $sth2 = $dbh->prepare($query2);
    $sth2->execute or die "Error : $DBI::errstr\n";
    while(my $name=$sth2->fetchrow_array()){

        print LOG "name : $name\n";
        print FACTION "$name\n";
    }
}

Now, I have to be honest, my favorite part of Perl is the or die idiom. "Do this thing, or die." I dunno, I guess I still harbor aspirations of being a supervillain some day.

But here we have a beautiful little bit of bad code. We have a query driven from code with a pile of magic numbers, using a chain of ORs instead of an IN operation for the check. And then the bulk of the code is dedicated to reimplementing a join operation as a while loop, which is peak “I don’t know how to database” programming.

This, I think, explains the “slow”: we have to do a round trip to the database for every network we manage, just to get its routers. This pattern of “join in code” is used everywhere; the database’s join operations are not.
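For contrast, the whole nested loop collapses into a single joined query. The sketch below is Python with the standard-library sqlite3 module, purely to illustrate the SQL; the original script is Perl and its real database is not SQLite, so treat the connection details as placeholders.

import sqlite3

NETWORK_IDS = (8, 13, 14, 16, 22, 26, 27, 23, 40, 39, 33, 31)

conn = sqlite3.connect("example.db")  # stand-in for the real database handle
placeholders = ",".join("?" for _ in NETWORK_IDS)
query = f"""
    SELECT r.name
    FROM admin_routeurs AS r
    JOIN admin_networks AS n ON n.id = r.networkid
    WHERE n.id IN ({placeholders})
"""
# One round trip instead of one query per network.
for (name,) in conn.execute(query, NETWORK_IDS):
    print(f"name : {name}")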

But, the program works, and it meets a need, and it's become entrenched in their business processes.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsA Semblance of Bravery

Author: James Callan The holographer did more than tell us who was next on their list to be murdered, though that alone would suffice as unnerving. They didn’t mention names at all, opting for an artistic approach, something avant-garde to demonstrate their next dreadful slaughter. The holographer had their modus operandi, their eccentric, sadistic show-and-tell. […]

The post A Semblance of Bravery appeared first on 365tomorrows.

Cryptogram California AI Safety Bill Vetoed

Governor Newsom has vetoed the state’s AI safety bill.

I have mixed feelings about the bill. There’s a lot to like about it, and I want governments to regulate in this space. But, for now, it’s all EU.

(Related, the Council of Europe treaty on AI is ready for signature. It’ll be legally binding when signed, and it’s a big deal.)

,

Planet DebianRavi Dwivedi: State of the Map Conference in Kenya

Last month, I traveled to Kenya to attend a conference called State of the Map 2024 (“SotM” for short), which is an annual meetup of OpenStreetMap contributors from all over the world. It was held at the University of Nairobi Towers in Nairobi, from the 6th to the 8th of September.

University of Nairobi.

I have been contributing to OpenStreetMap for the last three years, and this conference seemed like a great opportunity to network with others in the community. As soon as I came across the travel grant announcement, I jumped in and filled the form immediately. I was elated when I was selected for the grant and couldn’t wait to attend. The grant had an upper limit of €1200 and covered food, accommodation, travel and miscellaneous expenses such as visa fee.

Pre-travel tasks included obtaining Kenya’s eTA and getting a yellow fever vaccine. Before the conference, Mikko from the Humanitarian OpenStreetMap Team introduced me to Rabina and Pragya from Nepal, Ibtehal from Bangladesh, and Sajeevini from Sri Lanka. We all booked the Nairobi Transit Hotel, which was within walking distance of the conference venue. Pragya, Rabina, and I traveled together from Delhi to Nairobi, while Ibtehal was my roommate in the hotel.

Our group at the conference.

The venue, University of Nairobi Towers, was a tall building and the conference was held on the fourth, fifth and sixth floors. The open area on the fifth floor of the building had a nice view of Nairobi’s skyline and was a perfect spot for taking pictures. Interestingly, the university had a wing dedicated to Mahatma Gandhi, who is regarded in India as the Father of the Nation.

View of Nairobi's skyline from the open area on the fifth floor.

A library in Mahatma Gandhi wing of the University of Nairobi.

The diversity of the participants was mind-blowing, with people coming from a whopping 54 countries. I was surprised to notice that I was the only participant traveling from India, despite India having a large OpenStreetMap community. That said, there were two other Indian participants who traveled from other countries. I finally got to meet Arnalie (from the Philippines) and Letwin (from Zimbabwe), both of whom I had only met online before. I had met Anisa (from Albania) earlier during DebConf 2023. But I missed Mikko and Honey from the Humanitarian OpenStreetMap Team, whom I knew from the Open Mapping Guru program.

I learned about the extent of OSM use through Pragya and Rabina’s talk; about the logistics of running the OSM Board, in the OSMF (OpenStreetMap Foundation) session; about the Youth Mappers from Sajeevini, about the OSM activities in Malawi from Priscilla Kapolo, and about mapping in Zimbabwe from Letwin. However, I missed Ibtehal’s lightning session. The ratio of women speakers and participants at the conference was impressive, and I hope we can get such gender representation in our Delhi/NCR mapping parties.

One of the conference halls where talks took place.

Outside of talks, the conference also had lunch and snack breaks, giving ample time for networking with others. In the food department, there were many options for a lacto-ovo vegetarian like myself, including potatoes, rice, beans, chips etc. I found out that the milk tea in Kenya (referred to as “white tea”) is usually not as strong compared to India, so I switched to coffee (which is also called “white coffee” when taken with milk). The food wasn’t spicy, but I can’t complain :) Fruit juices served as a nice addition to lunch.

One of the lunch meals served during the conference.

At the end of the second day of the conference, there was a surprise in store for us — a bus ride to the Bao Box restaurant. The ride gave us the experience of a typical Kenyan matatu (privately-owned minibuses used as share taxis), complete with loud rap music. I remember one of the songs being Kraff’s Nursery Rhymes. That day, I was wearing an original Kenyan cricket jersey - one that belonged to Dominic Wesonga, who represented Kenya in four ODIs. This confused Priscilla Kapolo, who asked if I was from Kenya! Anyway, while it served as a good conversation starter, it didn’t attract as much attention as I expected :) I had some pizza and chips there, and later some drinks with Ibtehal. After the party, Piyush went with us to our hotel and we played a few games of UNO.

Minibus which took us from the university to Bao Box restaurant.

This minibus in the picture gave a sense of a real matatu.

I am grateful to the organizers Laura and Dorothea for introducing me to Nikhil when I was searching for a companion for my post-conference trip. Nikhil was one of the aforementioned Indian participants, and a wildlife lover. We had some nice conversations; he wanted to go to the Masai Mara National Reserve, but it was too expensive for me. In addition, all the safaris were multi-day affairs, and I wasn’t keen on being around wildlife for that long. Eventually I chose to go my own way, exploring the coastal side and visiting Mombasa.

While most of the work regarding the conference was done using free software (including the reimbursement form and Mastodon announcements), I was disappointed by the use of WhatsApp for coordination with the participants. I don’t use WhatsApp and so was left out. WhatsApp is proprietary software (they do not provide the source code) and users don’t control it. It is common to highlight that OpenStreetMap is controlled by users and the community, rather than a company - this should apply to WhatsApp as well.

My suggestion is to use XMPP, which shares similar principles with OpenStreetMap, as it is privacy-respecting, controlled by users, and powered by free software. I understand the concern that there might not be many participants using XMPP already. Although it is a good idea to onboard people to free software like XMPP, we can also create a Matrix group, and bridge it with both the XMPP group and the Telegram group. In fact, using Matrix and bridging it with Telegram is how I communicated with the South Asian participants. It’s not ideal - as Telegram’s servers are proprietary and centralized - but it’s certainly much better than creating a WhatsApp-only group. The setup can be bridged with IRC as well. On the other hand, self-hosted mailing lists for participants is also a good idea.

Finally, I would like to thank SotM for the generous grant, enabling me to attend this conference, meet the diverse community behind OSM and visit the beautiful country of Kenya. Stay tuned for the blog post on Kenya trip.

Thanks to Sahilister, Contrapunctus, Snehal and Badri for reviewing the draft of this blog post before publishing.

Planet DebianColin Watson: Free software activity in September 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Pydantic

My main Debian project for the month turned out to be getting Pydantic back into a good state in Debian testing. I’ve used Pydantic quite a bit in various projects, most recently in Debusine, so I have an interest in making sure it works well in Debian. However, it had been stalled on 1.10.17 for quite a while due to the complexities of getting 2.x packaged. This was partly making sure everything else could cope with the transition, but in practice mostly sorting out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck.

Learning Rust is on my to-do list, but merely not knowing a language hasn’t stopped me before. So I learned how the Debian Rust team’s packaging works, upgraded a few packages to new upstream versions (including rust-half and upstream rust-idna test fixes), and packaged rust-jiter. After a lot of waiting around for various things and chasing some failures in other packages I was eventually able to get current versions of both pydantic-core and pydantic into testing.

I’m looking forward to being able to drop our clunky v1 compatibility code once debusine can rely on running on trixie!

OpenSSH

I upgraded the Debian packaging to OpenSSH 9.9p1.

YubiHSM

I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new upstream versions.

I noticed that I could enable some tests in python-yubihsm and yubihsm-shell; I’d previously thought the whole test suite required a real YubiHSM device, but when I looked closer it turned out that this was only true for some tests.

I fixed yubihsm-shell build failures on some 32-bit architectures (upstream PRs #431, #432), and also made it build reproducibly.

Thanks to Helmut Grohne, I fixed yubihsm-connector to apply udev rules to existing devices when the package is installed.

As usual, bookworm-backports is up to date with all these changes.

Python team

setuptools 72.0.0 removed the venerable setup.py test command. This caused some fallout in Debian, some of which was quite non-obvious as packaging helpers sometimes fell back to different ways of running test suites that didn’t quite work. I fixed django-guardian, manuel, python-autopage, python-flask-seeder, python-pgpdump, python-potr, python-precis-i18n, python-stopit, serpent, straight.plugin, supervisor, and zope.i18nmessageid.

As usual for new language versions, the addition of Python 3.13 caused some problems. I fixed psycopg2, python-time-machine, and python-traits.

I fixed build/autopkgtest failures in keymapper, python-django-test-migrations, python-rosettasciio, routes, transmissionrpc, and twisted.

buildbot was in a bit of a mess due to being incompatible with SQLAlchemy 2.0. Fortunately by the time I got to it upstream had committed a workable set of patches, and the main difficulty was figuring out what to cherry-pick since they haven’t made a new upstream release with all of that yet. I figured this out and got us up to 4.0.3.

Adrian Bunk asked whether python-zipp should be removed from trixie. I spent some time investigating this and concluded that the answer was no, but looking into it was an interesting exercise anyway.

On the other hand, I looked into flask-appbuilder, concluded that it should be removed, and filed a removal request.

I upgraded some embedded CSS files in nbconvert.

I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings, pylint (fixing a test failure), python-aiohttp-session, python-apptools, python-asyncssh, python-django-celery-beat, python-django-rules, python-limits, python-multidict, python-persistent, python-pkginfo, python-rt, python-spur, python-zipp, stravalib, transmissionrpc, vulture, zodbpickle, zope.exceptions (adopting it), zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.

debmirror

The experimental and *-proposed-updates suites used to not have Contents-* files, and a long time ago debmirror was changed to just skip those files in those suites. They were added to the Debian archive some time ago, but debmirror carried on skipping them anyway. Once I realized what was going on, I removed these unnecessary special cases (#819925, #1080168).

Planet DebianJunichi Uekawa: Hello October.

Hello October. I've been trying to do the GPG signing from Debconf but my backlog of stuff is in my way.

Planet DebianGuido Günther: Free Software Activities September 2024

Another short status update of what happened on my side last month. Besides the usual amount of housekeeping, last month was a lot about getting old issues resolved by finishing some stale merge requests and work-in-progress MRs. I also pushed out the Phosh 0.42.0 release.

phosh

  • Mark mobile-data quick setting as insensitive when modem is off (MR)
  • Document handler naming (MR)
  • Phosh 0.41.1 (MR)
  • Phosh 0.42~rc1 (MR)
  • Phosh 0.42.0 (MR)
  • Handle per app notification enable setting (MR) (a 3y old MR cleaned up and out of the way)
  • Use parent's icon if child doesn't have one (MR) (another 1y old MR moved out of draft status)
  • Fix Rust build and upcoming events .plugin file (MR)
  • Lint markdown (MR)
  • Sanitize versions as this otherwise breaks the libphosh-rs build (MR)
  • lockscreen: Swap deck and carousel to avoid triggering the plugins page when entering pin and let the lockscreen shrink to smaller sizes (MR) (two more year old usability issues out of the way)
  • Let bitfield values end up in the docs again (MR)
  • Don't focus incorrect app on launch (MR). This could happen with apps like calls that run a daemon (and needs more work for a clean solution).
  • Continue with wallpaper MR (MR) (still draft)
  • Brush up and land an old MR to avoid crashes on scale changes (MR). Another five month old MR out of the way.
  • API version the shared library (MR)
  • Ensure we send enough feedback when phone is blanked/locked (MR). This should be way easier now for apps as they don't need to do anything and we can avoid duplicate feedback sent from e.g. Chatty.
  • Fix possible use after free when activating notifications on the lock screen (MR)

phoc

  • Simplify layer-surface creation / destruction (MR)
  • Don't lose preedit when switching applications, opening menus, etc (MR). This fixes the case (e.g. with word completion in phosh-osk-stub enabled) where it looks to the user as if the last typed word would get lost when switching from a text editor to another app or when opening a menu
  • Ease focus debugging (MR)
  • Release 0.42~rc1 (MR)
  • Release 0.42.0 (MR)
  • Mention examples in docs and check more things (MR)

phosh-mobile-settings

  • Release 0.42~rc1 (MR)
  • Release 0.42 (MR)
  • Update ci-fairy (MR)

libphosh-rs

  • Update Phosh-0.gir with above phosh fixes to unbreak the build (MR)
  • Rework to work with API versioned libphosh (MR)

phosh-osk-stub

  • Add paste button to easy pasting text (MR)
  • Add copy button (draft) (MR)
  • Fix word salad with presage completer when entering cursor navigation mode (and in some other cases) (MR 1). Presage has the best completion but was marked experimental due to that.
  • Submit preedit on changes to terminal and emoji layout (MR)
  • Enable hint based completion by default (MR)
  • Release 0.42~rc1 (MR)
  • Release 0.42.0 (MR)

phosh-wallpapers

  • Add sound for cellbroadcast (MR)
  • Release 0.42.0 (MR)

meta-phosh

  • Weekly image builds of nightly packages are now built in CI and uploaded.
  • Handle Fixes: tag in git commit messages as well (MR)
  • Let release prep handle non-RC versions as well (MR)
  • Add common markdown linter job (MR)

Debian

  • Update wlr-randr (MR)
  • Upload libqmi development snapshot (MR) (Helps eSIM and CellBroadcast)
  • Update phosh to not crash with GSD from GNOME 47 (MR)
  • Fix systemd unit path in calls (MR)
  • Package wikietractor (MR)

ModemManager

  • More work on Cell Broadcast so we can finally undraft (MR)

Calls

  • Check consistency when building releases (MR)
  • Object life cycle fixes (MR)
  • Use DBus activation (MR). This ensures it spawns quickly rather than phosh's splash screen timing out.

bluez

  • Add user unit for mpris proxy so it works out of the box (Patch) and one can skip e.g. songs in a car's media unit

gnome-text-editor

  • Wrap info-bar more (MR) to fit small screens
  • Forward metainfo/desktop file updates from Mobian (MR) (patch originally by Arnaud Ferraris)

feedbackd

  • Add udev rule to support haptics on OnePlus Fajita / Enchilada (non-mainline driver) (MR)
  • Support alert-slider on OnePlus 6/6T (MR). Based on a script by "isyourbrain foss".
  • Release 0.5.0 (MR)
  • Improve spec a bit regarding notification events (MR)

Chatty

  • Don't send feedback for notifications (MR). The notification daemon does this already.
  • Add event for cellbroadcast messages (MR)
  • Switch to DBus activation (MR). This ensures the compositor sees the activation token and will be useful for unified push.
  • Don't let scroll_down button take focus (MR). This prevents the OSK from folding when the text view is focused and one scrolls to the bottom.
  • Use revealer to show/hide scroll_down button (MR) - just to make the visual more appealing
  • Unbreak message display (MR)
  • Unbreak application icon (MR)
  • Drop special preedit handling (MR).

libcall-ui

  • Drop margin so we can fit on smaller screens (MR). This helps phosh on lower effective resolutions.
  • Backport margin patch (MR)

glib

  • Fix doc formatting for g_input_stream_read_all* (MR)

wlr-protocols

  • Add toplevel responsiveness state (MR) so phosh can inform about unresponsive apps

git-buildpackage

iio-sensor-proxy

  • Unbreak and modernize CI a bit (MR). A passing CI is so much more motivating for contributors and reviewers.

Fotema

  • Fix app-id and hence the icon shown in Phosh's overview (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

Worse Than FailureCodeSOD: Feeling Free

Jason started work on a C++ application doing quantitative work. The nature of the program involves allocating all sorts of blocks of memory, doing loads of complicated math, and then freeing them. Which means, there's code which looks like this:

for( i = 0; i < 6; i++ )
{
    if( h->quant4_bias[i] )
        free( h->quant4_bias[i] );
}

This isn't terribly unusual code. I have quibbles- why the magic number 6, I'd prefer the comparison against nullptr to be explicit- but this isn't the kind of code that's going to leave anybody scratching their head. If h->quant4_bias[i] is pointing to actual memory, free it.

But this is how that array is declared:

uint16_t        (*quant4_bias[4])[16];

Uh… the array has four elements in it. We free six elements. And shockingly, this doesn't crash. Why not? Well… it's because we get lucky. Here's that array declaration with a bit more context:

uint16_t        (*quant4_bias[4])[16];
uint16_t        (*quant8_bias[2])[64];

We iterate past the end of quant4_bias, but thankfully, the compiler has put quant8_bias at the next offset, and has decided to just let the [] operator access that memory. There's no guarantee about this- this is peak undefined behavior. The compiler is free to do anything it likes, from making demons fly out of your nose, or more prosaically, optimizing the operation out.
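
As an aside, here is a minimal sketch of how the loop could avoid both the magic number and the overrun by asking the compiler for the array's real length (std::size is C++17; a range-based for over the array would work just as well). It reuses the h->quant4_bias member from the declaration above and is only an illustration, not the application's actual code:

// Sketch only: needs <iterator> for std::size and <cstdlib> for free.
for( size_t i = 0; i < std::size( h->quant4_bias ); i++ )
{
    if( h->quant4_bias[i] != nullptr )
        free( h->quant4_bias[i] );
}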

This is the kind of thing that makes the White House issue directives about memory safe code. The absence of memory safety is a gateway to all sorts of WTFs. This one, here, is a pretty mild one, as memory bugs go.

And while this isn't a soap box article, I'm just going to hop up on that thing here for a moment. When we talk about memory safe code, we get into debates about the power of low-level access to memory versus the need for tools which are safe, and the abstraction costs of things like borrow-checkers or automated reference counting. This is a design challenge for any tool. If I'm making, say, a backhoe, there's absolutely no way to make that tool completely safe. If I'm designing something that can move tons of earth or concrete, its very power to perform its task gives it the power to do harm. We address this through multiple factors. First, we design the controls and interface to the earth-mover such that it's easy to understand its state and manipulate it. The backhoe responds to user inputs in clear, predictable ways. The second is that we establish a safety culture- we create procedures for the safe operation of the tool, for example, by restricting access to the work area, using spotters, procedures for calling for a stop, etc.

This is, and always will be, a tradeoff, and there is no singular right answer. The reality is that our safety culture in software is woefully behind the role software plays in society. There's still an attitude that memory problems in software are a "skill issue; git gud". But that runs counter to a safety culture. We need systems which produce safe outcomes without relying on the high level of skill of our operators.

Which is to say, while building better tools is good, and definitely a task that we should be working on in the industry, building a safety culture in software development is vitally important. Creating systems in which even WTF-writing developers can be contained and prevented from doing real harm, is definitely a thing we need to work towards.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsTuesday

Author: Jeremy Belcher The beeping was coming from the back of his skull. Softly at first, then loudly, it crescendoed violently and rattled him awake. Blearily, he opened his eyes to the silhouette of an enormous, hulking machine. Its high intensity spotlights were trained on him, bleating out its aggressive, high pitched beeping. Waiting. Idling. […]

The post Tuesday appeared first on 365tomorrows.

,

Krebs on SecurityCrooked Cops, Stolen Laptops & the Ghost of UGNazi

A California man accused of failing to pay taxes on tens of millions of dollars allegedly earned from cybercrime also paid local police officers hundreds of thousands of dollars to help him extort, intimidate and silence rivals and former business partners, the government alleges. KrebsOnSecurity has learned that many of the man’s alleged targets were members of UGNazi, a hacker group behind multiple high-profile breaches and cyberattacks back in 2012.

A photo released by the government allegedly showing Iza posing with several LASD officers on his payroll.

A federal complaint (PDF) filed last week said the Federal Bureau of Investigation (FBI) has been investigating Los Angeles resident Adam Iza. Also known as “Assad Faiq” and “The Godfather,” Iza is the founder of a cryptocurrency investment platform called Zort that advertised the ability to make smart trades based on artificial intelligence technology.

But the feds say investors in Zort soon lost their shorts, after Iza and his girlfriend began spending those investments on Lamborghinis, expensive jewelry, vacations, a $28 million home in Bel Air, even cosmetic surgery to extend the length of his legs.

The complaint states the FBI started looking at Iza after receiving multiple reports that he had on his payroll several active deputies with the Los Angeles Sheriff’s Department (LASD). Iza’s attorney did not immediately respond to requests for comment.

The complaint cites a letter from an attorney for a victim referenced only as "E.Z.," who was seeking help related to an extortion and robbery allegedly committed by Iza. The government says that in March 2022, three men showed up at E.Z.'s home, and tried to steal his laptop in an effort to gain access to E.Z.'s cryptocurrency holdings online. A police report referenced in the complaint says three intruders were scared off when E.Z. fired several handgun rounds in the direction of his assailants.

The FBI later obtained a copy of a search warrant executed by LASD deputies in January 2022 for GPS location information on a phone belonging to E.Z., which shows an LASD deputy unlawfully added E.Z.’s mobile number to a list of those associated with an unrelated firearms investigation.

“Damn my guy actually filed the warrant,” Iza allegedly texted someone after the location warrant was entered. “That’s some serious shit to do for someone….risking a 24 years career. I pay him 280k a month for complete resources. They’re active-duty.”

The FBI alleges LASD officers had on several previous occasions tried to kidnap and extort E.Z. at Iza’s behest. The complaint references a November 2021 incident wherein Iza and E.Z. were in a car together when Iza asked to stop and get snacks at a convenience store. While they were still standing next to the car, a van with several armed LASD deputies showed up and tried to force E.Z. to hand over his phone. E.Z. escaped unharmed, and alerted 911.

E.Z. appears to be short for Enzo Zelocchi, a self-described "actor" who was featured in an ABC News story about a home invasion in Los Angeles around the same time as the March 2022 home invasion, in which Zelocchi is quoted as saying at least two men tried to rob him at gunpoint (we'll revisit Zelocchi's acting credits in a moment).

One of many self portraits published on the Instagram account of Enzo Zelocchi.

The criminal complaint makes frequent references to a co-conspirator of Iza (“CC-1”) — his girlfriend at the time — who allegedly helped Iza run his businesses and spend the millions plunked down by Zort investors. We know what E.Z. stands for because Iza’s girlfriend then was a woman named Iris Au, and in November 2022 she sued Zelocchi for allegedly stealing Iza’s laptop.

The complaint says Iza also harassed a man identified only as T.W., and refers to T.W. as one of two Americans currently incarcerated in the Philippines for murder. In December 2018, a then 21-year-old Troy Woody Jr. was arrested in Manila after he was spotted dumping the body of his dead girlfriend Tomi Masters into a local river.

Woody is accused of murdering Masters with the help of his best friend and roommate at the time: Mir Islam, a.k.a. “JoshTheGod,” referred to in the Iza complaint as “M.I.” Islam and Woody were both core members of UGNazi, a hacker collective that sprang up in 2012 and claimed credit for hacking and attacking a number of high-profile websites.

In June 2016, Islam was sentenced to a year in prison for an impressive array of crimes, including stalking people online and posting their personal data on the Internet. Islam also pleaded guilty to reporting dozens of phony bomb threats and fake hostage situations at the homes of celebrities and public officials (Islam participated in a swatting attack against this author in 2013).

Troy Woody Jr. (left) and Mir Islam, are currently in prison in the Philippines for murder.

In December 2022, Troy Woody Jr. sued Iza, Zelocchi and Zort, alleging (PDF) Iza and Zelocchi were involved in a 2018 home invasion at his residence, wherein Woody claimed his assailants stole laptops and phones containing more than $200 million in cryptocurrencies.

Woody’s complaint states that Masters also was present during his 2018 home invasion, as was another core UGNazi member: Eric “CosmoTheGod” Taylor. CosmoTheGod rocketed to Internet infamy in 2013 when he and a number of other hackers set up the Web site exposed[dot]su, which published the address, Social Security numbers and other personal information of public figures, including the former First Lady Michelle Obama, the then-director of the FBI and the U.S. attorney general. The group also swatted many of the people they doxed.

Exposed was built with the help of identity information obtained and/or stolen from ssndob dot ru.

In 2017, Taylor was sentenced to three years probation for participating in multiple swatting attacks, including the one against my home in 2013.

The complaint against Iza says the FBI interviewed Woody in Manila where he is currently incarcerated, and learned that Iza has been harassing him about passwords that would unlock access to cryptocurrencies. The FBI’s complaint leaves open the question of how Woody and Islam got the phones in the first place, but the implication is that Iza may have instigated the harassment by having mobile phones smuggled to the prisoners.

The government suggests its case against Iza was made possible in part thanks to Iza’s propensity for ripping off people who worked for him. The complaint cites information provided by a private investigator identified only as “K.C.,” who said Iza hired him to surveil Zelocchi but ultimately refused to pay him for much of the work.

K.C. stands for Kenneth Childs, who in 2022 sued Iris Au and Zort (PDF) for theft by deception and commercial disparagement, after it became clear his private eye services were being used as part of a scheme by the Zort founders to intimidate and extort others. Childs’ complaint says Iza clawed back tens of thousands of dollars in payments he’d previously made as part of their contract.

The government also included evidence provided by an associate of Iza’s — named only as “R.C.” — who was hired to throw a party at Iza’s home. According to the feds, Iza paid the associate $50,000 to craft the event to his liking, but on the day of the party Iza allegedly told R.C. he was unhappy with the event and demanded half of his money back.

When R.C. balked, Iza allegedly surrounded the man with armed LASD officers, who then extracted the payment by seizing his phone. The government says Iza kept R.C.’s phone and spent the remainder of his bank balance.

A photo Iza allegedly sent to Tassilo Heinrich immediately after Heinrich’s arrest on unsubstantiated drug charges.

The FBI said that after the incident at the party, Iza had his bribed sheriff deputies pull R.C. over and arrest him on phony drug charges. The complaint includes a photo of R.C. being handcuffed by the police, which the feds say Iza sent to R.C. in order to intimidate him even further. The drug charges were later dismissed for lack of evidence.

The government alleges Iza and Au paid the LASD officers using Zelle transfers from accounts tied to two different entities incorporated by one or both of them: Dream Agency and Rise Agency. The complaint further alleges that these two entities were the beneficiaries of a business that sold hacked and phished Facebook advertising accounts, and bribed Facebook employees to unblock ads that violated its terms of service.

The complaint says Iza ran this business with another individual identified only as “T.H.,” and that at some point T.H. had personal problems and checked himself into rehab. T.H. told the FBI that Iza responded by stealing his laptop and turning him in to the government.

KrebsOnSecurity has learned that T.H. in this case is Tassilo Heinrich, a man indicted in 2022 for hacking into the e-commerce platform Shopify, and leaking the user database for Ledger, a company that makes hardware wallets for storing cryptocurrencies.

Heinrich pleaded guilty and was sentenced to time served, three years of supervised release, and ordered to pay restitution to Shopify. Upon his release from custody, Heinrich told the FBI that Iza was still using his account at the public screenshot service Gyazo to document communications regarding his alleged bribing of LASD officers.

Prosecutors say Iza and Au portrayed themselves as glamorous and wealthy individuals who were successful social media influencers, but that most of that was a carefully crafted facade designed to attract investment from cryptocurrency enthusiasts. Meanwhile, the U.K. tabloids reported this summer that Au was dating Davide Sanclimenti, the 2022 co-winner on the dating reality show Love Island.

Au was featured on the July 2024 cover of “Womenpreneur Middle East.”

Recall that we promised to revisit Mr. Zelocchi’s claimed acting credits. Despite being briefly listed on the Internet Movie Data Base (imdb.com) as the most awarded science fiction actor of all time, it’s not clear whether Mr. Zelocchi has starred in any real movies.

Earlier this year, an Internet sleuth on Youtube showed that even though Zelocchi’s IMDB profile has him earning more awards than most other actors on the platform (here he is holding a Youtube top viewership award), Zelocchi is probably better known as the director of the movie once rated the absolute worst sci-fi flick on IMDB: A 2015 work called “Angel’s Apocalypse.” Most of the videos on Zelocchi’s Instagram page appear to be brief clips, some of which look more like a commercial for men’s cologne than scenes from a real movie.

A Reddit post from a year ago calling attention to Zelocchi’s sci-fi film Angel’s Apocalypse somehow earning more audience votes than any other movie in the same genre.

In many ways, the crimes described in this complaint and the various related civil lawsuits would prefigure a disturbing new trend within English-speaking cybercrime communities that has bubbled up in the past few years: The emergence of "violence-as-a-service" offerings that allow cybercriminals to anonymously extort and intimidate their rivals.

Found on certain Telegram channels are solicitations for IRL or “In Real Life” jobs, wherein people hire themselves out as willing to commit a variety of physical attacks in their local geographic area, such as slashing tires, firebombing a home, or tossing a brick through someone’s window.

Many of the cybercriminals in this community have stolen tens of millions of dollars worth of cryptocurrency, and can easily afford to bribe police officers. KrebsOnSecurity would expect to see more of this in the future as young, crypto-rich cybercriminals seek to corrupt people in authority to their advantage.

Planet DebianBits from Debian: New Debian Developers and Maintainers (July and August 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Carlos Henrique Lima Melara (charles)
  • Joenio Marques da Costa (joenio)
  • Blair Noctis (ncts)

The following contributors were added as Debian Maintainers in the last two months:

  • Taihsiang Ho

Congratulations!

Planet DebianRussell Coker: Links September 2024

CNA Insider has an insightful documentary series about Chinese illegal immigrants to the US [1]. They should migrate to Australia, easier to get in and a better place to live.

Linus tech tips has an informative video about using Windows on Snapdragon ARM64 laptops. [2]. Maybe I should get one for running Linux. They are quite expensive on ebay now which is presumably a good sign about their quality.

A web site for comparing monospace fonts so you can find the one that best suits your coding [3]. Roboto works well for me.

Noema has an interesting article about nationhood comparing the attitudes towards European colonisation in Africa and Russian colonisation in Ukraine [4].

Insightful lecture by Grace Hopper (then Captain) about the future of computers [5]. The second part is linked from the first part. Published by the NSA.

Tony Hoare gave an insightful lecture titled “The Billion Dollar Mistake” about his work on designing the Algol language in 1965 [6]. The lecture was recorded in about 2005. But it still has a lot of relevance to computer science.

Jascha Sohl-Dickstein wrote an interesting blog post about Goodhart’s law, Machine Learning models, and how to try and mitigate problems in society [7].

Cory Doctorow wrote an insightful article on the Marshmallow test and long term thinking [8]. The rich fail this test badly.

Insightful interview with Justice Breyer about interpreting the US constitution and the problems with “textualism” and “originalism” [9].

Cory Doctorow wrote an informative article about Google’s practices of deleting Gmail accounts for no apparent reason and denying people access to their data [10]. We need more laws like the Digital Markets Act in the EU and we need them to apply to eBay/PayPal and AWS/Amazon.

Worse Than FailureCodeSOD: Switch How We Do Padding

We've seen so many home-brew string padding functions. And yet, there are still new ways to do this wrong. An endless supply of them. Nate, for example, sent us this one.

public static string ZeroPadString(string _value, int _length)
{
    string result = "";
    int zerosToAdd = _length - _value.Length;

I'm going to pause right here. Based on this, you likely think you know what's coming. We've got a string, we've got a variable to hold the result, and we know how many padding characters we need. Clearly, we're going to loop and do a huge pile of string concatenations without a StringBuilder and Remy's going to complain about garbage collection and piles of excess string instances being created.

That's certainly what I expect. Let's see the whole function.

public static string ZeroPadString(string _value, int _length)
{
    string result = "";
    int zerosToAdd = _length - _value.Length;

    switch(zerosToAdd)
    {
        case 1:
            result = "0" + _value;
            break;
        case 2:
            result = "00" + _value;
            break;
        case 3:
            result = "000" + _value;
            break;
        case 4:
            result = "0000" + _value;
            break;
        case 5:
            result = "00000" + _value;
            break;
        case 6:
            result = "000000" + _value;
            break;
        case 7:
            result = "0000000" + _value;
            break;
        case 8:
            result = "00000000" + _value;
            break;
        case 9:
            result = "000000000" + _value;
            break;
    }
}

While this doesn't stress test your memory by spawning huge piles of string instances, it certainly makes a tradeoff in doing that- the largest number of zeroes we can add is 9. I guess, who's ever going to need more than 10 digits? Numbers that large never come up.

Once again, this is C#. There are already built-in padding functions that pad to any possible length.
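
For comparison, a minimal sketch of the built-in approach: String.PadLeft pads to any requested total width and leaves the string untouched if it is already long enough.

// One call replaces the entire switch, for any length.
string padded = _value.PadLeft(_length, '0');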

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsMigration

Author: Majoki He hadn’t planned on becoming a ghost hunter, but that’s what Mordem Letac felt like now. A trained naturalist, he’d come to the northern reaches of the Yukon Territory earlier in the summer to study migration patterns in the face of ecosystem collapse related to rapidly accelerating climate change. In some ways studying […]

The post Migration appeared first on 365tomorrows.

,

Cory DoctorowVigilant (a Little Brother story)

Will Staehle's cover for 'Vigilant': a stylized, shattered mobile phone on a mustard-colored background.

This week on my podcast, I read “Vigilant“, a new Little Brother story commissioned by Nelda Buckman and published on Reactor, the online publication of Tor Books. Also available in DRM-free ebook form as a Tor Original.

Kids hate email.


Dee got my number from his older brother, who got it from Tina, my sister-in-law, who he knew from art school. He texted me just as I was starting to make progress with a gnarly bug in some logging software I was trying to get running for my cloud servers.


My phone went bloop and vibrated a little on the kitchen table, making ripples in my coffee. My mind went instantly blank. I unlocked my phone.


> Is this marcus


I almost blocked the number, but dammit, this was supposed to be a private number. I’d just changed it. I wanted to know how it was getting out and whether I needed to change it again.


> Who’s this?


Yeah, I punctuate my texts. I’m old.


> I need help with some school stuff some spying stuff at school i heard your good at that


MP3

Planet DebianVasudev Kamath: Signing the systemd-boot on Upgrade Using Dpkg Triggers

In my previous post on enabling SecureBoot, I mentioned that one pending improvement was signing the systemd-boot EFI binary with my keys on every upgrade. In this post, we'll explore the implementation of this process using dpkg triggers.

For an excellent introduction to dpkg triggers, refer to this archived blog post. The source code mentioned in that post can be downloaded from alioth archive.

From /usr/share/doc/dpkg/spec/triggers.txt, triggers are described as follows:

A dpkg trigger is a facility that allows events caused by one package but of interest to another package to be recorded and aggregated, and processed later by the interested package. This feature simplifies various registration and system-update tasks and reduces duplication of processing.

To implement this, we create a custom package with a single script that signs the systemd-boot EFI binary using our key. The script is as simple as:

#!/bin/bash

set -e

echo "Signing the new systemd-bootx64.efi"
sbsign --key /etc/secureboot/db.key --cert /etc/secureboot/db.crt \
       /usr/lib/systemd/boot/efi/systemd-bootx64.efi

echo "Invoking bootctl install to copy stuff"
bootctl install

Invoking bootctl install is optional if we have enabled systemd-boot-update.service, which will update the signed bootloader on the next boot.

We need to have a triggers file under the debian/ folder of the package, which declares its interest in modifications to the path /usr/lib/systemd/boot/efi/systemd-bootx64.efi. The trigger file looks like this:

# trigger 1 interest on systemd-bootx64.efi
interest-noawait /usr/lib/systemd/boot/efi/systemd-bootx64.efi

You can read about various directives and their meanings that can be used in the triggers file in the deb-triggers man page.
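
For completeness, here is a rough sketch of how the trigger reaches the signing script: when the trigger fires, dpkg runs the interested package's postinst with the triggered argument, so a postinst along these lines dispatches to it. The path /usr/local/sbin/sign-systemd-boot is only a hypothetical install location for the script shown earlier:

#!/bin/sh
# debian/postinst (sketch): run the signing script whenever our trigger fires.
set -e

case "$1" in
    triggered)
        # $2 holds the space-separated list of activated triggers;
        # we registered only one, so just run the signer.
        /usr/local/sbin/sign-systemd-boot
        ;;
esac

#DEBHELPER#

exit 0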

Once we build and install the package, this request is added to /var/lib/dpkg/triggers/File. See the screenshot below after installation of our package:

installed trigger

To test the functionality, I performed a re-installation of the systemd-boot-efi package, which provides the EFI binary for systemd-boot, using the following command:

sudo apt install --reinstall systemd-boot-efi

During installation, you can see the debug message being printed in the screenshot below:

systemd-boot-signer triggered

To test the systemd-boot-update.service, I commented out the bootctl install line from the above script, performed a reinstallation, and restarted the systemd-boot-update.service. Checking the log, I saw the following:

Sep 29 13:42:51 chamunda systemd[1]: Stopping systemd-boot-update.service - Automatic Boot Loader Update...
Sep 29 13:42:51 chamunda systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update...
Sep 29 13:42:51 chamunda bootctl[1801516]: Skipping "/efi/EFI/systemd/systemd-bootx64.efi", same boot loader version in place already.
Sep 29 13:42:51 chamunda bootctl[1801516]: Skipping "/efi/EFI/BOOT/BOOTX64.EFI", same boot loader version in place already.
Sep 29 13:42:51 chamunda bootctl[1801516]: Skipping "/efi/EFI/BOOT/BOOTX64.EFI", same boot loader version in place already.
Sep 29 13:42:51 chamunda systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update.
Sep 29 13:43:37 chamunda systemd[1]: systemd-boot-update.service: Deactivated successfully.
Sep 29 13:43:37 chamunda systemd[1]: Stopped systemd-boot-update.service - Automatic Boot Loader Update.
Sep 29 13:43:37 chamunda systemd[1]: Stopping systemd-boot-update.service - Automatic Boot Loader Update...

Indeed, the service attempted to copy the bootloader but did not do so because there was no actual update to the binary; it was just a reinstallation trigger.

The complete code for this package can be found here.

With this post, the entire series on using UKI for Secure Boot with Debian comes to an end. Happy hacking!

365 TomorrowsTrochilidae

Author: Ann Graham Other Sister touches Timid Sister’s elbow, offers a boiled egg on a tiny porcelain plate. She swallows the egg whole. From May to October Timid Sister pushes aside the drapery and plants her face between the window grille bars at sunrise. There’s a smear where her nose lands. Stock-still, she spies a […]

The post Trochilidae appeared first on 365tomorrows.

Planet DebianDirk Eddelbuettel: RApiSerialize 0.1.4 on CRAN: Added C++ Namespace

A new minor release 0.1.4 of RApiSerialize arrived on CRAN today. The RApiSerialize package is used by both my RcppRedis as well as by Travers' excellent qs package. This release adds an optional C++ namespace, available when the API header file is included in a C++ source file. And as one often does, the release also brings a few small updates to different aspects of the packaging.

Changes in version 0.1.4 (2024-09-28)

  • Add C++ namespace in API header (Dirk in #9 closing #8)

  • Several packaging updates: switched to Authors@R, README.md badge updates, added .editorconfig and cleanup

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. at the GitHub repository.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianReproducible Builds: Supporter spotlight: Kees Cook on Linux kernel security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the eighth installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project, David A. Wheeler and Simon Butler.

Today, however, we will be talking with Kees Cook, founder of the Kernel Self-Protection Project.



Vagrant Cascadian: Could you tell me a bit about yourself? What sort of things do you work on?

Kees Cook: I’m a Free Software junkie living in Portland, Oregon, USA. I have been focusing on the upstream Linux kernel’s protection of itself. There is a lot of support that the kernel provides userspace to defend itself, but when I first started focusing on this there was not as much attention given to the kernel protecting itself. As userspace got more hardened the kernel itself became a bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project because the work necessary was way more than my time and expertise could do alone. So I just try to get people to help as much as possible; people who understand the ARM architecture, people who understand the memory management subsystem to help, people who understand how to make the kernel less buggy.


Vagrant: Could you describe the path that lead you to working on this sort of thing?

Kees: I have always been interested in security through the aspect of exploitable flaws. I always thought it was like a magic trick to make a computer do something that it was very much not designed to do and seeing how easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I started working at Canonical on Ubuntu and was mainly focusing on bringing Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo’s security hardening efforts. Both had really pioneered a lot of userspace hardening with compiler flags and ELF stuff and many other things for hardened binaries. On the whole, Debian had not really paid attention to it. Debian’s packaging building process at the time was sort of a chaotic free-for-all as there wasn’t centralized build methodology for defining things. Luckily that did slowly change over the years. In Ubuntu we had the opportunity to apply top down build rules for hardening all the packages. In 2011 Chrome OS was following along and took advantage of a bunch of the security hardening work as they were based on ebuild out of Gentoo and when they looked for someone to help out they reached out to me. We recognized the Linux kernel was pretty much the weakest link in the Chrome OS security posture and I joined them to help solve that. Their userspace was pretty well handled but the kernel had a lot of weaknesses, so focusing on hardening was the next place to go. When I compared notes with other users of the Linux kernel within Google there were a number of common concerns and desires. Chrome OS already had an “upstream first” requirement, so I tried to consolidate the concerns and solve them upstream. It was challenging to land anything in other kernel team repos at Google, as they (correctly) wanted to minimize their delta from upstream, so I needed to work on any major improvements entirely in upstream and had a lot of support from Google to do that. As such, my focus shifted further from working directly on Chrome OS into being entirely upstream and being more of a consultant to internal teams, helping with integration or sometimes backporting. Since the volume of needed work was so gigantic I needed to find ways to inspire other developers (both inside and outside of Google) to help. Once I had a budget I tried to get folks paid (or hired) to work on these areas when it wasn’t already their job.


Vagrant: So my understanding of some of your recent work is basically defining undefined behavior in the language or compiler?

Kees: I’ve found the term “undefined behavior” to have a really strict meaning within the compiler community, so I have tried to redefine my goal as eliminating “unexpected behavior” or “ambiguous language constructs”. At the end of the day ambiguity leads to bugs, and bugs lead to exploitable security flaws. I’ve been taking a four-pronged approach: supporting the work people are doing to get rid of ambiguity, identify new areas where ambiguity needs to be removed, actually removing that ambiguity from the C language, and then dealing with any needed refactoring in the Linux kernel source to adapt to the new constraints.

None of this is particularly novel; people have recognized how dangerous some of these language constructs are for decades and decades but I think it is a combination of hard problems and a lot of refactoring that nobody has the interest/resources to do. So, we have been incrementally going after the lowest hanging fruit. One clear example in recent years was the elimination of C’s “implicit fall-through” in switch statements. The language would just fall through between adjacent cases if a break (or other code flow directive) wasn’t present. But this is ambiguous: is the code meant to fall-through, or did the author just forget a break statement? By defining the “[[fallthrough]]” statement, and requiring its use in Linux, all switch statements now have explicit code flow, and the entire class of bugs disappeared. During our refactoring we actually found that 1 in 10 added “[[fallthrough]]” statements were actually missing break statements. This was an extraordinarily common bug!
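
To make the construct concrete, here is a small sketch (hypothetical names, not kernel code) of the same switch before and after the refactor:

enum phase { START, RUN };
void do_init(void);
void do_run(void);

void before(enum phase p)
{
        switch (p) {
        case START:
                do_init();      /* silently falls through into RUN: intended, or a missing break? */
        case RUN:
                do_run();
                break;
        }
}

void after(enum phase p)
{
        switch (p) {
        case START:
                do_init();
                [[fallthrough]];        /* C23/C++17 attribute; the kernel wraps it in a "fallthrough" macro */
        case RUN:
                do_run();
                break;
        }
}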

So getting rid of that ambiguity is where we have been. Another area I’ve been spending a bit of time on lately is looking at how defensive security work has challenges associated with metrics. How do you measure your defensive security impact? You can’t say “because we installed locks on the doors, 20% fewer break-ins have happened.” Much of our signal is always secondary or retrospective, which is frustrating: “This class of flaw was used X much over the last decade so, and if we have eliminated that class of flaw and will never see it again, what is the impact?” Is the impact infinity? Attackers will just move to the next easiest thing. But it means that exploitation gets incrementally more difficult. As attack surfaces are reduced, the expense of exploitation goes up.


Vagrant: So it is hard to identify how effective this is… how bad would it be if people just gave up?

Kees: I think it would be pretty bad, because as we have seen, using secondary factors, the work we have done in the industry at large, not just the Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else is doing for their respective software ecosystems, has shown that the price of functional exploits in the black market has gone up. Especially for really egregious stuff like a zero-click remote code execution.

If those were cheap then obviously we are not doing something right, and it becomes clear that it’s trivial for anyone to attack the infrastructure that our lives depend on. But thankfully we have seen over the last two decades that prices for exploits keep going up and up into millions of dollars. I think it is important to keep working on that because, as a central piece of modern computer infrastructure, the Linux kernel has a giant target painted on it. If we give up, we have to accept that our computers are not doing what they were designed to do, which I can’t accept. The safety of my grandparents shouldn’t be any different from the safety of journalists, and political activists, and anyone else who might be the target of attacks. We need to be able to trust our devices otherwise why use them at all?


Vagrant: What has been your biggest success in recent years?

Kees: I think with all these things I am not the only actor. Almost everything that we have been successful at has been because of a lot of people’s work, and one of the big ones that has been coordinated across the ecosystem and across compilers was initializing stack variables to 0 by default. This feature was added in Clang, GCC, and MSVC across the board even though there were a lot of fears about forking the C language.

The worry was that developers would come to depend on zero-initialized stack variables, but this hasn't been the case because we still warn about uninitialized variables when the compiler can figure that out. So you still get the warnings at compile time but now you can count on the contents of your stack at run-time and we drop an entire class of uninitialized variable flaws. While the exploitation of this class has mostly been around memory content exposure, it has also been used for control flow attacks. So that was politically and technically a large challenge: convincing people it was necessary, showing its utility, and implementing it in a way that everyone would be happy with, resulting in the elimination of a large and persistent class of flaws in C.
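
A small sketch of what that behaviour looks like in practice; the flag name below is the Clang/GCC option as I recall it, so treat it as an assumption (in the kernel the equivalent is selected via CONFIG_INIT_STACK_ALL_ZERO):

/* Built with -ftrivial-auto-var-init=zero: "len" is zeroed at run time,
 * while the compiler still warns at build time that it may be read
 * uninitialized - sloppy code is still flagged, but its run-time contents
 * are no longer leftover stack memory an attacker can influence. */
int length_or_default(int have_value, int value)
{
        int len;        /* deliberately left without an initializer */

        if (have_value)
                len = value;

        return len;
}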


Vagrant: In a world where things are generally Reproducible do you see ways in which that might affect your work?

Kees: One of the questions I frequently get is, “What version of the Linux kernel has feature $foo?” If I know how things are built, I can answer with just a version number. In a Reproducible Builds scenario I can count on the compiler version, compiler flags, kernel configuration, etc. all those things are known, so I can actually answer definitively that a certain feature exists. So that is an area where Reproducible Builds affects me most directly. Indirectly, it is just being able to trust the binaries you are running are going to behave the same for the same build environment is critical for sane testing.


Vagrant: Have you used diffoscope?

Kees: I have! One subset of tree-wide refactoring that we do when getting rid of ambiguous language usage in the kernel is when we have to make source level changes to satisfy some new compiler requirement but where the binary output is not expected to change at all. It is mostly about getting the compiler to understand what is happening, what is intended in the cases where the old ambiguity does actually match the new unambiguous description of what is intended. The binary shouldn’t change. We have used diffoscope to compare the before and after binaries to confirm that “yep, there is no change in binary”.


Vagrant: You cannot just use checksums for that?

Kees: For the most part, we need to only compare the text segments. We try to hold as much stable as we can, following the Reproducible Builds documentation for the kernel, but there are macros in the kernel that are sensitive to source line numbers and as a result those will change the layout of the data segment (and sometimes the text segment too). With diffoscope there’s flexibility where I can exclude or include different comparisons. Sometimes I just go look at what diffoscope is doing and do that manually, because I can tweak that a little harder, but diffoscope is definitely the default. Diffoscope is awesome!


Vagrant: Where has reproducible builds affected you?

Kees: One of the notable wins of reproducible builds lately was dealing with the fallout of the XZ backdoor and just being able to ask the question “is my build environment running the expected code?” and to be able to compare the output generated from one install that never had a vulnerable XZ and one that did have a vulnerable XZ and compare the results of what you get. That was important for kernel builds because the XZ threat actor was working to expand their influence and capabilities to include Linux kernel builds, but they didn’t finish their work before they were noticed. I think what happened with Debian proving the build infrastructure was not affected is an important example of how people would have needed to verify the kernel builds too.


Vagrant: What do you want to see for the near or distant future in security work?

Kees: For reproducible builds in the kernel, in the work that has been going on in the ClangBuiltLinux project, one of the driving forces of code and usability quality has been the continuous integration work. As soon as something breaks, on the kernel side, the Clang side, or something in between the two, we get a fast signal and can chase it and fix the bugs quickly. I would like to see someone with funding to maintain a reproducible kernel build CI. There have been places where there are certain architecture configurations or certain build configuration where we lose reproducibility and right now we have sort of a standard open source development feedback loop where those things get fixed but the time in between introduction and fix can be large. Getting a CI for reproducible kernels would give us the opportunity to shorten that time.


Vagrant: Well, thanks for that! Any last closing thoughts?

Kees: I am a big fan of reproducible builds, thank you for all your work. The world is a safer place because of it.


Vagrant: Likewise for your work!




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

,

David BrinReflecting on AI accountability for misinformation - and solar powering the revolution

The Great Big AI Crisis of 2024 will likely wreak its worst harm via political misinformation and manipulation this year (next month!). But it's swamping its way into science, as well.


It is not always so easy to spot the use of AI. But one clue is that ChatGPT tends to favor certain words… such as “meticulous, intricate or commendable.”


Of course all such detection cues are temporary.  Even when we point them out to each other (as I just did), that only helps train the systems to avoid overusing them.


So, is it hopeless to dream of escaping the Age of Tsunamis of Lies? 


None of the palliatives proposed by the geniuses and mavens of AI - ranging from 'moratoriums' to EU-style regulations to 'privacy rules' - stand any chance of helping much. Only one thing will even possibly work and that is siccing AI programs onto each other, competitively, with incentives for them to tattle on misinformation, as I describe here in Wired: Give Every AI a Soul - or Else.


And more vividly detailed? My keynote at the May 2024 RSA Conference in San Francisco is now available online: “Anticipation, Resilience and Reliability: Three ways that AI will change us… if we do it right.”


Let's dive into how that could work.



== AI, Ai Ai!!! ==


In all of those places, I've pointed out that we already developed one fairly effective method for detecting and deterring liars, foiling harm doers and preventing a return to 6000 years of lobotomizing feudalism. Imperfectly, by far! But light years better than any prior culture.


Was it carefully-deliberated and designed laws? Those can help, but no. Paternalistic protection by the state? Ditto, and dangerous. The method I'm talking about – the one innovation that gave us everything – has been to flatten the playing field and empower a wide enough diversity of citizen-players, so that we can pit elites and potential predators against each other. 


It's called reciprocal accountability.

Lawyer vs. lawyer, for example. And the one area where we’ve made the greatest advances? The most effective reciprocal accountability system of all: science.

Scientists are the most competitive creatures our species ever produced. Young scientists are like top guns roaming Main Street, looking for a paradigm or pompous theory-pusher to topple with the six-gun of evidence. They make their mark by finding at least a small chink in the current standard model to critique. 

(And yes, this is diametrically opposite to the slanderous image of wimpy consensus-hugging eggheads that's pushed by anti-science cult media.)

The results are very often positive sum. Negatives and falsehoods get canceled out by competition, while positives can combine through better-tested models, new alliances, and cooperation. 

In other words, a healthy market.

(Politics is supposed to be like that, by the way. And it was, overall, till a mad cult waged open war against the very concept of negotiation, turning our political institutions zero-sum and now negative sum.)

All of which leads back to my notion about Artificial Intelligence. That we should try emulating, in this new ecosystem, the one and only method that ever reduced misinformation and predation among organic humans. Again, I am calling for reciprocal accountability on cyber beings, applied by cyber beings.


While I may be alone in offering institutional innovations to achieve this, some groups have lately been coming up with practical methods.

“To pinpoint when a language model might be confabulating, the new method involves asking a question multiple times to produce several AI-generated answers. Then a second LLM (Large Language Model) groups these answers according to their meaning; for instance, “John drove his car to the store” and “John went to the store in his car” would be clustered together.” Leading to a new metric “semantic entropy.”


“Other anti-hallucination methods have used LLMs to evaluate generated answers, through approaches such as asking a single model to double-check its own work. But the paired system improves on this...” 


So, yeah. The idea is starting to gain (a little) traction. Sic em on each other. But with incentives that reward those who do us all - and Truth - the most good.


When did I speak of a 'new ecosystem'? Here's an earlier posting where I talk about that...

Are we making new kinds of 'ecosystems'? 


... and how we organic humans (orgs) will still control - maybe for another decade or two - the new ecosystem's 'sun.' 


And if we handle this power wisely, it may shine upon the fabled 'soft landing' alongside our new cyber children.


And speaking of the sun...



== The solar revolution is here ==


This article in the Economist discusses how solar power's recent huge boom may be only the beginning. 70 years after first introduced by Bell Labs... and frequently sabotaged by filth merchants:


"Today solar power is long past the toy phase. Panels now occupy an area around half that of Wales, and this year they will provide the world with about 6% of its electricity—which is almost three times as much electrical energy as America consumed back in 1954. Yet this historic growth is only the second-most-remarkable thing about the rise of solar power. The most remarkable is that it is nowhere near over..."

...Much as I portrayed in EARTH in 1990. Though by now the question is why are the lords of carbon still backing the Denialist Movement, when this revolution can't be delayed any longer? And when denying all the heat waves and tumbling glaciers and acidifying oceans and super storms only makes you look like a jibbering loony?


Economies of scale have taken over now and batteries and methane and new nuclear will handle surge capacity, so why do they continue to back a cult bent on wrecking the planet that they must live on, too?

Well, the Russians and Saudis are still utterly fossils (and fossil-dependent) and plausibly they are the ones holding blackmail on almost all high Republicans. And unlike mere corruption (which can sometimes think long term), blackmail is always imminent and near term. It is only about satisfying the blackmailer today.

THAT - fundamentally - is what the Ukraine war is about and it is why the GOP is slavishly devoted to a Kremlin and slightly-relabeled KGB that they once despised. And it is why Ronald Reagan would spit in their eyes.



== Future Tech ==


An interesting report - The Battery Mineral Loop - suggests that we may be recycling precious battery elements much more efficiently pretty soon, and that the race to mine them may be a temporary thing, lasting as little as 1.5 decades. Which is still a bridge we need to accomplish with careful management, good politics and lots of science. 


Australians have developed night vision optics so thin they might be barely distinguishable from your normal glasses.


Great news that soon computer chips will be able to store the energy they need for rapid operations right on the chip itself, through new micro-capacitors. 


Oh, is it possible to find something positive, even lightening, from a Trump action? I have long demanded examples where a Republican U.S. administration had major, palpable comparative positive outcomes for the nation and world as a whole, and not just benefiting narrow, conniving cabals. Certainly, when the Trump guys sold off the U.S. Helium Reserve to buddies, for almost nothing, those pals immediately jacked up prices for the element that’s utterly necessary for many medical uses, such as supercooled imaging systems. And quantum computers.  


Only now the deal might (accidentally and unintentionally) benefit us, as a major new source of Helium has been found deep under Minnesota. Like the recent phosphate discoveries in Norway, it may be good news for civilization… and bad for market-cornering cheaters…


…till we get the mother lodes from asteroids, that is.


So... what now?


Now YOU do your bit for the Enlightenment, for science and civilization. 


Check your voter registration and those of friends! Give or volunteer if you can. Wear this. And remember that earlier victories in the recurring U.S. Civil War always led to golden eras. 


Imperfectly!  But we moved forward, correcting errors made by every prior generation. Making new ones for the next to correct! But moving ahead.


Toward the stars.


Planet DebianJonathan Dowland: Whisper (pipewire tool)

Whisper (pipewire tool)

It's time to mint a new blog tag…

I want to write to pour praise on some software I recently discovered.

I'm not up to speed on Pipewire—the latest piece of Linux plumbing related to audio—nor how it relates to the other bits (Pulseaudio, ALSA, JACK, what else?). I recently tried to plug something into the line-in port on my external audio interface, and wished to hear it on the machine. A simple task, you'd think.

I'll refrain from writing about the stuff that didn't work well and focus on the thing that did: A little tool called Whisper, which is designed to let you listen to a microphone through your speakers.

Whisper's UI. Screenshot from upstream.

Whisper does a great job of hiding the complexity of what lies beneath and asking two questions: which microphone, and which speakers? In my case this alone was not quite enough, as I was presented with two identically-named "SB Live Extigy" "microphone" devices, but that's easily resolved with trial and error.

More stuff like this please!

Planet DebianDave Hibberd: EuroBSDCon 2024 Report

This year I attended EuroBSDCon 2024 in Dublin. I always appreciate an excuse to head over to Ireland, and this seemed like a great chance to spend some time in Dublin and learn new things.

Due to constraints on my time I didn’t go to the 2 day devsummit that precedes the conference, only the main event itself.

The Event

EuroBSDCon was attended by about 200-250 people, the hardcore of the BSD community! Attendees came from all over, I met Canadians, USAians, Germanians, Belgians and Irelandians amongst other nationalities!

The event was at UCD Dublin, which is a gorgeous university campus about 10km south of Dublin proper in Stillorgan. The speaker hotel was a 20 minute walk (at my ~9min/km pace) from the venue, or a quick bus journey. It was a pleasant walk, through the leafy campus and then along some pretty broad pavements, albeit beside a dual carriageway. The cycle infrastructure was pretty excellent too, but I sadly was unable to lease a city bike and make my way around on 2 wheels - Dublinbikes don't extend that far out of the city.

Lunch each day was Irish themed food - Saturday was beef stew (a Frenchman asked me what it was called - his only equivalent words were “Beef Bourguignon”) and Sunday was Bangers & Mash! The kitchen struggled a bit - food was brought out in bowls in waves, creating an artificial scarcity that clearly left some anxious that they weren't going to be fed!

Everyone I met was friendly from the day I arrived, and that set me very much at ease and made the event much more enjoyable - things are better shared with others. Big shout out to dch and Blake Willis for spending a lot of time talking to me over the weekend!

Talks I Attended

Keynote: Evidence based Policy formation in the EU

Tom Smyth

This talk given by Tom Smyth was an interesting look into his work with EU Policymakers in ensuring fair competition for his small, Irish ISP. It was an enlightening look into the workings of the EU and the various bodies that set, and manage policy. It truly is a complicated beast, but the feeling I left with was that there are people all through the organisation who are desperate to do the right thing for EU citizens at all costs.

Sadly none of it is directly applicable to me living in the UK, but I still get to have a say on policy and vote in polls as an Irish citizen abroad.

10(ish) years of FreeBSD/arm64

Andrew Turner

I have been a fan of ARM platforms for a long, long time. I had an early ARM Chromebook and have been equal measures excited and frustrated by the raspberry pi since first contact. I tend to find other ARM people at events and this was no exception!

It was an interesting view into one person’s dedication to making arm64 a platform for FreeBSD, starting out with no documentation or hardware to becoming a first-class platform. It’s interesting to see the roadmap and things upcoming too and makes me hopeful for the future of arm64 in various OSes!

1-800-RC(8)-HELP: Dial Into FreeBSD Service Scripts Mastery!

Mateusz Piotrowski

rc scripts and startup applications scare me a bit. I’m better at systemd units than sysvinit scripts, but that isn’t really a transferable skill!

This was a deep dive into lots of the functionality that FreeBSD’s RC offers, and highlighted things that I only thought were limited to Linux’s systemd. I am much more aware of what it’s capable of now, but I’m still scared to take it on!

Afterwards I had a great chat in the hallway with Mateusz about our OS’s different approaches to this problem and was impressed with the pragmatic view he had on startup, systemd, rc and the future!

Package management without borders. Using Ravenports on multiple BSDs

Michael Reim

Ports on the BSDs interest me, but I hadn't realised that outside of each major BSD's collection there were other, cross-platform ports collections on offer. Ravenports is one of these under development, and it was good to understand the hows, whys and what's-happening of the system. Plus, with my hibbian obsession with building other people's software as my own packages, it's interesting to see how others are doing it!

Building a Modern Packet Radio Network using Open Software

me

I spoke for 45 minutes to share my passion and frustration for amateur radio, packet radio, the law, the technology and what we’re doing in the UK Packet network.

This was a lot of fun - it felt like I had a busy room, lots of people interested in the stupid stuff I do with technology and I had lots of conversations after the fact about radio, telecoms, networking and at one point was cornered by what I describe as the “Erlang Mafia” to talk about how they could help!

Hacking - 30 years ago

Walter Belgers

This unrecorded talk looked at the history of the Dutch hacker scene, and a young group of hackers' explorations of the early internet before modern security was a thing.

It was exciting, enrapturing, well presented and a great story of a well spent youth in front of computers.

Social

By 1730 I was pretty drained so I took myself back to the hotel missing the last talks, had some down time, and got the DART train to the social event at Brewdog.

This involved about an hour's walking and some train time, and that was a nice time to reset my head and just watch the world. The train I was on had a particularly interesting 'feature' where, when the motors were not loaded (slowdown or coasting), the lights slowly flickered dim-bright-dim. I don't know if this is across the fleet or just this one, but it was fun to pontificate as I looked out the window at South Dublin passing by.

The social was good - a few beer tokens (cider in my case, trying to avoid beer-driven hangovers still), some pleasant junk food and plenty of good company to talk to, lots of people wanting to talk about radio and packet to me!

Brewdog struggled a bit - both in bar speed (a linear queue formed despite the staff preferring the crowd-around method of queue) and buffet food appeared in somewhat disjointed waves, meaning that people loitered around the food tables and cleared the plates of wings, sliders, fries, onion rings, mac & cheese as they appeared 4-5 plates at a time. Perhaps a few hundred hungry bodies was a bit too much for them to feed at once.

They had shuffleboard that was played all night by various groups!

I caught the last bus home, which was relatively painless!

Is our software sustainable?

Kent Inge Fagerland Simonsen

This was an interesting look into reducing the footprint of software to make it a net benefit. Lots of examples of how little changes can barrel up to big, gigawatt-hour changes when aggregated over the entire install base of Android or iOS!

A Packet’s Journey Through the OpenBSD Network Stack

Alexander Bluhm

This was an analysis of what happens at each stage of networking in OpenBSD and was pretty interesting to see. Lots of it was out of my depth, but it's cool to get an explanation and appreciation for various elements of how software handles each packet that arrives and the differences in the IPv4 and IPv6 stacks!

FreeBSD at 30 Years: Its Secrets to Success

Kirk McKusick

This was a great statistical breakdown of FreeBSD since inception, including top committers, why certain parts of the system and community work so well and what has given it staying power compared to some projects on the internet that peter out after just a few years! Kirk’s excitement and passion for the project really shone through, and I want to read his similarly titled article in the FreeBSD Journal now!

Building an open native FreeBSD CI system from scratch with lua, C, jails & zfs

Dave Cottlehuber

Dave spoke pretty excitedly about his work on a CI system using tools that FreeBSD ships with, and introduced me to the integration of C and Lua which I wasn’t fully aware of before. Or I was, and my brain forgot it!

With my interest in software builds this year, it was quite a timely look at how others are thinking of doing things (I am doing similar stuff with zfs!). I look forward to playing with it when it is finally released to the Real World!

Building an Appliance

Allan Jude

This was an interesting look into the tools that FreeBSD provides which can be used to make immutable, appliance OSes without too much overhead. Fail safe upgrades and boots with ZFS, running approved code with secure boot, factory resetting and more were discussed!

I have had thoughts around this in the recent past, so it was good to have some ideas validated, some challenged and gave me food for thought.

Experience as a speaker

I really enjoyed being a speaker at the event! I’ve spoken at other things before, but this really was a cut above. The event having money to provide me a hotel was a really welcome surprise, and also receiving a gorgeous scarf as a speaker gift was a great surprise (and it has already been worn with the change of temperature here in Scotland this week!).

I would definitely consider returning, either as an attendee or as a speaker. The community of attendees were pragmatic, interesting, engaging and welcoming, the organising committee were spot-on in their work making it happen and the whole event, while turning my brain to mush with all the information, was really enjoyable and I left energised and excited by things instead of ground down and tired.

365 TomorrowsGalaxies Beyond the Veil

Author: Welsh Diepreye The discovery was accidental, like most revolutionary things. Dr. Elara Voss, a brilliant and obsessed astrophysicist, had spent years studying the gravitational anomalies at the edge of our galaxy. What she discovered was not just a black hole or a pulsar, but something far more mysterious: a shimmering veil in the very […]

The post Galaxies Beyond the Veil appeared first on 365tomorrows.

,

Cryptogram Hacking ChatGPT by Planting False Memories into Its Data

This vulnerability hacks a feature that allows ChatGPT to have long-term memory, where it uses information from past conversations to inform future conversations with that same user. A researcher found that he could use that feature to plant “false memories” into that context window that could subvert the model.

A month later, the researcher submitted a new disclosure statement. This time, he included a PoC that caused the ChatGPT app for macOS to send a verbatim copy of all user input and ChatGPT output to a server of his choice. All a target needed to do was instruct the LLM to view a web link that hosted a malicious image. From then on, all input and output to and from ChatGPT was sent to the attacker’s website.

Cryptogram NIST Recommends Some Common-Sense Password Rules

NIST's second draft of its “SP 800-63-4”—its digital identity guidelines—finally contains some really good rules about passwords:

The following requirements apply to passwords:

  1. Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
  2. Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
  3. Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
  4. Verifiers and CSPs SHOULD accept Unicode [ISO/IEC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
  5. Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
  6. Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
  7. Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
  8. Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., “What was the name of your first pet?”) or security questions when choosing passwords.
  9. Verifiers SHALL verify the entire submitted password (i.e., not truncate it).

Hooray.
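
As a rough illustration of rules 1, 2, 4 and 9, here is a minimal sketch of the length check a verifier might implement. The function name and messages are mine; the thresholds come straight from the draft's SHALL/SHOULD values:

MIN_LEN = 8           # SHALL: at least eight characters
RECOMMENDED_MIN = 15  # SHOULD: recommend at least fifteen
MAX_CAP = 64          # SHOULD: permit passwords of at least this length

def password_length_ok(password: str) -> tuple[bool, str]:
    # Rule 4: count Unicode code points, not bytes. Python's len() on a str
    # already counts code points, so no extra decoding is needed here.
    n = len(password)
    if n < MIN_LEN:
        return False, "too short: a minimum of eight characters is required"
    if n > MAX_CAP:
        # Rule 9: never silently truncate; either accept the full password
        # or reject it outright so the entire secret is always verified.
        return False, "longer than this verifier's maximum"
    if n < RECOMMENDED_MIN:
        return True, "accepted, though 15 or more characters are recommended"
    return True, "accepted"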

News article. Slashdot thread.

Worse Than FailureError'd: Operation Erred Successfully

"Clouds obscure the result," reports Mike T.'s eight-ball. "It's a shame when the cloud and the browser disagree," he observed.


"Ivent is being really damn buggy" muttered Vitr S. "Looks like the testing team is going to have to become Undefined Undefined in their employment records."


"What's a numeric character?" wonders Ross K. "Actually any character in the "Amount required in words" field provokes this message. And it won't accept an empty field either." I believe that I,V,X,L,C, and D are numeric characters. You should try those.


"Looks like they've encountered a McError," chortled Shaun M. "As a dev, this notification has me thinking about McDonalds more than their marketing notifications do!" Very clever of them, wouldn't you say Shaun?


Finally, faithful Michael R. is job-hunting (hint) and found a position he is specially ell-suited or. "I'm very good at defineing, eveloping and riving innovation. " est of luck!


[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsRE-Gen Beach

Author: India Choquette The first word that comes to mind when I think about RE-Gen Beach: fresh. As soon as you step onto the property (see my post on cute protective boots), you’ll immediately feel why a day pass costs so much. You won’t find anything this strong in a city spa—it’s too potent to […]

The post RE-Gen Beach appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 278 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 278. This version includes the following changes:

[ Chris Lamb ]
* Temporarily remove procyon-decompiler from Build-Depends as it was removed
  from testing (#1057532). (Closes: #1082636)
* Add a helpful contextual message to the output if comparing Debian .orig
  tarballs within .dsc files without the ability to "fuzzy-match" away the
  leading directory. (Closes: reproducible-builds/diffoscope#386)
* Correctly invert "X% similar" value and do not emit "100% similar".
  (Closes: reproducible-builds/diffoscope#391)
* Update copyright years.

You can find out more by visiting the project homepage.

,

Planet DebianVasudev Kamath: Disabling Lockdown Mode with Secure Boot on Distro Kernel

In my previous post, I mentioned that Lockdown mode is activated when Secure Boot is enabled. One way to override this was to use a self-compiled upstream kernel. However, sometimes we might want to use the distribution kernel itself. This post explains how to disable lockdown mode while keeping Secure Boot enabled with a distribution kernel.

Understanding Secure Boot Detection

To begin, we need to understand how the kernel detects if Secure Boot is enabled. This is done by the efi_get_secureboot function, as shown in the image below:

Secure Boot status check

Disabling Kernel Lockdown

The kernel code uses the value of MokSBStateRT to identify the Secure Boot state, assuming that Secure Boot can only be enabled via shim. This assumption holds true when using the Microsoft certificate for signature validation (as Microsoft currently only signs shim). However, if we're using our own keys, we don't need shim and can sign the bootloader ourselves. In this case, the Secure Boot state of the system doesn't need to be tied to the MokSBStateRT variable.

To disable kernel lockdown, we need to set the UEFI runtime variable MokSBStateRT. This essentially tricks the kernel into thinking Secure Boot is disabled when it's actually enabled. This is achieved using a UEFI initializing driver.

The code for this was written by an anonymous colleague who also assisted me with various configuration guidance for setting up UKI and Secure Boot on my system. The code is available here.

Implementation

Detailed instructions for compiling and deploying the code are provided in the repository, so I won't repeat them here.
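
As a side note, you can peek at the relevant variables on a running system through efivarfs to see what the kernel will conclude. This is just an inspection sketch of mine, not part of the driver linked above; the GUIDs are the standard EFI global variable and shim GUIDs, and the first four bytes of each efivarfs file hold the variable's attributes:

from pathlib import Path
import struct

EFIVARS = Path("/sys/firmware/efi/efivars")
VARS = {
    "SecureBoot":   "8be4df61-93ca-11d2-aa0d-00e098032b8c",  # EFI global GUID
    "MokSBStateRT": "605dab50-e046-4300-abb6-3dd810dd8b23",  # shim GUID
}

for name, guid in VARS.items():
    path = EFIVARS / f"{name}-{guid}"
    if not path.exists():
        print(f"{name}: not present")
        continue
    raw = path.read_bytes()
    attrs = struct.unpack("<I", raw[:4])[0]   # attribute bits
    data = raw[4:]                            # variable payload
    print(f"{name}: attributes=0x{attrs:08x} data={data.hex()}")

A SecureBoot payload of 01 means the firmware is enforcing Secure Boot; a MokSBStateRT payload of 01 is what, per the kernel logic described above, keeps lockdown from engaging.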

Results

I've tested this method with the default distribution kernel on my Debian unstable system, and it successfully disables lockdown while maintaining Secure Boot integrity. See the screenshot below for confirmation:

Distribution kernel lockdown disabled

Krebs on SecurityU.S. Indicts 2 Top Russian Hackers, Sanctions Cryptex

The United States today unveiled sanctions and indictments against the alleged proprietor of Joker’s Stash, a now-defunct cybercrime store that peddled tens of millions of payment cards stolen in some of the largest data breaches of the past decade. The government also indicted and sanctioned a top Russian cybercriminal known as Taleon, whose cryptocurrency exchange Cryptex has evolved into one of Russia’s most active money laundering networks.

A 2016 screen shot of the Joker’s Stash homepage. The links have been redacted.

The U.S. Department of Justice (DOJ) today unsealed an indictment against a 38-year-old man from Novosibirsk, Russia for allegedly operating Joker’s Stash, an extremely successful carding shop that came online in late 2014. Joker’s sold cards stolen in a steady drip of breaches at U.S. retailers, including Saks Fifth Avenue, Lord and Taylor, Bebe Stores, Hilton Hotels, Jason’s Deli, Whole Foods, Chipotle, Wawa, Sonic Drive-In, the Hy-Vee supermarket chain, Buca Di Beppo, and Dickey’s BBQ.

The government believes the brains behind Joker’s Stash is Timur Kamilevich Shakhmametov, an individual who is listed in Russian incorporation documents as the owner of Arpa Plus, a Novosibirsk company that makes mobile games.

Early in his career (circa 2000) Shakhmametov was known as “v1pee” and was the founder of the Russian hacker group nerf[.]ru, which periodically published hacking tools and exploits for software vulnerabilities.

The Russian hacker group Nerf as described in a March 2006 article in the Russian hacker magazine xakep.ru.

By 2004, v1pee had adopted the moniker “Vega” on the exclusive Russian language hacking forum Mazafaka, where this user became one of the more reliable vendors of stolen payment cards.

In the years that followed, Vega would cement his reputation as a top carder on other forums, including Verified, DirectConnection, and Carder[.]pro.

Vega also became known as someone who had the inside track on “unlimited cashouts,” a globally coordinated cybercrime scheme in which crooks hack a bank or payment card processor and use cloned cards at cash machines to rapidly withdraw millions of dollars in just a few hours.

“Hi, there is work on d+p, unlimited,” Vega wrote in a private message to another user on Verified in Dec. 2012, referring to “dumps and PINs,” the slang term for stolen debit cards with the corresponding PINs that would allow ATM withdrawals.

This batch of some five million cards put up for sale Sept. 26, 2017 on the now-defunct carding site Joker’s Stash has been tied to a breach at Sonic Drive-In.

Joker’s Stash came online in the wake of several enormous card breaches at retailers like Target and Home Depot, and the resulting glut of inventory had depressed prices for stolen cards. But Joker’s would distinguish itself by catering to high-roller customers — essentially street gangs in the United States that would purchase thousands of stolen payment cards in one go.

Faced with a buyer’s market, Joker’s Stash set themselves apart by focusing on loyalty programs, frequent buyer discounts, money-back guarantees, and just plain good customer service. Big spenders were given access to the most freshly hacked payment cards, and were offered the ability to get free replacement cards if any turned out to be duds.

Joker’s Stash also was unique because it claimed to sell only payment cards that its own hackers had stolen directly from merchants. At the time, card shops typically resold payment cards that were stolen and supplied by many third-party hackers of unknown reliability or reputation.

In January 2021, Joker’s Stash announced it was closing up shop, after European authorities seized a number of servers for the fraud store, and its proprietor came down with the Coronavirus.

A DOJ statement credits the U.S. Secret Service for leading the years-long investigations (the Service’s original mandate was not protecting the president; it was pursuing counterfeiters, and modern-day carders definitely qualify as that). Prosecutors allege Joker’s Stash earned revenues of at least $280 million, but possibly more than $1 billion (the broad range is a consequence of several variables, including the rapid fluctuation in the price of bitcoin and the stolen goods they were peddling).

TALEON

The proprietors of Joker’s Stash may have sold tens of millions of stolen payment cards, but Taleon is by far the bigger fish in this law enforcement action because his various cryptocurrency and cash exchanges have allegedly helped to move billions of dollars into and out of Russia over the past 20 years.

An indictment unsealed today names Taleon as Sergey Sergeevich Ivanov, 44, of Saint Petersburg, Russia. The government says Ivanov, who likely changed his surname from Omelnitskii at some point, laundered money for Joker’s Stash, among many other cybercrime stores.

In a statement today, the Treasury Department said Ivanov has laundered hundreds of millions of dollars’ worth of virtual currency for ransomware actors, initial access brokers, darknet marketplace vendors, and other criminal actors for approximately the last 20 years.

First appearing on Mazafaka in the early 2000s, Taleon was known on the forums as someone who could reliably move large amounts of physical cash. Sources familiar with the investigation said Taleon’s service emerged as one of the few remaining domestic cash delivery services still operating after Russia invaded Ukraine in Feb. 2022.

Taleon set up his service to facilitate transfers between Moscow, St. Petersburg and financial institutions in the West. Taleon’s private messages on some hacker forums have been leaked over the years and indexed by the cyber intelligence platform Intel 471. Those messages indicate Taleon worked on many of the same ATM cashouts as Vega, so it’s clear the two had an established business relationship well before Joker’s Stash came into being.

Sometime around 2013, Taleon launched a partnership with a money transfer business called pm2btc[.]me. PM2BTC allowed customers to convert funds from the virtual currency Perfect Money (PM) into bitcoin, and then have the balance (minus a processing fee) available on a physical debit card that could be used at ATMs, for shopping online, or at retail stores.

A screenshot of a website reviewing PM2BTC.

The U.S. government itself set things in motion for Taleon’s nascent cryptocurrency exchange business in 2013 after the DOJ levied money laundering charges against the proprietors of Liberty Reserve, one of the largest virtual currencies in operation at the time.  Liberty Reserve was heavily used by cybercriminals of all stripes. The government said the service had more than a million users worldwide, and laundered in excess of $6 billion in suspected criminal proceeds.

In the days following the takedown of Liberty Reserve, KrebsOnSecurity ran a story that examined discussions across multiple top Russian cybercrime forums about where crooks could feel safe parking their stolen funds. The answer involved Bitcoin, but also Taleon’s new service.

UAPS

Part of the appeal of Taleon’s exchange was that it gave its vetted customers an “application programming interface” or API that made it simple for dodgy online shops selling stolen goods and cybercrime services to accept cryptocurrency deposits from their customers, and to manage payouts to any suppliers and affiliates.

This API is synonymous with a service Taleon and friends operate in the background called UAPS, short for “Universal Anonymous Payment System.” UAPS has gone by several other names including “Pinpays,” and in October 2014 it landed Joker’s Stash as its first big client.

A source with knowledge of the investigation told KrebsOnSecurity that Taleon is a pilot who owns and flies around in his own helicopter.

Ivanov appears to have little to no social media presence, but the 40-year-old woman he lives with in St. Petersburg does, and she has a photo on her Vkontakte page that shows the two of them in 2019 flying over Lake Ladoga, a large body of water directly north of St. Petersburg.

Sergey “Taleon” Ivanov (right) in 2019 in his helicopter with the woman he lives with, flying over a lake north of St. Petersburg, Russia.

BRIANS CLUB

In late 2015, a major competitor to Joker’s Stash emerged using UAPS for its back-end payments: BriansClub. BriansClub sullies this author’s name, photos and reputation to peddle millions of credit and debit cards stolen from merchants in the United States and around the world.

An ad for BriansClub has been using my name and likeness for years to peddle millions of stolen credit cards.

In 2019, someone hacked BriansClub and relieved the fraud shop of more than 26 million stolen payment cards — an estimated one-third of the 87 million payment card accounts that were on sale across all underground shops at that time. An anonymous source shared that card data with KrebsOnSecurity, which ultimately shared it with a consortium of financial institutions that issued most of the cards.

After that incident, the administrator of BriansClub changed the site’s login page so that it featured a copy of my phone bill, Social Security card, and a link to my full credit report [to this day, random cybercriminals confuse Yours Truly with the proprietor of BriansClub].

Alex Holden is founder of the Milwaukee-based cybersecurity firm Hold Security. Holden has long maintained visibility into cryptocurrency transactions made by BriansClub.

Holden said those records show BriansClub sells tens of thousands of dollars worth of stolen credit cards every day, and that in the last two years alone the BriansClub administrator has removed more than $242 million worth of cryptocurrency revenue from the UAPS platform.

The BriansClub login page, as it looked from late 2019 until recently.

Passive domain name system (DNS) records show that in its early days BriansClub shared a server in Lithuania along with just a handful of other domains, including secure.pinpays[.]com, the crime forum Verified, and a slew of carding shops operating under the banner Rescator.

As KrebsOnSecurity detailed in December 2023, the Rescator shops were directly involved in some of the largest payment card breaches of the past decade. Those include the 2013 breach at Target and the 2014 breach at Home Depot, intrusions that exposed more than 100 million payment card records.

CRYPTEX

In early 2018, Taleon and the proprietors of UAPS launched a cryptocurrency exchange called Cryptex[.]net that has emerged as a major mover of ill-gotten crypto coins.

Taleon reminds UAPS customers they will enjoy 0% commission and no “know your customer” (KYC) requirements “on our exchange Cryptex.”

Cryptex has been associated with quite a few ransomware transactions, including the largest known ransomware payment to date. In February 2024, a Fortune 50 ransomware victim paid a record $75 million ransom to a Russian cybercrime group that calls themselves the Dark Angels. A source with knowledge of the investigation said an analysis of that payment shows roughly half of it was processed through Cryptex.

That source provided a screen shot of Cryptex’s sending and receiving exposure as viewed by Chainalysis, a company the U.S. government and many cryptocurrency exchanges rely on to flag transactions associated with suspected money laundering, ransomware payouts, or facilitating payments for darknet websites.

Chainalysis finds that Cryptex has received more than $1.6 billion since its inception, and that this amount is roughly equal to its sending exposure (although the total number of outflows is nearly half of the inflows).

The graphic indicates that a great deal of the money flowing into Cryptex — roughly a quarter of it — is coming from bitcoin ATMs around the world. Experts say most of those ATM inflows to Cryptex are bitcoin ATM cash deposits from customers of carding websites like BriansClub and Joker’s Stash.

A screenshot of Chainalysis’s summary of illicit activity on Cryptex since the exchange’s inception in 2018.

The indictments released today do not definitively connect Taleon to Cryptex. However, PM2BTC (which teamed up with Taleon to launch UAPS and Pinpays) and Cryptex have now been sanctioned by the U.S. Department of the Treasury.

Treasury’s Financial Crimes Enforcement Network (FinCEN) levied sanctions today against PM2BTC under a powerful new “Section 9714” authority included in the Combating Russian Money Laundering Act, changes enacted in 2022 to make it easier to target financial entities involved in laundering money for Russia.

Treasury first used this authority last year against Bitzlato, a cryptocurrency exchange operating in Russia that became a money laundering conduit for ransomware attackers and dark market dealers.

THE LAUNDROMAT

An investigation into the corporate entities behind UAPS and Cryptex reveals an organization incorporated in 2012 in Scotland called Orbest Investments LP. Records from the United Kingdom’s business registry show the owners of Orbest Investments are two entities: CS Proxy Solutions CY, and RM Everton Ltd.

Public business records further reveal that CS Proxy Solutions and RM Everton are co-owners of Progate Solutions, a holding company that featured prominently in a June 2017 report from Bellingcat and Transparency International (PDF) on money laundering networks tied to the Kremlin.

“Law enforcement agencies believe that the total amount laundered through this process could be as high as US$80 billion,” the joint report reads. “Although it is not clear where all of this money came from, investigators claim it includes significant amounts of money that were diverted from the Russian treasury and state contracts.”

Their story built on reporting published earlier that year by the Organized Crime and Corruption Project (OCCRP) and Novaya Gazeta, which found that at least US$20.8 billion was secretly moved out of Russia between 2010 and 2014 through a vast money laundering machine comprising over 5,000 legal entities known as “The Laundromat.”

Image: occrp.org

“Using company records, reporters tracked the names of some clients after executives refused to give them out,” the OCCRP report explains. “They found the heavy users of the scheme were rich and powerful Russians who had made their fortunes from dealing with the Russian state.”

Rich Sanders is a blockchain analyst and investigator who advises the law enforcement and intelligence community. Sanders just returned from a three-week sojourn through Ukraine, traveling with Ukrainian soldiers while mapping out dodgy Russian crypto exchanges that are laundering money for narcotics networks operating in the region. Sanders said today’s sanctions by the Treasury Department will likely have an immediate impact on Cryptex and its customers.

“Whenever an entity is sanctioned, the implications on-chain are immense,” Sanders told KrebsOnSecurity. “Regardless of whether an exchange is actually compliant or just virtue signals it, it is the case across the board that exchanges will pay attention to these sanctions.”

“This action shows these payment processors for illicit platforms will get attention eventually,” Sanders continued. “Even if it took way too long in this case, Cryptex knew the majority of their volume was problematic, knew why it was problematic, and did it anyway. And this should be a wake up call for other exchanges that know full well that most of their volume is problematic.”

The U.S. Department of State is offering a reward of up to $10 million each for information leading to the arrests and/or convictions of Shakhmametov and Ivanov. The State announcement says separate rewards of up to $1 million each are being offered for information leading to the identification of other leaders of the Joker’s Stash criminal marketplace (other than Shakhmametov), as well as the identification of other key leaders of the UAPS, PM2BTC, and PinPays transnational criminal groups (other than Ivanov).

Image: U.S. Secret Service.

Planet DebianMelissa Wen: Reflections on 2024 Linux Display Next Hackfest

Hey everyone!

The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year’s event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year’s edition and this blog post summarizes the experience and its fruits.

One of the highlights of this year’s hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remotely). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here.

The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights

The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, making everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.

Structured Agenda and Timeboxes: Each session had a defined scope, time limit (1h20 or 2h10), and began with an introductory talk on the topic.

  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.

Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.

  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the running schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event matrix room.

Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.

  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.

  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrela Galicia Beer (MEGA).

Fruitful Discussions and Follow-up

The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists.

With the KMS color management API taking shape, we discussed refinements and best approaches to cover the variety of color pipeline from different hardware-vendors. We are also investigating techniques for a performant SDR<->HDR content reproduction and reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR

Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discuss Color and HDR across Linux Display stack layers.

Color/HDR (Kernel-Level)

Harry Wentland (AMD) led this session.

Here, kernel developers shared the Color Management pipelines of AMD, Intel and NVidia. We had diagrams and explanations from HW vendors' developers that discussed differences, constraints and paths to fit them into the KMS generic color management properties, such as advertising modeset needs, IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware.

Upstream work related to this session:

Color/HDR (Compositor-Level)

Sebastian Wick (RedHat) led this session.

It started with Sebastian’s presentation covering Wayland color protocols and compositor implementation, followed by an explanation of the APIs provided by Wayland and how they can be used to achieve better color management for applications, plus discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria.

Upstream work related to this session:

Color/HDR (Use Cases and Testing)

Christopher Cameron (Google) and Melissa Wen (Igalia) led this session.

In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections about real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to put participants on the same page about how to transition between SDR and HDR and somehow “emulate” HDR.

We also discussed on the usage of a kernel background color property.

Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency

Mario Limonciello (AMD) led this session.

Mario gave an introductory presentation about AMD ABM (adaptive backlight management) that is similar to Intel DPST. After some discussions, we agreed on exposing a kernel property for power saving policy. This work was already merged on kernel and the userspace support is under development.

Upstream work related to this session:

Strategy for video and gaming use-cases

Leo Li (AMD) led this session.

Miguel Casas (Google) started this session with a presentation of Overlays in Chrome/OS Video, explaining the main goal of power saving by switching off GPU for accelerated compositing and the challenges of different colorspace/HDR for video on Linux.

Then Leo Li presented different strategies for video and gaming and we discussed the userspace need of more detailed feedback mechanisms to understand failures when offloading. Also, creating a debugFS interface came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API

Xaver Hugl (KDE/BlueSystems) led this session.

Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the Kernel limits the lifetime of realtime threads and if a modeset takes too long, the thread will be killed and thus the compositor as well. Also, simple page flips take longer than expected and drivers should optimize them.

Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties. For example, color management updates take much longer.

In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete. Also discussed was an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more… and beer

We also had sessions to discuss a new KMS API to mitigate headaches with VRR and frame limits, such as different brightness levels at different refresh rates, abrupt refresh-rate changes, low frame rate compensation (LFC) and precise timing in VRR.

On Display Control we discussed features missing in the current KMS interface for HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people.

The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamic mux switching display signal between discrete and integrated GPUs.

In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a Linux Display “wish list”, which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling semantics, and Wayland protocols for advertising to clients which colorspaces are supported.

We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages, color management / HDR / ICC profiles support, addressing issues such as flickering when color primaries don’t match, etc.

After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrela Galicia Beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions

Participants provided valuable feedback on the hackfest, including suggestions for future improvements.

  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshments, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest

We can’t help but thank the 40 participants, who engaged in-person or virtually on relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda.

A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (RedHat) and Xaver Hugl (KDE/BlueSystems) for the effort in preparing the sessions, explaining the topics and guiding discussions. My acknowledgement to the other in-person participants who made such an effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face.

Finally, thanks virtual participants who couldn’t make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable inputs even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo (Red Hat), Jiri Koten (RedHat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD).

We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!

Worse Than FailureCodeSOD: True Parseimony

We've seen this pattern many times here:

return (someCondition) ? true : false;

or

if (someCondition)
{
  return true;
}
else
{
  return false;
}

There are many variations on it, all of which highlight someone's misunderstanding of boolean expressions. Today Kerry sends us a "fun" little twist, in C#.

return (someCondition || someOtherCondition) ? Boolean.Parse("true") : Boolean.Parse("false");

The conditions have been elided by Kerry, but they're long and complicated, rendering the statement less readable than it appears here.

But here we've taken the "if-condition-return-condition" pattern and added needless string parsing to it, when a simple return someCondition || someOtherCondition; would have done.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsRainbow Warrior

Author: David Barber So far, the Time Traveller had found nothing worth collecting. Also, he was being stared at as he walked round the market. He seemed to be the only person dressed in a suit and tie as portrayed in pictures from this time, and while some of the locals wore head coverings, none […]

The post Rainbow Warrior appeared first on 365tomorrows.

,

Planet DebianRussell Coker: The PiKVM

Hardware

I have just setup a PiKVM, here’s the Amazon link for the KVM hardware (case and Pi hat etc) and here’s an Amazon link for a Pi4 to match.

The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It’s really convenient being able to change the playback speed from low speeds (like 1/4 original speed) to double speed when watching such a video. One thing to note is that there are some revisions to the hardware that aren’t covered in the videos; the device I received had some improvements that made it easier to assemble which weren’t in the video.

When you buy the device and Pi you need to also get a SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse (it MUST NOT BE USB-C to USB-C!). When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn’t work for reasons I don’t understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it.

The system has a bright OLED display to show the IP address and some other information which is very handy.

The hardware is easy enough for a 12yo to setup. The construction of the parts are solid and well engineered with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection which I didn’t test. I definitely recommend this.

Software

This is the download link for the RaspberryPi images for the PiKVM [3]. The “v3” image matches the hardware from the Amazon link I provided.

The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it’s using and ssh to it to change the password (it is configured to allow ssh as root with password authentication).

If you get the kit to assemble it (as opposed to buying a completed unit already assembled) then you need to run the following commands as root to enable the OLED display. This means that after assembling it you can’t get the IP address without plugging in a monitor with a micro-HDMI to HDMI cable or having access to the DHCP server logs.

rw    # remount the root filesystem read-write (it is read-only by default)
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown
systemctl enable --now kvmd-fan
ro    # return the root filesystem to read-only

The default webadmin username/password is admin/admin.

To change the passwords run the following commands:

rw
kvmd-htpasswd set admin    # change the web UI "admin" password
passwd root                # change the system root password
ro

It is configured to have the root filesystem mounted read-only which is something I thought had gone out of fashion decades ago. I don’t think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot.

By default it uses a self-signed SSL certificate so with a Chrome based browser you get an error when you connect where you have to select “advanced” and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get a SSL certificate to use on an internal view of your DNS to make it work normally with SSL.

The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely “lsusb” on the machine being managed only reports a single USB device entry for it which covers both keyboard and mouse.

Managing Computers

For a tower PC disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM, then it should all just work.

For a laptop connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation, then Linux will follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn’t appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can’t be simulated externally. So for using this to work on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings.

It is possible to configure the Linux kernel to mirror display to external HDMI and an internal laptop screen. But this doesn’t seem useful to me as the use cases for this device don’t require that. If you are using it for a server that doesn’t have iDRAC/ILO or other management hardware there will be no other “monitor” and all the output will go through the only connected HDMI device. My main use for it in the near future will be for supporting remote laptops, when Linux has a problem on boot as an easier option than talking someone through Linux commands and for such use it will be a temporary thing and not something that is desired all the time.

For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings. This configuration option is decent for the case where a fixed set of monitors are used but not so great if your requirement is “display a login screen on anything that’s available”. Is there an xdm type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?

Conclusion

The PiKVM is a well engineered and designed product that does what’s expected at a low price. There are lots of minor issues with using it which aren’t the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.

The feature for connecting to an ATX PSU was unexpected and could be really handy for some people, it’s not something I have an immediate use for but is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package giving the user choices about how they use it, many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive.

The web UI wasn’t as user friendly as it might have been, but it’s a lot better than iDRAC so I don’t have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try and emulate the Fn options and keys for volume control on systems that support it.

Krebs on SecurityTimeshare Owner? The Mexican Drug Cartels Want You

The FBI is warning timeshare owners to be wary of a prevalent telemarketing scam involving a violent Mexican drug cartel that tries to trick people into believing someone wants to buy their property. This is the story of a couple who recently lost more than $50,000 to an ongoing timeshare scam that spans at least two dozen phony escrow, title and realty firms.

One of the phony real estate companies trying to scam people out of money over fake offers to buy their timeshares.

One evening in late 2022, someone phoned Mr. & Mrs. Dimitruk, a retired couple from Ontario, Canada and asked whether they’d ever considered selling their timeshare in Florida. The person on the phone referenced their timeshare address and said they had an interested buyer in Mexico. Would they possibly be interested in selling it?

The Dimitruks had purchased the timeshare years ago, but it wasn’t fully paid off — they still owed roughly $5,000 before they could legally sell it. That wouldn’t be an issue for this buyer, the man on the phone assured them.

Within a few days, their contact at an escrow company in New York called ecurrencyescrow[.]llc faxed them forms to fill out and send back to start the process of selling their timeshare to the potential buyer, who had offered an amount that was above what the property was likely worth.

After certain forms were signed and faxed, the Dimitruks were asked to send a small wire transfer of more than $3,000 to handle “administrative” and “processing” fees, supposedly so that the sale would not be held up by any bureaucratic red tape down in Mexico.

These document exchanges went on for almost a year, during which time the real estate brokers made additional financial demands, such as tax payments on the sale, and various administrative fees. Mrs. Dimitruk even sent them a $5,000 wire to pay off her remaining balance on the timeshare they thought they were selling.

In a phone interview with KrebsOnSecurity, Mr. Dimitruk said they lost over $50,000.

“They kept calling me after that saying, ‘Hey your money is waiting for you here’,” said William Dimitruk, a 73-year-old retired long-haul truck driver. “They said ‘We’re going to get in trouble if the money isn’t returned to you,’ and gave me a toll-free number to call them at.”

In the last call he had with the scammers, the man on the other end of the line confessed that some bad people had worked for them previously, but that those employees had been fired.

“Near the end of the call he said, ‘You’ve been dealing with some bad people and we fired all those bad guys,'” Dimitruk recalled. “So they were like, yeah it’s all good. You can go ahead and pay us more and we’ll send you your money.”

According to the FBI, there are indeed some very bad people behind these scams. The FBI warns the timeshare fraud schemes have been linked to the Jalisco New Generation drug cartel in Mexico.

In July 2024, the FBI and the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) warned the Jalisco cartel is running boiler room-like call centers that target people who own timeshares:

“Mexico-based [transnational criminal organizations] such as the Jalisco New Generation Cartel are increasingly targeting U.S. owners of timeshares in Mexico through complex and often yearslong telemarketing, impersonation, and advance fee schemes. They use the illicit proceeds to diversify their revenue streams and finance other criminal activities, including the manufacturing and trafficking of illicit fentanyl and other synthetic drugs into the United States.”

A July 2024 CBS News story about these scams notes that U.S. and Mexican officials last year confirmed that as many as eight young workers were confirmed dead after they apparently tried to quit jobs at a call center operated by the Jalisco cartel.

Source: US Department of the Treasury’s Office of Foreign Assets Control.

The phony escrow company the Dimitruks dealt with — ecurrencyescrow[.]llc — is no longer online. But the documents sent by their contact there referenced a few other still-active domains, including realestateassetsllc[.]com

The original registration records of both of these domains reference another domain — datasur[.]host — that is associated with dozens of other real estate and escrow-themed domains going back at least four years. Some of these domains are no longer active, while others have been previously suspended at different hosting providers.

061nyr[.]net
061-newyorkrealty[.]net
1nydevelopersgroupllc[.]com
1oceanrealtyllc[.]com
advancedclosingservicesllc[.]com
americancorporatetitle[.]com
asesorialegalsiglo[.]com
atencion-tributaria[.]com
carolinasctinc[.]net
closingandsettlementservices[.]com
closingandsettlementsllc[.]com
closingsettlementllc[.]com
crefaescrowslimited[.]net
ecurrencyescrow[.]llc
empirerllc[.]com
fiduciarocitibanamex[.]com
fondosmx[.]org
freightescrowcollc[.]com
goldmansachs-investment[.]com
hgvccorp[.]com
infodivisionfinanciera[.]com
internationaladvisorllc[.]com
jadehillrealtyllc[.]com
lewisandassociaterealty[.]com
nyreputable[.]org
privateinvestment.com[.]co
realestateassetsllc[.]com
realestateisinc[.]com
settlementandmanagement[.]com
stllcservices[.]com
stllcservices[.]net
thebluehorizonrealtyinc[.]com
walshrealtyny[.]net
windsorre[.]com

By loading ecurrencyescrowllc[.]com into the Wayback Machine at archive.org, we can see text at the top of the page that reads, “Visit our resource library for videos and tools designed to make managing your escrow disbursements a breeze.”

Searching on that bit of text at publicwww.com shows the same text appears on the website of an escrow company called Escshieldsecurity Network (escshieldsecurity[.]com). This entity claims to have been around since 2009, but the domain itself is less than two years old, and there is no contact information associated with the site. The Pennsylvania Secretary of State also has no record of a business by this name at its stated address.

Incredibly, Escshieldsecurity pitches itself as a solution to timeshare closing scams.

“By 2015, cyber thieves had realized the amount of funds involved and had targeted the real estate, title and settlement industry,” the company’s website states. “As funding became more complex and risky, agents and underwriters had little time or resources to keep up. The industry needed a simple solution that allowed it to keep pace with new funding security needs.”

The domains associated with this scam will often reference legitimate companies and licensed professionals in the real estate and closing businesses, but those real professionals often have no idea they’re being impersonated until someone starts asking around. The truth is, the original reader tip that caused KrebsOnSecurity to investigate this scheme came from one such professional whose name and reputation was being used to scam others.

It is unclear whether the Dimitruks were robbed by people working for the Jalisco cartel, but it is clear that whoever is responsible for managing many of the above-mentioned domains — including the DNS provider datasur[.]host — recently had their own computer compromised by information-stealing malware.

That’s according to data collected by the breach tracking service Constella Intelligence [Constella is currently an advertiser on KrebsOnSecurity]. Constella found that someone using the email address exposed in the DNS records for datasur[.]host — jyanes1920@gmail.com — also was relieved of credentials for managing most of the domains referenced above at a Mexican hosting provider.

It’s not unusual for victims of such scams to keep mum about their misfortune. Sometimes, it’s shame and embarrassment that prevents victims from filing a report with the local authorities. But in this case, victims who learn they’ve been robbed by a violent drug cartel have even more reason to remain silent.

William Dimitruk said he and his wife haven’t yet filed a police report. But after acknowledging it could help prevent harm to other would-be victims, Mr. Dimitruk said he would consider it.

There is another reason victims of scams like this should notify authorities: Occasionally, the feds will bust up one of these scam operations and seize funds that were stolen from victims. But those investigations can take years, and it can be even more years before the government starts trying to figure out who got scammed and how to remunerate victims. All too often, the real impediment to returning some of those losses is that the feds have no idea who the victims are.

If you are the victim of a timeshare scam like this, please consider filing a report with the FBI’s Internet Crime Complaint Center (IC3), at ic3.gov. Other places where victims may wish to file a complaint:

Federal Trade Commission – https://www.ftccomplaintassistant.gov
International Consumer Protection and Enforcement Network – https://www.econsumer.gov/en
Profeco – Mexican Attorney General – https://consulmex.sre.gob.mx/montreal/index.php/en/foreigners/services-foreigners/318-consumer-protection

Planet DebianMelissa Wen: Reflections on 2024 Linux Display Next Hackfest

Hey everyone!

The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year’s event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year’s edition and this blog post summarizes the experience and its fruits.

One of the highlights of this year’s hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remotely). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here.

The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights

The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, making everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.

Structured Agenda and Timeboxes: Each session had a defined scope, time limit (1h20 or 2h10), and began with an introductory talk on the topic.

  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.

Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.

  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the run-time schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event matrix room.

Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.

  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.

  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrella Galicia Beer (MEGA).

Fruitful Discussions and Follow-up

The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists.

With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction and for reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR

Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discussing Color and HDR across the Linux Display stack layers.

Color/HDR (Kernel-Level)

Harry Wentland (AMD) led this session.

Here, kernel developers shared the color management pipelines of AMD, Intel and NVIDIA. Hardware-vendor developers provided diagrams and explanations, and discussed differences, constraints and paths to fit them into the generic KMS color management properties, such as advertising modeset needs, IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware.

Upstream work related to this session:

Color/HDR (Compositor-Level)

Sebastian Wick (RedHat) led this session.

It started with Sebastian’s presentation covering Wayland color protocols and compositor implementation, followed by an explanation of the APIs provided by Wayland and how they can be used to achieve better color management for applications, plus discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria.

Upstream work related to this session:

Color/HDR (Use Cases and Testing)

Christopher Cameron (Google) and Melissa Wen (Igalia) led this session.

In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections on real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to get participants on the same page about how to transition between SDR and HDR and how to somehow “emulate” HDR.

We also discussed the usage of a kernel background color property.

Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency

Mario Limonciello (AMD) led this session.

Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussions, we agreed on exposing a kernel property for power saving policy. This work was already merged into the kernel, and the userspace support is under development.

Upstream work related to this session:

Strategy for video and gaming use-cases

Leo Li (AMD) led this session.

Miguel Casas (Google) started this session with a presentation of Overlays in Chrome/OS Video, explaining the main goal of saving power by switching off the GPU for accelerated compositing, and the challenges of different colorspaces/HDR for video on Linux.

Then Leo Li presented different strategies for video and gaming, and we discussed the userspace need for more detailed feedback mechanisms to understand failures when offloading. Also, creating a debugFS interface came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API

Xaver Hugl (KDE/BlueSystems) led this session.

Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads, and if a modeset takes too long, the thread will be killed and thus the compositor as well. Also, simple page flips take longer than expected and drivers should optimize them.

Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties. For example, color management updates take much longer.

In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, as well as an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more… and beer

We also had sessions to discuss a new KMS API to mitigate headaches with VRR and frame limits, such as different brightness levels at different refresh rates, abrupt changes of refresh rate, low frame rate compensation (LFC), precise timing in VRR, and more.

On Display Control we discussed features missing in the current KMS interface for HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people.

The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamic mux switching display signal between discrete and integrated GPUs.

In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a Linux Display “wish list”, which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling semantics, and Wayland protocols for advertising to clients which colorspaces are supported.

We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages, color management / HDR / ICC profiles support, addressing issues such as flickering when color primaries don’t match, etc.

After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrella Galicia Beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions

Participants provided valuable feedback on the hackfest, including suggestions for future improvements.

  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshments, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest

We can’t help but thank the 40 participants, who engaged in-person or virtually on relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda.

A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (RedHat) and Xaver Hugl (KDE/BlueSystems) for the effort in preparing the sessions, explaining the topics and guiding the discussions. My thanks also to the other in-person participants who made such an effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face.

Finally, thanks to the virtual participants who couldn’t make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable input even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo, Jiri Koten (RedHat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD).

We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!

Worse Than FailureSpace for Queries

Maria was hired as a consultant by a large financial institution. The institution had a large pile of ETL scripts, reports, analytics dashboards, and the like, which needed to be supported. The challenge was that the system had been built entirely by people who weren't developers. Due to the vagaries of internal billing, hiring IT staff to do the work would have put it under a charge code which would have drained the wrong budget, so they just did their best.

The quality of the system wasn't particularly good, and it required a lot of manual support to actually ensure that it kept working. It was several hundred tables, with no referential integrity constraints on them, no validation rules, no concept of normalization (or de-normalization- it was strictly abnormalized tables) and mostly stringly typed data. It all sat in an MS SQL Server, and required daily manual runs of stored procedures to actually function.

Maria spent a lot of time exploring the data, trying to understand the various scripts, stored procedures, manual processes, and just the layout of the data. As part of this, she ran SELECT queries directly from the SQL Server Management Studio (SSMS), based on the various ETL and reporting jobs.

One reporting step queried the "BusinessValue" column from a table. So Maria wrote a query that was similar, trying to understand the data in that column:

SELECT Id, CostCentreCode, BusinessValue FROM DataBusinessTable

This reported "Invalid Column Name: 'BusinessValue'".

Maria re-read the query she was copying. She opened the definition of the table in the SSMS UI. There was a column clearly labeled "BusinessValue". She read it carefully, ensuring that there wasn't a typo or spelling error, either in her query or the table definition.

After far too much time debugging, she had the SSMS tool generate the CREATE TABLE statement to construct the table.

CREATE TABLE DataBusinessTable
([Id] Number IDENTITY,
 …,
  [BusinessValue ] TEXT
)

Maria felt like she'd fallen for the worst troll in the history of trolling. The column name had a space at the end.
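For illustration, a hedged sketch of how such a column presumably has to be referenced in T-SQL: the trailing space must appear inside the delimited identifier, exactly as it does in the table definition above.

-- Note the space before the closing bracket; without it the name doesn't match
SELECT Id, CostCentreCode, [BusinessValue ]
FROM DataBusinessTable;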

According to Maria, this has since been "fixed" in SQL Server- you can now run queries which omit trailing whitespace from names, but at the time she was working on this project, that's clearly not how things worked.

The fact that this trailing whitespace problem was common enough that the database engine added a feature to avoid it is, in fact, the real WTF.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsOn the Road to Damascus

Author: Alastair Millar I was between cons and heading down towards Damascus, Arkansas, when I heard the Word. It being Sunday, the holoscreens in the corners of the diner were showing a syndicated broadcast from one of the Texan megachurches. “Welcome, friends! Welcome all, whatever your age, sex, gender, ethnicity or degree of cybernetization! The […]

The post On the Road to Damascus appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RcppFastAD 0.0.4 on CRAN: Updated Again

A new release 0.0.4 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release updates the quick fix in release 0.0.3 from a good week ago. James took a good look and properly disambiguated the statement that led clang to complain, so we are back to compiling as C++17 under all compilers, which makes for a slightly wider reach.

The NEWS file for this release follows.

Changes in version 0.0.4 (2024-09-24)

  • The package now properly addresses a clang warning on empty variadic macros arguments and is back to C++17 (James in #10)

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram An Analysis of the EU’s Cyber Resilience Act

A good—long, complex—analysis of the EU’s new Cyber Resilience Act.

Cryptogram New Windows Malware Locks Computer in Kiosk Mode

Clever:

A malware campaign uses the unusual method of locking users in their browser’s kiosk mode to annoy them into entering their Google credentials, which are then stolen by information-stealing malware.

Specifically, the malware “locks” the user’s browser on Google’s login page with no obvious way to close the window, as the malware also blocks the “ESC” and “F11” keyboard keys. The goal is to frustrate the user enough that they enter and save their Google credentials in the browser to “unlock” the computer.

Once credentials are saved, the StealC information-stealing malware steals them from the credential store and sends them back to the attacker.

I’m sure this works often enough to be a useful ploy.

365 TomorrowsThe List

Author: Mark Renney We spend most of our time within the game. Until less than a year ago, I had been one of the majority and I believed that the opportunities we had were unlimited. It is all right there at our fingertips. All we have to do is simply reach out and grab it […]

The post The List appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Secure Cryptography

Governments have a difficult relationship with cryptography. Certainly, they benefit from having secure, reliable and fast encryption. Arguably, their citizens also benefit- I would argue that being able to, say, handle online banking transactions securely is a net positive. But it creates a prisoner's dilemma: malicious individuals may conceal their activity behind encryption. From the perspective of a state authority, this is bad.

Thus, you get the regular calls for a cryptosystem which allows secure communications but also allows the state to snoop on those messages. Of course, if you intentionally weaken a cryptographic system so that some people can bypass it, you've created a system which anyone can bypass. You can't have secure encryption which also allows snooping, any more than you can have an even prime number larger than two.

This leaves us in a situation where mathematicians and cryptography experts are shouting, "This isn't possible!" and cops and politicians are shouting "JUST NERD HARDER!" back.

Well, today's anonymous submitter found a crypto library which promises to allow secure communications and allow nation states to break that encryption. They've nerded harder! Let's take a look at some of their C code.

unsigned long int alea(void)
{
    FILE * f1 = 0;
    unsigned char val = 0;
    /*float rd = 0;*/
    unsigned long int rd = 0;

    f1 = fopen("/dev/random","r");
    fread(&val,sizeof(unsigned char),1,f1);
    /*rd = (float)(val) / (ULONG_MAX);*/
    rd = (unsigned long int)((val) / (UCHAR_MAX));
    fclose(f1);

    return rd;
}

This function reads a byte from /dev/random to get us some random data for key generation. Unfortunately, it's using the XKCD algorithm: the integer division of val by UCHAR_MAX means this function returns 0 for every value of val except UCHAR_MAX itself, which yields 1. Note that they divide val by UCHAR_MAX before casting the result to an unsigned long int. There's also the issue that 8 bits of entropy are always going to be 8 bits of entropy- casting to an unsigned long int isn't going to be any more random than just passing back the 8 bits, because you only read 8 bits of randomness. There's also no reason why they couldn't have simply read an unsigned long's worth of random data.

I appreciate the comment which indicates that rd used to be a floating point number. It doesn't make this code any better, but it's nice to see that they keep trying.
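To make the intent concrete, here is a minimal sketch of what such a function could have looked like. This is not the library's actual code; alea_fixed is a hypothetical name and the error handling is deliberately simplistic.

#include <stdio.h>

/* Fill an unsigned long with random bytes instead of collapsing one byte
 * to 0 or 1. Returns 0 if the random device cannot be read. */
static unsigned long alea_fixed(void)
{
    unsigned long rd = 0;
    FILE *f = fopen("/dev/urandom", "r");   /* non-blocking pool */

    if (f == NULL)
        return 0;

    if (fread(&rd, sizeof rd, 1, f) != 1)
        rd = 0;

    fclose(f);
    return rd;
}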

Let's also take a peek at a use-after-free waiting to happen:

void MY_FREE(void *p)
{
   if (p == NULL)
      return;

   free(p);
   p = NULL;
}

They wrote their own MY_FREE function, which adds a NULL check around the pointer- don't free the memory if the pointer points at NULL. Nothing wrong with that (though this is better enforced structurally with clear ownership of memory, and not through conditional checks, so actually, yes, there's something wrong with that). After we free the memory pointed to by the pointer, we set the pointer equal to NULL.

Except we don't. We set the local variable to NULL. So when the code does things like:

  if(v->aliveness == NULL)
   {
    MY_FREE(v);
    return(v);
   }

v is a pointer to our struct. If the aliveness value is NULL, we want to delete that data from memory, so we call MY_FREE, and then we return our pointer to the struct- which is unchanged and definitely not null. It's pointing to what is now freed memory. If anyone touches it, the whole program blows up.
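For what it's worth, one conventional way to get the intended behavior in C is a macro that assigns NULL to the caller's own variable; since free(NULL) is already a safe no-op, the explicit check isn't even needed. This is just a sketch, and MY_FREE_SAFE is a hypothetical name, not something from the library.

#include <stdlib.h>

/* Frees the pointee and nulls the caller's variable, because the macro
 * expands at the call site rather than copying the pointer by value. */
#define MY_FREE_SAFE(p)  \
    do {                 \
        free(p);         \
        (p) = NULL;      \
    } while (0)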

Here's the upshot: it's almost a guarantee that this program has undefined behavior in there, someplace. This means the compiler is free to do anything, including implementing a dual custody cryptographic algorithm which is magically secure and prevents abuse by state actors. It's as likely as nasal demons.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianVasudev Kamath: Note to Self: Enabling Secure Boot with UKI on Debian

Note

This post is a continuation of my previous article on enabling the Unified Kernel Image (UKI) on Debian.

In this guide, we'll implement Secure Boot by taking full control of the device, removing preinstalled keys, and installing our own. For a comprehensive overview of the benefits and process, refer to this excellent post from rodsbooks.

Key Components

To implement Secure Boot, we need three essential keys:

  1. Platform Key (PK): The top-level key in Secure Boot, typically provided by the motherboard manufacturer. We'll replace the vendor-supplied PK with our own for complete control.
  2. Key Exchange Key (KEK): Used to sign updates for the Signatures Database and Forbidden Signatures Database.
  3. Database Key (DB): Used to sign or verify binaries (bootloaders, boot managers, shells, drivers, etc.).

There's also the Forbidden Signatures Database (dbx), which is the opposite of the DB. We won't be generating dbx entries in this guide.

Preparing for Key Enrollment

Before enrolling our keys, we need to put the device in Secure Boot Setup Mode. Verify the status using the bootctl status command. You should see output similar to the following image:

UEFI Setup mode

Generating Keys

Follow these instructions from the Arch Wiki to generate the keys manually. You'll need the efitools and openssl packages. I recommend using rsa:2048 as the key size for better compatibility with older firmware.
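As a reference, here is a condensed sketch of that procedure, assuming the efitools and openssl packages are installed; the subject names are placeholders and one GUID is reused for all three keys.

# Generate a self-signed certificate plus an EFI signature list for each key
GUID="$(uuidgen --random)"
for key in PK KEK db; do
    openssl req -newkey rsa:2048 -nodes -keyout "${key}.key" \
        -new -x509 -sha256 -days 3650 -subj "/CN=my ${key}/" -out "${key}.crt"
    cert-to-efi-sig-list -g "${GUID}" "${key}.crt" "${key}.esl"
done

# Produce the signed .auth updates: the PK signs itself and the KEK,
# and the KEK signs the db
sign-efi-sig-list -g "${GUID}" -k PK.key  -c PK.crt  PK  PK.esl  PK.auth
sign-efi-sig-list -g "${GUID}" -k PK.key  -c PK.crt  KEK KEK.esl KEK.auth
sign-efi-sig-list -g "${GUID}" -k KEK.key -c KEK.crt db  db.esl  db.auth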

After generating the keys, copy all .auth files to the /efi/loader/keys/<hostname>/ folder. For example:

$ sudo ls /efi/loader/keys/chamunda
db.auth  KEK.auth  PK.auth

Signing the Bootloader

Sign the systemd-boot bootloader with your new keys:

sbsign --key <path-to db.key> --cert <path-to db.crt> \
   /usr/lib/systemd/boot/efi/systemd-bootx64.efi

Install the signed bootloader using bootctl install. The output should resemble this:

bootctl install

Note

If you encounter warnings about mount options, update your fstab with the `umask=0077` option for the EFI partition.

Verify the signature using sbsign --verify:

sbsign verify
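For reference, a hedged example of that check; the path assumes the ESP is mounted at /efi with the standard systemd-boot layout used elsewhere in this post.

# Check the installed bootloader against your db certificate
sbverify --cert db.crt /efi/EFI/systemd/systemd-bootx64.efi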

Configuring UKI for Secure Boot

Update the /etc/kernel/uki.conf file with your key paths:

SecureBootPrivateKey=/path/to/db.key
SecureBootCertificate=/path/to/db.crt

Signing the UKI Image

On Debian, use dpkg-reconfigure to sign the UKI image for each kernel:

sudo dpkg-reconfigure linux-image-$(uname -r)
# Repeat for other kernel versions if necessary

You should see output similar to this:

sudo dpkg-reconfigure linux-image-$(uname -r)
/etc/kernel/postinst.d/dracut:
dracut: Generating /boot/initrd.img-6.10.9-amd64
Updating kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukicc7vcxhy --output /tmp/kernel-install.staging.QLeGLn/uki.efi
Wrote signed /tmp/kernel-install.staging.QLeGLn/uki.efi
/etc/kernel/postinst.d/zz-systemd-boot:
Installing kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukit7r1hzep --output /tmp/kernel-install.staging.dWVt5s/uki.efi
Wrote signed /tmp/kernel-install.staging.dWVt5s/uki.efi

Enrolling Keys in Firmware

Use systemd-boot to enroll your keys:

systemctl reboot --boot-loader-menu=0

Select the enroll option with your hostname in the systemd-boot menu.

After key enrollment, the system will reboot into the newly signed kernel. Verify with bootctl:

uefi enabled

Dealing with Lockdown Mode

Secure Boot enables lockdown mode on distro-shipped kernels, which restricts certain features like kprobes/BPF and DKMS drivers. To avoid this, consider compiling the upstream kernel directly, which doesn't enable lockdown mode by default.

As Linus Torvalds has stated, "there is no reason to tie Secure Boot to lockdown LSM." You can read more about Torvalds' opinion on UEFI Secure Boot being tied to lockdown.

Next Steps

One thing that remains is automating the signing of systemd-boot on upgrade, which is currently a manual process. I'm exploring dpkg triggers for achieving this, and if I succeed, I will write a new post with details.
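For reference, the manual steps that such a trigger would need to automate are essentially a re-run of the earlier signing and installation commands whenever the systemd-boot package ships a new binary (key and certificate paths as configured above):

# Re-sign the new systemd-boot binary and refresh the copy on the ESP
sbsign --key /path/to/db.key --cert /path/to/db.crt \
    /usr/lib/systemd/boot/efi/systemd-bootx64.efi
bootctl install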

Acknowledgments

Special thanks to my anonymous colleague who provided invaluable assistance throughout this process.

Cryptogram Israel’s Pager Attacks and Supply Chain Vulnerabilities

Israel’s brazen attacks on Hezbollah last week, in which hundreds of pagers and two-way radios exploded and killed at least 37 people, graphically illustrated a threat that cybersecurity experts have been warning about for years: Our international supply chains for computerized equipment leave us vulnerable. And we have no good means to defend ourselves.

Though the deadly operations were stunning, none of the elements used to carry them out were particularly new. The tactics employed by Israel, which has neither confirmed nor denied any role, to hijack an international supply chain and embed plastic explosives in Hezbollah devices have been used for years. What’s new is that Israel put them together in such a devastating and extravagantly public fashion, bringing into stark relief what the future of great power competition will look like—in peacetime, wartime and the ever expanding gray zone in between.

The targets won’t be just terrorists. Our computers are vulnerable, and increasingly so are our cars, our refrigerators, our home thermostats and many other useful things in our orbits. Targets are everywhere.

The core component of the operation, implanting plastic explosives in pagers and radios, has been a terrorist risk since Richard Reid, the so-called shoe bomber, tried to ignite some on an airplane in 2001. That’s what all of those airport scanners are designed to detect—both the ones you see at security checkpoints and the ones that later scan your luggage. Even a small amount can do an impressive degree of damage.

The second component, assassination by personal device, isn’t new, either. Israel used this tactic against a Hamas bomb maker in 1996 and a Fatah activist in 2000. Both were killed by remotely detonated booby-trapped cellphones.

The final and more logistically complex piece of Israel’s plan, attacking an international supply chain to compromise equipment at scale, is something that the United States has done, though for different purposes. The National Security Agency has intercepted communications equipment in transit and modified it not for destructive purposes but for eavesdropping. We know from an Edward Snowden document that the agency did this to a Cisco router destined for a Syrian telecommunications company. Presumably, this wasn’t the agency’s only operation of this type.

Creating a front company to fool victims isn’t even a new twist. Israel reportedly created a shell company to produce and sell explosive-laden devices to Hezbollah. In 2019 the FBI created a company that sold supposedly secure cellphones to criminals—not to assassinate them but to eavesdrop on and then arrest them.

The bottom line: Our supply chains are vulnerable, which means that we are vulnerable. Any individual, country or group that interacts with a high-tech supply chain can subvert the equipment passing through it. It can be subverted to eavesdrop. It can be subverted to degrade or fail on command. And although it’s harder, it can be subverted to kill.

Personal devices connected to the internet—and countries where they are in high use, such as the United States—are especially at risk. In 2007 the Idaho National Laboratory demonstrated that a cyberattack could cause a high-voltage generator to explode. In 2010 a computer virus believed to have been developed by the United States and Israel destroyed centrifuges at an Iranian nuclear facility. A 2017 dump of CIA documents included statements about the possibility of remotely hacking cars, which WikiLeaks asserted could be used to carry out “nearly undetectable assassinations.” This isn’t just theoretical: In 2015 a Wired reporter allowed hackers to remotely take over his car while he was driving it. They disabled the engine while he was on a highway.

The world has already begun to adjust to this threat. Many countries are increasingly wary of buying communications equipment from countries they don’t trust. The United States and others are banning large routers from the Chinese company Huawei because we fear that they could be used for eavesdropping and—even worse—disabled remotely in a time of escalating hostilities. In 2019 there was a minor panic over Chinese-made subway cars that could have been modified to eavesdrop on their riders.

It’s not just finished equipment that is under the scanner. More than a decade ago, the US military investigated the security risks of using Chinese parts in its equipment. In 2018 a Bloomberg report revealed US investigators had accused China of modifying computer chips to steal information.

It’s not obvious how to defend against these and similar attacks. Our high-tech supply chains are complex and international. It didn’t raise any red flags to Hezbollah that the group’s pagers came from a Hungary-based company that sourced them from Taiwan, because that sort of thing is perfectly normal. Most of the electronics Americans buy come from overseas, including our iPhones, whose parts come from dozens of countries before being pieced together primarily in China.

That’s a hard problem to fix. We can’t imagine Washington passing a law requiring iPhones to be made entirely in the United States. Labor costs are too high, and our country doesn’t have the domestic capacity to make these things. Our supply chains are deeply, inexorably international, and changing that would require bringing global economies back to the 1980s.

So what happens now? As for Hezbollah, its leaders and operatives will no longer be able to trust equipment connected to a network—very likely one of the primary goals of the attacks. And the world will have to wait to see if there are any long-term effects of this attack and how the group will respond.

But now that the line has been crossed, other countries will almost certainly start to consider this sort of tactic as within bounds. It could be deployed against a military during a war or against civilians in the run-up to a war. And developed countries like the United States will be especially vulnerable, simply because of the sheer number of vulnerable devices we have.

This essay originally appeared in The New York Times.

LongNowRick Prelinger

Rick Prelinger

2 special screenings of a new LOST LANDSCAPES film by Rick Prelinger will be on Monday 12/9/24 and Tuesday 12/10/24 at the Herbst Theater. Tickets will be released soon and Long Now Members can reserve a pair of tickets on either night!

Each year LOST LANDSCAPES casts an archival gaze on San Francisco and its surrounding areas. The film is drawn from newly scanned archival footage, including home movies, government-produced and industrial films, feature film outtakes and other surprises from the Prelinger Archives collection and elsewhere.

,

Planet DebianJonathan McDowell: The (lack of a) return-to-office conspiracy

During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.

I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.

Real estate value

Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.

Occupancy levels

That said, once you have anyone in the building the equation changes. If you’re having to provide power, heating, internet, security/front desk staff etc, you want to make sure you’re getting your money’s worth. There’s no point heating a building that can seat 100 for only 10 people present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems about ensuring there are enough desks for everyone who might turn up on a certain day, and you’ve ruled out the option of company/office wide events.

Coexistence builds relationships

As a remote worker I wish it wasn’t true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific “teambuilding” style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it’s much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I’m just not in the office regularly enough. You might not care about this (“I just need to put my head down and code, not talk to people”), but don’t discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn’t ready for it yet).

Coexistence allows for unexpected interactions

People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.

It’s easier for bad managers to manage bad performers

Again, this falls into the category of things that shouldn’t be true, but are. Remote working has increased the ability for people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That’s hard though, there are (rightly) a bunch of rights workers have (I’m writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.

Summary

Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.

Cryptogram Hacking the “Bike Angels” System for Moving Bikeshares

I always like a good hack. And this story delivers. Basically, the New York City bikeshare program has a system to reward people who move bicycles from full stations to empty ones. By deliberately moving bikes to create artificial problems, and exploiting exactly how the system calculates rewards, some people are making a lot of money.

At 10 a.m. on a Tuesday last month, seven Bike Angels descended on the docking station at Broadway and 53rd Street, across from the Ed Sullivan Theater. Each rider used his own special blue key, a reward from Citi Bike, to unlock a bike. He rode it one block east, to Seventh Avenue. He docked, ran back to Broadway, unlocked another bike and made the trip again.

By 10:14, the crew had created an algorithmically perfect situation: One station 100 percent full, a short block from another station 100 percent empty. The timing was crucial, because every 15 minutes, Lyft’s algorithm resets, assigning new point values to every bike move.

The clock struck 10:15. The algorithm, mistaking this manufactured setup for a true emergency, offered the maximum incentive: $4.80 for every bike returned to the Ed Sullivan Theater. The men switched direction, running east and pedaling west.

Nicely done, people.

Now it’s Lyft’s turn to modify its system to prevent this hack. Thinking aloud, it could try to detect this sort of behavior in the Bike Angels data—and then ban people who are deliberately trying to game the system. The detection doesn’t have to be perfect, just good enough to catch bad actors most of the time. The detection needs to be tuned to minimize false positives, but that feels straightforward.

Worse Than FailureTales from the Interview: Cleaning House

Craig had been an IT manager at an immigration services company for several years, but was ready to move on. And for good reason- Craig had suffered his share of moronic helldesk nonsense and was ready to let someone else deal with it. This meant participating in interviews for his replacement.

Craig had given a rather generous three months notice, and very quickly the remaining three months were taken up with interviewing possible replacements. Each interview followed the same basic pattern: Craig would greet the candidate, escort them down to the fish-bowl style conference room in the center of the office, where a panel of interviewers ran through a mix of behavioral and technical questions. The interviews were consistently disappointing, and so they'd move on to the next candidate, hoping for an improvement.

After the first few interviews, he started making up questions about potentially horrible IT related disasters. "You see an executive using scissors to cut the Ethernet cable. When pressed, they explain that they want their connection to be wireless. What do you do?" "It's the holiday season, and you see someone trying to extend Christmas lights in the break room using a suicide cable, what do you do?" "You discover one of the technicians has been hiding a bottle of whiskey in the server room. What do you do?"

This kept Craig entertained, but didn't get them any closer to hiring any of these candidates.

One day, they brought in another candidate, and Craig ran the standard interview. His mind wasn't really on the interview- the candidate's resume wasn't the best they'd seen, and it took only a few minutes to establish that they probably weren't the best fit for the role. So Craig spent the time thinking more about whatever absurd question he was going to ask than what was going on in front of him.

His mind drifted off, and his eyes wandered around the office. They strayed to the corner office, also fishbowl style, where the CEO sat. And that's when Craig realized he wasn't going to need to make anything up for today's interview.

"What would you do if you saw a member of staff washing their keyboard with Evian mineral water, while sitting at their desk, their computer still on and keyboard still plugged in?"

The candidate was bemused, and just sat silently. For a long beat, they just watched Craig. Craig, obligingly, pointed back to the CEO's office, where the CEO was in the process of doing exactly what Craig had described.

The candidate took in the scene. Saw the placard announcing that as the CEO's office. Saw not just one, but two open bottles of Evian. Saw the water spreading everywhere, as the CEO hadn't considered things like "have some paper towels on hand".

The candidate turned back to Craig, and eloquently shrugged. There was a world-weariness in the shrug, that spoke to long experience with situations like this. It was the shrug of an IT manager that was going to keep a healthy stock of replacement keyboards, and never ever let the CEO have a laptop.

In the end, it was that candidate who got the job, not because they had the best interview, or the best resume, but because they knew what they were getting in to, and were ready to deal with it.

The keyboard, however, wasn't so lucky. "How else was I supposed to get the breadcrumbs out of it?" the CEO asked while Craig replaced the keyboard.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. ProGet costs less than half of Artifactory and is just as good. Our easy-to-read comparison page lays out the editions, features, and pricing of the different editions of ProGet and Artifactory.Learn More.

365 TomorrowsRe:Life

Author: Julian Miles,  Staff Writer “Time of death: five twenty-one.” Ben glances away from the clock as the doors of the operating theatre swing open. Three figures in grey suits enter. Following behind them is a cadaver drone. The foremost points to the body on the table. “Ours.” Ben’s about to obstruct the intruders when […]

The post Re:Life appeared first on 365tomorrows.

,

Planet DebianAdnan Hodzic: Effortless Linux backups: Power of OpenZFS Snapshots on Ubuntu 24.04

Linux snapshots? Back in the day (mid 2000’s) ReiserFS was my go to Linux filesystem, it was fast & reliable. But then after its creator...

365 TomorrowsCrack a Few Eggs

Author: Rick Tobin Emeril Ainsley leaned his head forward, studying the finer details of a satellite probe’s scanning transmission. Martian storms were quelled, leaving the target crater clear for deployment. “We’ve got a go, team. Let’s make it count. One try. One win.” Captain Ainsley alerted those in the control center that the moment had […]

The post Crack a Few Eggs appeared first on 365tomorrows.

Rondam RamblingsYes, you can have exactly-once delivery

Introduction
This post is ostensibly about an obscure technical issue in distributed systems, but it's really about human communications, and how disagreements that on the surface appear to be about technical issues can sometimes turn out to actually be disagreements about the meanings of words.  I'm taking the time to write a fairly extensive post about this for two reasons.  First, I'm

,

David BrinSci Fi Updates: Sailing space and sci fi stories that might save us all! (Plus a micro-rant.)

I've been mentioning that the TASAT Project is now up and running! A way that the very nerdiest readers of sci fi tales from the last 100 years might someday use their story-citing powers to save the world!  Drop by to learn how. Or see my blip about it at-bottom.

Just updated and re-released: Project Solar Sail: 21st Century Edition: A collection of stories and essays about the next step in interstellar exploration: Lightships and Sails propelled by lasers or sunlight! Classic stories by Clarke, Asimov, Anderson, Bradbury and Jack Vance, along with new/updated articles by JPL scientists and others, exploring present technologies and future possibilities for sailing the light fantastic. Edited by David Brin and Stephen W. Potts and - originally - Arthur C. Clarke. 

And another classic updated and refreshed. Not genre, but akin.....John Perlin’s newly re-issued tome is a classic. A Forest Journey: The Role of Trees in the Fate of Civilization* is a deeply-moving and persuasive elegy to the vital importance of the natural world - the groves and prairies and seas. Not as alternatives to civilization, but as the lungs and sinews and beating heart that allow us - especially our glittering cities - to stand and gaze upward.


*I cited this one in Earth.


== Sci fi updates ==

SF epic poetry! Homer may be dead but not his spirit, as the mighty literary form of poetical epics lives on. The greatest such living bard would be Frederick Turner, whose topics include Genesis: the terraforming of Mars, or the rise of Artificial Intelligence, or the genetic engineering of our organic successors.


Another: Epoch: A Poetic Psy-Phi Saga, by Dave Jilk, is science fiction in the form of an epic poem, with the first fully human-level artificial intelligence telling its own story as a sort of memoir. The book turns much of current thinking on existential AI risk on its head, and raises some uncomfortable questions about humanity even as it lauds our accomplishments. 


(Another mini epic poem by Ray Bradbury and J.V. Post is in Project Solar Sail!)


And there are updated movies. Oooh. The original is fine, but... I'm okay with plans afoot to remake the fine 80s sci fi flick Enemy Mine, based on Barry Longyear’s exquisite, Hugo-winning novella, which you can find on Amazon. (What I absolutely rebel against is the remake of perfect films. I mean carumba, leave Lawrence of Arabia alone!)


Flash fiction is a lovely exercise in rapid creation on the fly. I am pleased to recommend an allegorical fairy tale about a witch and a gargoyle.



== Sci Fi Roundup! ==


The mighty Kay Kenyon has finished her wonderfully entertaining series. Now available for pre-order, book 4, Keeper of the Mythos Gate, the exciting and moving conclusion to The Arisen Worlds. Publication, September 3. If you haven't checked out the series yet, dip in with this excerpt from Book 1.


Bruce Golden’s Evergreen centers around a mysterious artifact + themes of obsession, revenge, and redemption amid timber jockeys, uncouth frontier towns, and into the heart of an awareness so alien it defies common notions of "intelligent life."


One of the fine authors I’ve mentored in my Out of Time YA series (only teens can teleport through space and time!) is Torion Oey. His latest, a fantasy novel, is The Disgraced Mage.


Tales of the United States Space Force is a new combination of science fiction stories and fact articles about – or related to - America's newest military service branch. Space is critical to the economy and our whole modern way of life, and that makes it a target. Let this volume open some eyes. And one of my classics is included.

 

Winter 1962. A child is discovered in the frozen Oregon woods. Mute and feral, wandering lost, naked and near death… and not entirely human. Nonesuch Man: an illustrated novel  by Steven Elkins.


Two of my out-of-print novels are now re-issued with fine new covers and fresh editing. Earth came in second for a Hugo and is on every “Top Ten Predictive Novels” list you can find. (See below!)


Also Glory Season is a Silverberg/Norton-style adventure on a world where human reproduction has been channeled down wholly new paths… with one of my favorite protagonists, plucky Maia!  The trade paperbacks are luscious. 


Terrific covers and Open Road allowed me to insert about 80 page breaks in Earth that give this edition a really classy look and feel.  Don’t miss free chapters and trailers on my website.



== Stories that predicted well? ==


I mentioned predictive tales? Well, whenever exploring new territory, you might ask the natives. What profession spends a lot of time seeking and extrapolating on 'signals from the future'?


The top 10% of near-future science fiction novels generally contain riffs to portray answers to the questions "If this goes on..." or "What if...?" From John Brunner's astonishingly prophetic Stand on Zanzibar (1968) and The Shockwave Rider (1975) to Frederik Pohl and Ursula K. Le Guin and Nancy Kress... to my own Earth and Existence... to Kim Stanley Robinson's The Ministry of the Future, which has proved so influential that the UN is pondering naming a new agency after it.

 

Start here? 8 Books That Eerily Predicted the Future.   



Which circles us back around to TASAT or There's A Story About That.  


The idea cropped up well over a decade ago. In those days, whenever I was in DC for NASA meetings, I would always stop - on my way to Dulles Airport - at a little agency in McLean Virginia, to give a talk on 'future threats', some of which (alas) have come true. At the third of these talks to the Protector Caste, it occurred to me that these people - mostly super smart and sincere public servants - had very little clue about the vast number of thought-experiments in science fiction that have spun out dramatically dangerous possibilities. Very often about unexpected dangers that loom suddenly, when the present speeds into the future.


I blurted: "Suppose someday you encounter something strange - maybe very strange. You form a committee to look into it and give advice." (I have been on several such 'consultant rolodexes'.) "Shouldn't that committee have access to past ruminations that might have already explored similar ground? Tales that maybe poked at the first assumptions that you might mistakenly make, if you ever face a similar situation?"


The purpose of TASAT is to enroll folks who have read a lot of sci fi tales and who might be able to provide that very service!  See the full explanation at TASAT.org!


Lately, the first beta testers have been citing past tales about tech-sabotage that eerily foresaw the recent "pager caper" wherein explosives got inveigled into an adversary's unsuspecting hands. Citations included Eric Frank Russell's Wasp and Harry Harrison's The Stainless Steel Rat Wants You, but such tales go back at least a century!


The TASAT project has stumbled many times. Turns out we needed Mr. Todd Zimmerman, expert programmer, to finally make it happen. (Thank you, Todd!)


And now we're hoping many of you will try out TASAT in the current beta and give feedback... because who knows? You may be the one to cite a story that shakes a false assumption, and maybe thusly save us all!



== A final grumble ==


Okay, it still kinda hurts. But my tribute to the recently-late Vernor Vinge - my friend and one of the greats of science fiction - can be found here.


So, what’s my grumble? The travesty - also raised by Harry Turtledove - that one of the best and most visionary SF authors of all time – Vernor – was never named Grand Master of SFWA - the Science Fiction & Fantasy Writers Association.  


(Hey, changing SFWA's name from the parochial 'of America' was long overdue.) 


But as for neglecting Vernor - despite many campaigns on his behalf?  No writhing excuse for this dismal spurning is anything other than masturbatory justification of pure bigotry, of the kind that George Orwell described in Homage to Catalonia. The same righteous circular firing squad behavior that demolished the left in the 1930s Spanish Civil War, opening up a path for Hitler & Mussolini. Or frippy fads like Nader and Stein, that led to the destructive presidencies of George W. Bush and Donald Trump, and could do it to us, again. 


Likewise, it is – today – the very essence of self-destruction, narrowing, cauterizing and neutering what should be an inspiring and multi-directional literature of progress.


Was that a Heinlein-like, old man shouting-at-clouds grumble rant? Sure, but prove me wrong, in comments? 


Or how about maybe let's try a gesture that will both re-establish some justice in our field and broaden -- rather than narrow-down -- a progressive, future-seeking coalition? It could begin with a simple act to honor one of the greatest science fiction authors of all time.


Nancy Kress for Grand Master of SF. 


-----------------------------------------------------------


Rant mode off, now.  But always on standby mode. ;-)


Have a great weekend. And check your voter registration.


Planet DebianJamie McClelland: How do I warm up an IP Address?

After years on the waiting list, May First was just given a /24 block of IP addresses. Excellent.

Now we want to start using them for, among other things, sending email.

I haven’t added a new IP address to our mail relays in a while, and things seem to change regularly in the world of email, so I’m curious: what’s the best 2024 way to warm up IP addresses, particularly using postfix?

SendGrid has a nice page on the topic. It establishes the number of messages to send per day. But I’m not entirely sure how to fit messages per day into our setup.

We use round robin DNS to direct email to one of several dozen email relay servers using postfix. And unfortunately our DNS software (knot) doesn’t have a way to add weights to ensure some IPs show up more often than others (much less limit the specific number of messages a given relay should get).

Postfix has some nice knobs for rate limiting, particularly: default_destination_recipient_limit and default_destination_rate_delay

If default_destination_recipient_limit is over 1, then default_destination_rate_delay sets the minimum delay between deliveries to the same domain.

So, I’m starting our IP addresses out at 30m - which prevents any single domain from receiving more than 2 messages per hour. Sadly, there are a lot of different domain names that deliver to the same set of popular corporate MX servers, so I am not sure I can accurately control how many messages a given provider sees coming from a given IP address. But it’s a start.
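For reference, a minimal sketch of the relevant main.cf line (the value mirrors the 30m starting point above and is illustrative, not a recommendation):

# /etc/postfix/main.cf (sketch)
# Insert a pause between deliveries to the same destination; with the
# recipient limit above 1, "destination" effectively means the recipient domain.
default_destination_rate_delay = 30m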

A bigger problem is that messages that exceed the limit hang out in the active queue until they can be sent without violating the rate limit. Since I can’t fully control the number of messages a given queue receives (due to my inability to control the DNS round robin weights), a lot of messages are going to be severely delayed, especially ones with an @gmail.com domain name.

I know I can temporarily set relayhost to point at a different relay and flush the deferred messages; however, as far as I can tell, that doesn’t work with active messages.
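For context, that approach would look roughly like the following (the relay hostname is illustrative, and as noted it only helps the deferred queue):

# temporarily route this instance through an already-warmed relay
postconf -e 'relayhost=[warmed-relay.example.net]'
postfix reload
# ask the queue manager to retry deferred mail now
postqueue -f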

To help mitigate the problem I’m only using our bulk mail queue to warm up IPs, but really, this is not ideal.

Suggestions welcome!

Update #1

If you are running postfix in a multi-instance setup and you have instances that are already warmed up, you can move active messages between queues with these steps:

# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid

After just 12 hours we had thousands of messages piling up. This warm up method was never going to work without the ability to move them to a faster queue.

[Additional update: be sure to reload the postfix instance after flushing the queue so messages are drained from the active queue on the correct schedule. See update #4.]
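With the illustrative config directories used in the snippet above, that reload would be something like:

postfix -c /etc/postfix-warmingup reload   # or -warmedup, whichever instance's queue you touched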

Update #2

After 24 hours, most email is being accepted as far as I can tell. I am still getting a small percentage of email deferred by Yahoo with:

421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply

So I will keep it as 30m for another 24 hours or so and then move to 15m. Now that I can flush the backlog of active messages I am in less of a hurry.

Update #3

Well, this doesn’t seem to be working the way I want it to.

When a message arrives faster than the designated rate limit, it remains in the active queue.

I’m not entirely sure how the timing is supposed to work, but at this point I’m down to a 5m rate delay, and the active messages are just hanging out for a lot longer than 5m. I tried flushing the queue, but that only seems to affect the deferred messages. I finally got them re-tried with systemctl reload. I wonder if there is a setting to control this retry? Or better yet, why can’t messages that exceed the rate delay be deferred instead?

Update #4

I think I see why I was confused in Update #3 about the timing. I suspect that when I move messages out of the active queue it screws up the timer. Reloading the instance resets the timer. Every time you muck with active messages, you should reload.

Planet DebianGunnar Wolf: 50 years of queries

This post is a review for Computing Reviews for 50 years of queries, an article published in Communications of the ACM.

The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (that evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.

The article begins by giving background on information processing before the advent of today’s database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd’s relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that “starts the clock” on 50 years of relational databases.

The second part of the article tells about the growth of the structured English query language (SEQUEL) – eventually renamed SQL – including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, that is, Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.

The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a “winning concept,” as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.

This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap on half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.

365 TomorrowsThe Spoiler

Author: C.R. Kiegle I was a genius inventor and a foolish woman. I was the mortal to transcend the bounds of my own lifespan and invent time travel, the one to beat that final constraint of the universe. I watched the classic plays of the ancient Greeks as they were first performed in Athens, travelled […]

The post The Spoiler appeared first on 365tomorrows.

,

Planet DebianSahil Dhiman: Educational and Research Institutions With Own ASN in India

Another one of the ASN lists. This turned out longer than I expected (which is good). If you want to briefly understand what an ASN is, my Personal ASNs From India post carries an introduction to it.

Now, here are the Educational and Research Institutions with their own ASN in India that I could find:

  • AS2697 Education and Research Network
  • AS9885 NKN Internet Gateway
  • AS23770 Tata Institute of Fundamental Research (used as National Centre for Biological Sciences network)
  • AS38021 Network of Indian Institute of Foreign Trade
  • AS38620 National Knowledge Network
  • AS38872 Indian School of Business
  • AS45340 B.M.S College of Engineering
  • AS55296 National Institute of Public Finance and Policy
  • AS55479 Indian Institute of Technology, Kanpur
  • AS55566 Inter University Centre for Astronomy and Astrophysics
  • AS55824 NKN Core Network
  • AS56056 AMITY-IN
  • AS55847 NKN Edge Network
  • AS58703 Amrita Vishwa Vidyapeetham
  • AS58758 Tata Institute of Fundamental Research (used as Homi Bhabha Centre for Science Education (HBCSE) network)
  • AS59163 GLA University
  • AS59193 Indian Institute of Technology, Hyderabad
  • AS131226 Indian Institute of Technology, Roorkee
  • AS131473 SRM University
  • AS132423 Indian Institute of Technology, Bombay
  • AS132524 Tata Institute of Fundamental Research (used as main campus network)
  • AS132749 Indraprastha Institute of Information Technology, Delhi
  • AS132780 Indian Institute of Technology, Delhi
  • AS132984 Uka Tarsadia University
  • AS132785 Shiv Nadar Institution of Eminence Deemed to be University
  • AS132995 South Asian University
  • AS133002 Indian Institute of Tropical Meteorology
  • AS133233 S.N. Bose National Centre for Basic Sciences
  • AS133273 Tata Institute of Social Sciences
  • AS133308 Indira Gandhi Centre For Atomic Research
  • AS133313 Saha Institute of Nuclear Physics
  • AS133552 B.M.S. College of Engineering
  • AS133723 Institute for Plasma Research
  • AS134003 Centre For Cellular And Molecular Platforms
  • AS134023 Aligarh Muslim University
  • AS134322 Tata Institute of Fundamental Research (used as International Centre for Theoretical Sciences (ICTS) network)
  • AS134901 Indian Institute of Science Education And Research
  • AS134934 Institute For Stem Cell Biology And Regenerative Medicine
  • AS135730 Datta Meghe Institute Of Medical Sciences
  • AS135734 Birla Institute of Technology And Science
  • AS135835 Sardar Vallabhbhai Patel National Police Academy
  • AS136005 Raman Research Institute
  • AS136304 Institute of Physics, Bhubaneswar
  • AS136470 B.M.S. College of Engineering
  • AS136702 Physical Research Laboratory
  • AS137136 Indian Agricultural Statistics Research Institute
  • AS137282 Kalinga Institute of Industrial Technology
  • AS137617 Indian Institute of Management, Ahmedabad
  • AS137956 Indian Institute of Technology, Ropar
  • AS138155 Jawaharlal Nehru University
  • AS138231 Indian Institute of Information Technology, Allahabad
  • AS140033 Indian Institute of Technology, Bhilai
  • AS140118 Indian Institute of Technology Banaras Hindu University
  • AS140192 Indian Institute of Information Technology and Management, Kerala
  • AS140200 Panjab University
  • AS141270 Indian Institute Of Technology, Indore
  • AS141340 Indian Institute Of Technology, Madras
  • AS141477 Indira Gandhi National Open University
  • AS141478 Director National Institute Of Technology, Calicut
  • AS141288 National Institute of Science Education And Research Bhubaneswar
  • AS141507 National Institute of Mental Health And Neurosciences
  • AS142493 Sri Ramachandra Institute Of Higher Education And Research
  • AS147239 Lal Bahadur Shastri National Academy of Administration (LBSNAA)
  • AS147258 Dayalbagh Educational Institute
  • AS149607 National Forensic Sciences University
  • AS151086 Amrita Vishwa Vidyapeetham
  • AS152533 National Institute of Technology, Karnataka

Special Mentions

  • AS132926 Allen Career Institute
  • AS141841 Indian Institute of Hardware Technology Limited

Some observations:

Let me know if I’m missing someone.

Cryptogram Clever Social Engineering Attack Using Captchas

This is really interesting.

It’s a phishing attack targeting GitHub users, tricking them to solve a fake Captcha that actually runs a script that is copied to the command line.

Clever.

Worse Than FailureError'd: A Dark Turn

You may call it equity, or equinox, or whatever woke term you like, but you can't sugarcoat what those of us in the North know: the Southern Hemisphere is stealing the very essence of our dwindling days. Sucking out the insolation like a milkshake and slurping it all across the equator.
Rage, rage against the dying of the light!

Meanwhile. Steven B. reminded us of an Error'd I'm pretty sure we've seen before, but it's too dark in here to find now. "I think Microsoft need a bit more tuning on their Office365 anti-spam filters," he suggests.


 

Jan confirms that broken test-in-production data will never die. "I hope the size of the content of the package is in fact 200mm x 78mm x 61mm, as this is what I ordered."


 

Jozsef thinks the Wise fees don't add up, claiming "The email explains why they chose this amount, but it's still funny, and technically wrong." I agree it's funny but I don't see what's wrong. Is Jozsef arguing that lowering a cost by 0% is not actually lowering anything? I see his position but I consider lowering by 0% to be simply the degenerate case of lowering, just as highering a thing by 0% is a degenerate use of antonyms.


 

Karun R. thinks this form is funny, snickering "When is optional not optional ? Won't let me submit until I entered something in the field."


 

Finally Mike just keeps tilting at this old windmill: "Is encoding finally fixed in the new version? 😂"


 

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsGlitch

Author: Emily Kinsey I was trapped. I awoke from a dreamless sleep with a start, unsure how the fire started. (Although, if you ask me, it was probably my brother’s fault.) Flames licked through the open bedroom door and thick black smoke obscured the lone bedroom window. The fire blazed a jagged scar across the […]

The post Glitch appeared first on 365tomorrows.

,

Krebs on SecurityThis Windows PowerShell Phish Has Scary Potential

Many GitHub users this week received a novel phishing email warning of critical security holes in their code. Those who clicked the link for details were asked to distinguish themselves from bots by pressing a combination of keyboard keys that causes Microsoft Windows to download password-stealing malware. While it’s unlikely that many programmers fell for this scam, it’s notable because less targeted versions of it are likely to be far more successful against the average Windows user.

A reader named Chris shared an email he received this week that spoofed GitHub’s security team and warned: “Hey there! We have detected a security vulnerability in your repository. Please contact us at https://github-scanner[.]com to get more information on how to fix this issue.”

Visiting that link generates a web page that asks the visitor to “Verify You Are Human” by solving an unusual CAPTCHA.

This malware attack pretends to be a CAPTCHA intended to separate humans from bots.

Clicking the “I’m not a robot” button generates a pop-up message asking the user to take three sequential steps to prove their humanity. Step 1 involves simultaneously pressing the keyboard key with the Windows icon and the letter “R,” which opens a Windows “Run” prompt that will execute any specified program that is already installed on the system.

Executing this series of keypresses prompts the built-in Windows PowerShell to download password-stealing malware.

Step 2 asks the user to press the “CTRL” key and the letter “V” at the same time, which pastes malicious code from the site’s virtual clipboard.

Step 3 — pressing the “Enter” key — causes Windows to launch a PowerShell command, and then fetch and execute a malicious file from github-scanner[.]com called “l6e.exe.”

PowerShell is a powerful, cross-platform automation tool built into Windows that is designed to make it simpler for administrators to automate tasks on a PC or across multiple computers on the same network.

According to an analysis at the malware scanning service Virustotal.com, the malicious file downloaded by the pasted text is called Lumma Stealer, and it’s designed to snarf any credentials stored on the victim’s PC.

This phishing campaign may not have fooled many programmers, who no doubt natively understand that pressing the Windows and “R” keys will open up a “Run” prompt, or that Ctrl-V will dump the contents of the clipboard.

But I bet the same approach would work just fine to trick some of my less tech-savvy friends and relatives into running malware on their PCs. I’d also bet none of these people have ever heard of PowerShell, let alone had occasion to intentionally launch a PowerShell terminal.

Given those realities, it would be nice if there were a simple way to disable or at least heavily restrict PowerShell for normal end users for whom it could become more of a liability.

However, Microsoft strongly advises against nixing PowerShell because some core system processes and tasks may not function properly without it. What’s more, doing so requires tinkering with sensitive settings in the Windows registry, which can be a dicey undertaking even for the learned.

Still, it wouldn’t hurt to share this article with the Windows users in your life who fit the less-savvy profile. Because this particular scam has a great deal of room for growth and creativity.

LongNowRoman Krznaric & Kate Raworth


Can we meet the essential needs of all beings within the limits of planetary boundaries? Are there any keys to the answer hidden in our histories?

Social philosopher Roman Krznaric and renegade economist Kate Raworth discuss these provocative questions of how we can survive and even thrive by looking to the past for clues on building regenerative, sustainable economic frameworks for our present and future. The doughnut of Raworth's Doughnut Economics traces the social foundation and the planetary boundaries - and between them lies the environmentally safe and socially just space in which humanity and all other living things can thrive.

Cryptogram FBI Shuts Down Chinese Botnet

The FBI has shut down a botnet run by Chinese hackers:

The botnet malware infected a number of different types of internet-connected devices around the world, including home routers, cameras, digital video recorders, and NAS drives. Those devices were used to help infiltrate sensitive networks related to universities, government agencies, telecommunications providers, and media organizations…. The botnet was launched in mid-2021, according to the FBI, and infected roughly 260,000 devices as of June 2024.

The operation to dismantle the botnet was coordinated by the FBI, the NSA, and the Cyber National Mission Force (CNMF), according to a press release dated Wednesday. The U.S. Department of Justice received a court order to take control of the botnet infrastructure by sending disabling commands to the malware on infected devices. The hackers tried to counterattack by hitting FBI infrastructure but were “ultimately unsuccessful,” according to the law enforcement agency.

Worse Than FailureCodeSOD: A Managed Session

Some time ago, Roald started an internship on a ASP .Net application. It didn't take long to find some "special" code.

    public string RetrieveSessionString(string sessionName)
	{
		try
		{
			return Session[sessionName].ToString();
		}
		catch (NullReferenceException)
		{
			return null;
		}
	}

The Session variable is a session object for this user session. Each request carries a token which allows us to pair a Session with a user, making a cross-request per-user global object. That is what it is - but it's weird that we call the parameter sessionName. Maybe that's just a bad parameter name - it might be better called sessionKey or something like that.

Of course, the real issue here is its null handling. Calling ToString on the null returned for a missing key throws a NullReferenceException, so we handle it just to return a null, thus making future NullReferenceExceptions somebody else's problem. Arguably, an empty string would be a better behavior. Still, I hate it.

But Roald also found this function's evil twin:

	public Dictionary<string, string> RetrieveSessionDictionary(string sessionName)
	{
		try
		{
			return (Dictionary<string, string>)Session[sessionName];
		}
		catch (NullReferenceException)
		{
			return null;
		}
	}

This is the same function, but instead of fetching a string, it fetches a dictionary of string/string pairs. It does the same null handling, but notably, doesn't do any error handling for situations where the cast fails.

And suddenly, this makes more sense. They're using the word "session" in two different contexts. There's the Session - a series of HTTP requests sharing the same token - and there's a user's session - settings which represent a unit of work. They're storing a dictionary representing a session in the Session object.

Which leaves this code feeling just… gross. It makes sense, and aside from the awful null handling, I understand why it works this way. It's just awkward and uncomfortable and annoying. I dislike it.

Also, functions which are named RetrieveBlahAsType are generally an antipattern. Either there should be some generics, or type conversions should be left to the caller - RetrieveSession(sessionName).ToString() is clearer with its intent than RetrieveSessionString(sessionName). Maybe that's just my hot take - I just hate it when functions return something converted away from its canonical representation; I can do that myself, thank you.
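For what it's worth, a minimal sketch of the generic flavor hinted at above might look like this (the method and key names are illustrative, not from the submitted codebase):

public T RetrieveSession<T>(string sessionKey) where T : class
{
	// Returns null when the key is absent or the stored value is not a T,
	// without using a catch block for control flow.
	return Session[sessionKey] as T;
}

Callers then make the conversion explicit at the call site, e.g. RetrieveSession<string>("user") or RetrieveSession<Dictionary<string, string>>("settings").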

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianVasudev Kamath: Note to Self: Enabling Unified Kernel Image on Debian

Note

These steps may not work on your system if you are using the default Debian installation. This guide assumes that your system is using systemd-boot as the bootloader, which is explained in the post linked below.

A unified kernel image (UKI) is a single executable that can be booted directly from UEFI firmware or automatically sourced by bootloaders with little or no configuration. It combines a UEFI boot stub program like systemd-stub(7), a Linux kernel image, an initrd, and additional resources into a single UEFI PE file.

systemd-boot already provides a hook for kernel installation via /etc/kernel/postinst.d/zz-systemd-boot. We just need a couple of additional configurations to generate the UKI image.

Installation and Configuration

  1. Install the systemd-ukify package:

    sudo apt-get install systemd-ukify
    
  2. Create the following configuration in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    This configuration specifies how to generate the UKI image for the installed kernel and which generator to use.

  3. Define the kernel command line for the UKI image. Create /etc/kernel/uki.conf with the following content:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    
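The /etc/kernel/cmdline file referenced above simply holds the kernel command line to be embedded in the UKI. As a rough sketch - borrowing the options visible in the bootctl list output further down and adjusting root= for your own disk - it could contain something like:

systemd.gpt_auto=no quiet root=LABEL=root_disk ro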

Generating the UKI Image

To apply these changes, regenerate the UKI image for the currently running kernel:

sudo dpkg-reconfigure linux-image-$(uname -r)

Verification

Use the bootctl list command to verify the presence of a "Type #2" entry for the current kernel. The output should look similar to this:

bootctl list
      type: Boot Loader Specification Type #2 (.efi)
     title: Debian GNU/Linux trixie/sid (2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi) (default) (selected)
        id: 2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi
    source: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi
  sort-key: debian
     linux: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi
   options: systemd.gpt_auto=no quiet root=LABEL=root_disk ro systemd.machine_id=2d0080583f1a4127ac0b073b1a9d3e61

      type: Boot Loader Specification Type #2 (.efi)
     title: Debian GNU/Linux trixie/sid (2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi)
        id: 2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi
    source: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi
  sort-key: debian
     linux: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi
   options: systemd.gpt_auto=no quiet root=LABEL=root_disk ro systemd.machine_id=2d0080583f1a4127ac0b073b1a9d3e61

      type: Automatic
     title: Reboot Into Firmware Interface
        id: auto-reboot-to-firmware-setup
    source: /sys/firmware/efi/efivars/LoaderEntries-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f

Cleanup and Reboot

Once the "Type #2" entries are generated, remove any "Type #1" entries using the bootctl unlink command. After this, reboot your system to boot from the UKI-based image.

Future Considerations

The primary use case for a UKI image is secure boot. Signing the UKI image can also be configured in the settings above, but this guide does not cover that process as it requires setting up secure boot on your system.

365 TomorrowsThe Tomb

Author: Rosa May M. Bayuga It was one of those days when she thought she had a great sense of smell. Freshly-baked bread, raindrops, laughter, screams and wounds and hurts, she could smell them all. She could smell the smoke from the pyre of fallen leaves that her father poked with a stick in the […]

The post The Tomb appeared first on 365tomorrows.

,

Cryptogram Remotely Exploding Pagers

Wow.

It seems they all exploded simultaneously, which means they were triggered.

Were they each tampered with physically, or did someone figure out how to trigger a thermal runaway remotely? Supply chain attack? Malicious code update, or natural vulnerability?

I have no idea, but I expect we will all learn over the next few days.

EDITED TO ADD: I’m reading nine killed and 2,800 injured. That’s a lot of collateral damage. (I haven’t seen a good number as to the number of pagers yet.)

EDITED TO ADD: Reuters writes: “The pagers that detonated were the latest model brought in by Hezbollah in recent months, three security sources said.” That implies supply chain attack. And it seems to be a large detonation for an overloaded battery.

This reminds me of the 1996 assassination of Yahya Ayyash using a booby trapped cellphone.

EDITED TO ADD: I am deleting political comments. On this blog, let’s stick to the tech and the security ramifications of the threat.

EDITED TO ADD (9/18): More explosions today, this time radios. Good New York Times explainer. And a Wall Street Journal article. Clearly a physical supply chain attack.

EDITED TO ADD (9/18): Four more good articles.

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.15: Updated and New BLP Library

bloomberg terminal

Version 0.3.15 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the fifteenth release since the package first appeared on CRAN in 2016. This release updates to the current version 3.24.6 of the Bloomberg API, and rounds out a few corners in the packaging from continuous integration to the vignette.

The detailed list of changes follow below.

Changes in Rblpapi version 0.3.15 (2024-09-18)

  • A warning is now issued if more than 1000 results are returned (John in #377 addressing #375)

  • A few typos in the rblpapi-intro vignette were corrected (Michael Streatfield in #378)

  • The continuous integration setup was updated (Dirk in #388)

  • Deprecation warnings over char* where C++ class Name is now preferred have been addressed (Dirk in #391)

  • Several package files have been updated (Dirk in #392)

  • The request formation has been corrected, and an example was added (Dirk and John in #394 and #396)

  • The Bloomberg API has been upgraded to release 3.24.6.1 (Dirk in #397)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

LongNowSaints Without a Cause


In the Italian town of Assisi, an ancient cathedral marks the place where St. Francis abandoned his noble raiments for the simple habit of a poor monk. Inside, not far from the relics of saints and former popes, you can behold the mummified body of a 15-year-old web designer named Carlo Acutis.

Acutis’ life, by all accounts, was not particularly remarkable — though it was certainly marked by a strong sense of Catholic piety. Born in London in 01991 to wealthy parents, he moved to Milan, where, from the age of three, he would ask to visit churches. He donated his pocket money to any poor person he met; at school, The Guardian reported, he would offer support to “classmates whose parents were going through divorces.” On account of his eagerness for the Eucharist, he was offered first communion early, at just seven years old. “To always be united to Jesus, this is my program of life,” he wrote at the time.

Acutis passed the time putting his web development skills to use for the local diocese. In addition to making church websites, he maintained a database of approved eucharistic miracles — often grisly stories of bleeding altars and bits of bread, the showstopper setpieces of transubstantiation. He maintained one, too, for miraculous appearances of the Virgin Mary, more than 100 in all, faithfully conveying the lady’s (Vatican-approved) messages of apocalyptic woe.

In 02006, however, his story was tragically cut short. Checking into the hospital with what his parents thought was a mild flu, his condition deteriorated dramatically. Eight days later, he requested the last rites, certain he would die. In the end, his own blood had betrayed him. After falling into a deep coma, on October 11, at 5 p.m., he was declared dead from a rare, rapid, and lethal form of leukemia.

Almost immediately, a campaign was launched to make Acutis the Catholic Church’s newest saint. Acutis’ life, his acts, even his devotion to the church, may not have been particularly profound when set against the miraculous accomplishments of other saints. But in him many Catholics saw a certain relatability — especially among a key demographic. “Carlo is a boy of our time — a boy of the internet age,” Bishop Domenico Sorrentino of Assisi said at the unveiling of his tomb. Here was a saint for a new generation, his earthly body displayed on digital live-streams in a gilet, jeans and Nike kicks.

The year after his death, the Italian journalist Nicola Gori wrote a biography, The Eucharist: My Road to Heaven, making the case for his canonization. The Catholic Church has a rule that no person may be beatified — a prerequisite step for sainthood — less than five years after their death; in 02012, barely a year after that deadline expired, an official campaign was launched. Funds poured in for websites, a traveling exhibition, and prayer cards in churches across Italy and around the world. Fast-tracked through the Vatican, less than 12 months later, the Holy See granted his candidacy for sainthood. Acutis was approved for canonization in July 02024. He’s expected to be the centerpiece of Pope Francis’ next grand canonization ceremony in Rome in October 02024 — a clear example of what Francis once called the “saint next door,” the “middle class of holiness.”

Shrine to Carlo Acutis. Photo by Andy Scott, CC BY-SA 4.0

Whatever Acutis’ merits, his pathway to the upper ranks of heaven is a particularly stark example of the modern character of sainthood. The “strategic canonization” of “celebrity saints,” in the words of cultural historian Oliver Bennett, has been on the rise since Pope John Paul II reformed the rules for canonization in 01983 and invented the modern “saint factory.” Gone are the lengthy waiting periods and requirement that numerous miracles be attributed to the saint’s intercession. Gone, too, is the internal culture of skepticism and critique: the Devil’s Advocate, a real position charged with arguing against the merits of saints, was largely abolished. As a result, the historian Valentina Ciciliot writes, “the saint becomes a consumer good, increasingly taking on the features of worldly celebrities.” Today, the making of a new saint more closely resembles a political campaign — with all the attendant costs — than it does anything particularly sacred.

And yet, it would seem there is some growing discomfort within the halls of the Holy See about the modern nature of sanctity. At the same time as Acutis was racing towards sainthood, the Vatican was issuing new guidelines on the recognition of certain miracles, clamping down on the proliferation of new visionaries, miracle workers and pilgrimage sites in the age of social media. The new rules centralize the verification of supernatural occurrences in the office of the papacy with a vigor not seen since the 17th century. The effect, the religious historian Philip Almond writes, is a kind of spiritual retreat — a further “disenchantment” of the world, already made mundane by the relentless march of secularism. “Overall, God will be shown to be minimally directly intervening,” Almond wrote for The Conversation. “A clear declaration of a supernatural event having taken place will virtually never happen” again.

But if we are indeed living in an age of “everyday saints” like Acutis, why is the Vatican so loath to place its seal on these supposed examples of the supernatural? The Catholic public has not lost its appetite for miraculous occurrences and divine manifestations — if anything, it seems to be growing. Hardly a year passes without new accounts of Marian messages, suffering saints, weeping statues, or prophetic encounters that can attract hundreds of adherents.

The reality is, for some time now, the Catholic Church has been undergoing a long, slow shift, playing out on the timescale of centuries: a shift to distance itself from the popular enthusiasms of its most devout parishioners. In fact, the Church’s very conception of the sacred is changing — and with it, perhaps, a whole lot more about the Catholic faith.


In the early days of Christendom, the holy man was hardly an “everyday” figure. The saint of Late Antiquity depended not on benevolence or familiarity for his holiness, but on a strange and radical otherness.

The quintessential holy man, the historian Peter Brown once wrote, was a man of the mountains or desert; a man removed from — indeed, immune to — the ills of settled society. St. Simeon Stylites, a quintessential early saint who set himself apart by festering for 37 years on top of a pillar, was originally a shepherd, “stalking [his] God,” Brown writes, in the “high places of sacrifice.” Marginal places like these were ideal theaters for holiness. Sanctity could be made manifest solely by surviving their hardships, or by wandering into their small settlements and performing works of wonder. Acts of magic — including extreme endurance like that of St. Simeon — were the true mark of sainthood, proof of inhuman abilities that stemmed from their closeness to God.

Figures like Simeon could draw crowds of hundreds to follow them on their wanderings, even if their unusual magnetism could also inspire hostility and suspicion alike from the masses. Almost by necessity, they were, at first, threatening figures — apparitions from beyond the pale; magicians, maybe, or demons masked as holy men. Their apocalyptic wisdom and stark morality flowed forth unprompted, threatening to exorcize the hidden evils of the community. 

But paradoxically, saints like these could quickly become power players in the ancient world — their strangeness and scorn for worldly affairs made them valuable arbiters as decaying empires gave way to bitter factionalism. “Perched on his column, nearer to the demons of the upper air than to human beings, [Simeon] was objectivity personified,” Brown writes. A saint like Simeon could easily stand in for the divine justice of a distant God. Emperors, kings, and commoners alike would seek the counsel of such saints, even (or especially) when they sought to withdraw from society entirely. “The lonely cells of the recluses of Egypt have been revealed, by the archaeologist, to have been well-furnished consulting rooms,” Brown writes.

Icon of Simeon Stylites the Elder with Simeon Stylites the Younger, 01699.

At the outset, the Christian world was too diffuse to support any kind of coherent theology of who should be eligible for such exalted status. A holy man need not even live a particularly virtuous life if his miracles had won acclaim from the masses. The 6th-century St. Sigismund, the first king to be canonized, had his own son drowned and defended incest rife at his court. The principle of vox populi, vox Dei — the voice of the people is the voice of God — reigned supreme. It was only in 00401, at the Council of Carthage, that bishops were assigned the job of recognizing true saints — though it was largely aimed at stemming the tide of heretical cults, not denying the validity of popular saints.

It was inevitable, the medievalist André Vauchez writes, that the “anarchic proliferation of cults would, in the long run, cause problems.” And indeed, by the 12th century, the papacy finally seemed to feel so, as it increasingly fought to assert its version of the faith over the multitudinous expressions found in newly Christianized areas. Pope Alexander III became the first to assert that the recognition of saints was the exclusive right of the Roman pontiff when he rejected the cult of a murdered Swedish monarch, St. Erik. “Even if prodigies and miracles were produced through his intermediary, you would not be permitted to venerate him publicly as a saint without the authorization of the Roman Church,” he wrote to his successor (and, ironically, murderer) in 01171. 

It was in this era — one of political consolidation for the papacy — that the rules governing the recognition of sainthood were first developed in Italy’s papal states, then exported across Europe. These new rules also marked a subtle shift in thinking about sanctity and holiness. The “prodigies and miracles” that defined the earlier generations of the saints were no longer the sole marker of sanctity, nor the most reliable. Under Innocent III, pope from 01198 to 01216, the Church’s inquests into the causes of saints began to de-emphasize the value of wonder-working in favor of examples of everyday faith and good works. The era of the saint-as-magician was ending; that of the saint-next-door had just begun.

Reflecting the Church’s increasingly legal mood, the process for canonization also evolved, by the 16th century, into the form of a judicial trial, allowing for close examination of witness testimonies and miracles. The standard of evidence was not necessarily raised — but the papacy could now more easily find reasons to reject an undesirable claimant. Necessarily, campaigns to win the pope’s personal favor, now an essential for any candidate, became more costly and more involved. The prayer cards bearing Acutis’ tousle-haired image are the legacy of the innovations of this age: the 15th-century campaign for St. Catherine of Siena’s canonization involved the production of thousands of paper strips depicting scenes from the saint’s life, distributed across Italy for public and private veneration.

As the costs of sainthood ballooned, the recognition of new saints became more of a top-down affair. The wealthy elite of Europe used campaigns for canonization to advance their local rivalries or seize economic gains. Powerful monastic orders played the same game within the Vatican. But in the arms race for new canonizations, something else happened too: the number of miraculous deeds ascribed to saintly figures began to boom. The fame of these new saints grew in proportion to their supposed deeds, as did their popular acclaim. It was not long before the likes of the Virgin Mary and John the Baptist were edged out of their prized places in the apses of grand cathedrals, replaced by the likes of St. Francis or St. Antony of Padua. The papacy had built itself a powerful new army in heaven. Whether it could control it was another question entirely.


Despite the efforts of the papacy to overrule local bishops and impose regularity on canonizations, it was never entirely successful at suppressing what it viewed as embarrassing forms of popular enthusiasm. St. Guinefort, a greyhound venerated as a martyr after it was wrongly blamed for the death of a child and drowned, drew condemnation from the Dominican Stephen of Bourbon in the 13th century. His shrine remained popular in rural France to the end of the 19th century, and websites still sing his praises today.

But in 15th century France, the church faced a more profound challenge in the cult of Joan of Arc, a visionary mystic claiming divine authority, rising on a wave of popular sentiment. For Vauchez, Joan is an example of “a sainthood lived and recognized by simple people.” But at the behest of the English court, the Church threw the full weight of its newly developed legal systems at her. Put on trial for heresy in 01431, she was interrogated ruthlessly, about her faith, her loyalty, and her virginity. “It was […] the first process undertaken by the ‘great minds’ from the universities,” Vauchez writes, “in order to prevent a popular cult from being born and developing.” And yet, just 21 years later, at the behest of France, it was applied again in the other direction. Joan was acquitted in 01456 and, amid a surge in French Catholic ultranationalism, finally canonized in 01920.

Still from The Passion of Joan of Arc (01928).

The story of Joan of Arc shows how fickle sainthood can be when it is no longer a reflection of a position above society, but popular acclaim from the powers within it. In the Congo, some Catholics have long desired the canonization of an 18th-century “black Jeanne d’Arc”, Kimpa Vita or Dona Beatriz. In many ways, Beatriz’s life has many parallels to Joan’s story; though born into nobility instead of the lower classes, she too professed to have been a medium for saints, embodying, in her case, the spirit of St. Antony. Instead of English peers and French bishops, she fought against Capuchin monks and Portuguese colonists to renew the Congo and the Catholic Church, remaining nonetheless faithful to the pope in Rome.

But Beatriz, like Joan, at times also challenged Catholic orthodoxy in radical ways. She was iconoclastic and opposed to rites like baptism, equating even crucifixes with unchristian fetish objects; she said that Congo was the holy land and the Virgin Mary a slave. Most scandalously, she mothered a secret child. The result, the historian Benjamin Hendrickx writes, was a highly syncretic Indigenous Catholicism — in other words, a truly local expression of the faith in a land colonized by Christ. Pope Paul VI had the opportunity to consider her cause for canonization in 01966. He rejected it outright.

More and more, as the Catholic Church has spread its wings around the globe, it has encountered difficult cases like these in the very marginal places from which ancient saints once sprung. This happens, not least, because Catholic evangelists have long encouraged them. During the colonization of Mexico, the theologian Hans-Jürgen Prien relates, missionaries drew explicit comparisons between Catholic saints and Indigenous gods; St. Simeon, for example, was equated with Huhueteotl, Xiuhtecuhtli, and Mam — fire deities from the Aztec and Mayan religions often depicted as an old man. The result, he says, is “a pantheon of saviors, Virgins, apostles, and saints”: the Lord of Earthquakes in Cuzco, the Lord of the Sea in Callao, or a black madonna ringed in candles — Our Lady of Candelaria — venerated in the Canary Islands, Guatemala, and Mexico.

As secularism advances in Europe and these corners of the faith become more integral to the Church’s survival, the Vatican has struggled to articulate a coherent view on whether these modes of worship should survive or be suppressed. In Brazil, a passionate cult of worship developed in the late 19th century around a saint known as Escrava Anastácia, a slave woman depicted in an iron mask with bright blue eyes. Her legend tells of her dying as a result of the great cruelty of her masters, nonetheless forgiving them on her deathbed. “Various marginal populations, from street children to gays, saw in her not only someone who understood suffering, but who was endowed with the power of an official saint, with none of an official saint’s off-putting formality and distance,” the anthropologist John Burdick writes in Blessed Anastacia: Women, Race and Popular Christianity in Brazil. In 01984, backed by financing from the state petroleum company, Brazilian Catholics mounted a campaign for Anastácia’s canonization. But within a few years, this too was rejected by the Vatican, which ruled that Anastácia, in all likelihood, had never existed.

This outcome was particularly ironic because of the litany of fictional saints that play a central role in Catholic piety. Of the fourteen “holy helpers”, saints whose intercession is believed to be especially effective, half are believed to be entirely invented. Among them are St. Catherine of Alexandria, from whose severed head milk supposedly flowed; Margaret the Virgin, who was swallowed by a dragon; and St. Christopher, who, due to mistranslation, was for centuries depicted with the head of a dog. The Catholic Church suppressed their cults and eliminated their feast days in a series of 01969 reforms; but such was their popularity that by the early 02000s, several had been reinstated. For the saints of the Church’s European canon, at least, fame is not only enough to win a seat in heaven; it can make a real place in history — or at least real enough to count.


Of course, popular enthusiasms like those for Anastácia and Beatriz don’t always take long-dead figures as their object. The Vatican’s new guidelines on the recognition of miracles are primarily aimed at reining in the celebration of contemporary seers, visionaries, and miracle workers, on which the Church has long struggled to articulate a coherent position.

Take, for example, the case of Audrey Santo, a non-responsive girl at the center of an active local cult in Worcester, Massachusetts, who died aged 23 in 02007. Left mute and paralyzed by a 01987 drowning incident, Santo purportedly bore the signs of stigmata, and communion wafers and holy icons would reportedly bleed oil in her presence. Her fame and popularity grew to such an extent that her body, “strapped to a stretcher behind a picture window in a small house in the middle of a football field”, once drew more than 8,000 pilgrims to Massachusetts’ College of the Holy Cross.

In all the time that Santo was purportedly performing her miracles, neither the local bishop nor the greater Catholic hierarchy took a clear position on her sanctity. Bishop Daniel P. Reilly of Worcester would say only that “the most striking evidence of the presence of God in the Santo home is seen in the dedication of the family to Audrey.” A second inquest into the events in her household has never been completed; her case for canonization, in the works since her death in 02007, also seems to have stalled.

Yet as strange as the devotion to Santo may seem, she is simply the latest in a long line of saints and mystics identified as “victim souls”, individuals whose grave suffering in this world mirrors the work of Christ and guarantees the forgiveness of sins for others in the next. The concept has never been formally approved by the Vatican, but it has long held a place in popular Catholic theology. Purported victim souls include respected figures like St. Therese of Lisieux and Anne Catherine Emmerich, a mystic whose time-traveling visions of Jesus identified the house of Mary in Ephesus and inspired scenes in The Passion of the Christ. But it also includes more problematic figures, like Anneliese Michel, a 25-year-old woman subjected to 67 exorcisms before she starved herself to death. Her priests were later convicted of negligent homicide.

With her inability to consent to her place as an object of devotion, Santo presented a particularly challenging public relations issue for the Catholic Church. “These are constituencies of Catholics that are very active and you don’t want to lose,” Matthew Schmalz, a professor of religious studies at the College of the Holy Cross who studied the Santo phenomena, told me. But “others will say it’s horrible, seeing this woman in her bedroom, celebrating suffering […] There’s this idea that an unattractive image of Catholicism is being portrayed.”

But those saints that can still speak often pose greater challenges still. The Vatican’s new guidelines are also aimed at curbing the proliferation of Marian apparitions, supernatural encounters with the Virgin Mary that have surged in popularity since the mid-19th century. In the archetypical cases — Lourdes in 01858 and Fátima in 01917 — such occurrences often centered on individuals that, like the saints of old, came from the margins of society: the children of shepherds, millers, and farmers. And like those earlier saints, this new crop of visionaries seemed prepared to critique society and the Catholic hierarchy in ways more everyday holy figures could not.

Lúcia Santos, Jacinta and Francisco Marto, the three children who claimed to have encountered the Virgin Mary in Fátima, Portugal, 01917.

One apparition at La Salette in France, which predated those of either Lourdes or Fátima, transmitted critiques of the abuses of farm workers and laborers; the Dutch historian Peter Jan Margry interprets them as a response to the rapid industrialization of the age. Another set of visitations to four schoolchildren in Garabandal, Spain, warned that “many cardinals, bishops and priests are following the road to perdition, and with them they are taking many more souls”; the warning was read by the historian W.A. Christian Jr. as a critique of the reforms of the Second Vatican Council (01962-01965). Neither miracle was ever recognized by the Church;  in fact, Ciciliot writes, since 01950, the Church has only definitively ruled on six such cases, “even though the phenomena often grew without clear guidance and with the involvement of people from many dioceses.”

That’s putting it lightly. Marian apparitions often spawn unrecognized pilgrimage sites that draw thousands of visitors a year, and have given rise to what Margry identifies as an informal network of Marian movements from New York to Japan — often on the basis of a “divergent” theology of redemption. “These […] devotions are by now one of the strongest structural elements in the modern religious field of influence surrounding the Roman Catholic Church,” Margry writes.

That the Church has recognized some of these visitations as miracles and not others, he says, has only encouraged the ultratraditionalist strains in many of these groups, which at their extremes can view the modern papacy as captured by demonic forces. It’s no wonder the Vatican is beginning to question whether the weird world of the miraculous is really on their side.


In past eras of the Church, the Vatican could at times react to heretical critique among the faithful with serious condemnation and the swift imposition of orthodoxy. We need not go back to the inquisitions of the 12th century or the trial of Joan of Arc, either; the Vatican rejected many supposed apparitions between 01950 and 01970, including popular cults like the one at Garabandal.

But since the reforms of the Second Vatican Council, Margry writes, “these devotions are being handled in a much more ambivalent and restrained way.” As in other periods of retraction and reform, the Vatican appears willing to second-guess its past judgements, undercutting its own authority; devotions previously demonized or denied a seal of approval even see their way to canonization, as did Joan of Arc.

It is possible that the Vatican’s new regulations for the recognition of miracles are an effort to arrest this trend, and clamp down on the challenges to Church unity and hierarchy that have emerged in the era of a looser, more expansive dogma. “If you are a bishop, or if you’re a pope, what you do is you have to manage diversity within unity or unity within diversity,” Schmalz said. “The barbed wire fence is when local expressions of the supernatural challenge the church hierarchy in some sense.”

But the race to canonize more saints than in any other time in history seems to suggest that the Church has other motives at play. Rachel M. McCleary and Robert J. Barro, two Harvard economists, have tracked the way that the Vatican has, in recent years, found local heroes to celebrate in areas where competition with Protestant sects and charismatic movements is strongest. These figures need not even be made full saints, McCleary tells me: the prerequisite stage, beatification, limits veneration to the local diocese, making it a perfect means of sanctioning the worship of potentially controversial figures.

And yet, even beatified figures need at least one demonstrated miracle, even if the requirement seems now to be somewhat of an embarrassment for the modern church. For milquetoast figures like Acutis, this often takes the form of miraculous medical recoveries by persons praying to the would-be saint in far-flung lands; most will never ask why Brazilians or Costa Ricans, replete with “everyday saints” in their own backyards, would choose to direct their most desperate prayers to a little-known Italian teenager.  

The cognitive dissonance of a Catholic Church, embarrassed by the miraculous but pumping out new saints at unprecedented speed, leaves many ordinary believers with a somewhat strained relationship to their faith. “Few — and certainly not devotees — can judge what is to be counted as part of the official domain of the Church and its piety, and what is not,” Margry writes.

But the fact is, the Holy See simply cannot survive in a world devoid of the miraculous. Like all religions, the Christian faith has been utterly dependent from its earliest days on what religious scholars often call “enchantment” — a philosophical worldview that allows for the possibility of something beyond the mundane and material. As historian Elizabeth Sutherland writes, holiness itself is “an irreducibly supernatural phenomenon [...]. Participating in the divine life entails a partial sharing in God’s ineffable nature.”

If the Catholic Church imagines a less ineffable world, it necessarily imagines a less holy one. And it’s not only the weird and radical forms of holiness that will be lost. William James, the great American philosopher and psychologist of religion, understood the role of saints as challenging society to better itself, tugging its conscience on the way to greater goodness. A “middle class” of holiness may simply produce a middle class of virtue. If the Church turns its back on its weird and radical roots, dismisses the many rebellions at its margins, or refuses to reckon with many of the most popular — and unorthodox — expressions of the faith, it’s not only the ancient idea of the holy man that will be lost. The saint next door may disappear too.

Krebs on SecurityScam ‘Funeral Streaming’ Groups Thrive on Facebook

Scammers are flooding Facebook with groups that purport to offer video streaming of funeral services for the recently deceased. Friends and family who follow the links for the streaming services are then asked to cough up their credit card information. Recently, these scammers have branched out into offering fake streaming services for nearly any kind of event advertised on Facebook. Here’s a closer look at the size of this scheme, and some findings about who may be responsible.

One of the many scam funeral group pages on Facebook. Clicking to view the “live stream” of the funeral takes one to a newly registered website that requests credit card information.

KrebsOnSecurity recently heard from a reader named George who said a friend had just passed away, and he noticed that a Facebook group had been created in that friend’s memory. The page listed the correct time and date of the funeral service, which it claimed could be streamed over the Internet by following a link that led to a page requesting credit card information.

“After I posted about the site, a buddy of mine indicated [the same thing] happened to her when her friend passed away two weeks ago,” George said.

Searching Facebook/Meta for a few simple keywords like “funeral” and “stream” reveals countless funeral group pages on Facebook, some of them for services in the past and others erected for an upcoming funeral.

All of these groups include images of the deceased as their profile photo, and seek to funnel users to a handful of newly-registered video streaming websites that require a credit card payment before one can continue. Even more galling, some of these pages request donations in the name of the deceased.

It’s not clear how many Facebook users fall for this scam, but it’s worth noting that many of these fake funeral groups attract subscribers from at least some of the deceased’s followers, suggesting those users have subscribed to the groups in anticipation of the service being streamed. It’s also unclear how many people end up missing a friend or loved one’s funeral because they mistakenly thought it was being streamed online.

One of many look-alike landing pages for video streaming services linked to scam Facebook funeral groups.

George said their friend’s funeral service page on Facebook included a link to the supposed live-streamed service at livestreamnow[.]xyz, a domain registered in November 2023.

According to DomainTools.com, the organization that registered this domain is called “apkdownloadweb,” is based in Rajshahi, Bangladesh, and uses the DNS servers of a Web hosting company in Bangladesh called webhostbd[.]net.

A search on “apkdownloadweb” in DomainTools shows three domains registered to this entity, including live24sports[.]xyz and onlinestreaming[.]xyz. Both of those domains also used webhostbd[.]net for DNS. Apkdownloadweb has a Facebook page, which shows a number of “live video” teasers for sports events that have already happened, and says its domain is apkdownloadweb[.]com.

Livestreamnow[.]xyz is currently hosted at a Bangladeshi web hosting provider named cloudswebserver[.]com, but historical DNS records show this website also used DNS servers from webhostbd[.]net.

The Internet address of livestreamnow[.]xyz is 148.251.54.196, at the hosting giant Hetzner in Germany. DomainTools shows this same Internet address is home to nearly 6,000 other domains (.CSV), including hundreds that reference video streaming terms, like watchliveon24[.]com and foxsportsplus[.]com.

There are thousands of domains at this IP address that include or end in the letters “bd,” the country code top-level domain for Bangladesh. Although many domains correspond to websites for electronics stores or blogs about IT topics, just as many contain a fair amount of placeholder content (think “lorem ipsum” text on the “contact” page). In other words, the sites appear legitimate at first glance, but upon closer inspection it is clear they are not currently used by active businesses.

The passive DNS records for 148.251.54.196 show a surprising number of results that are basically two domain names mushed together. For example, there is watchliveon24[.]com.playehq4ks[.]com, which displays links to multiple funeral service streaming groups on Facebook.

Another combined domain on the same Internet address — livestreaming24[.]xyz.allsportslivenow[.]com — lists dozens of links to Facebook groups for funerals, but also for virtually all types of events that are announced or posted about by Facebook users, including graduations, concerts, award ceremonies, weddings, and rodeos.

Even community events promoted by state and local police departments on Facebook are fair game for these scammers. A Facebook page maintained by the police force in Plympton, Mass. for a town social event this summer called Plympton Night Out was quickly made into two different Facebook groups that informed visitors they could stream the festivities at either espnstreamlive[.]co or skysports[.]live.

WHO’S BEHIND THE FAKEBOOK FUNERALS?

Recall that the registrant of livestreamnow[.]xyz — the bogus streaming site linked in the Facebook group for George’s late friend — was an organization called “Apkdownloadweb.” That entity’s domain — apkdownloadweb[.]com — is registered to a Mazidul Islam in Rajshahi, Bangladesh (this domain is also using Webhostbd[.]net DNS servers).

Mazidul Islam’s LinkedIn page says he is the organizer of a now defunct IT blog called gadgetsbiz[.]com, which DomainTools finds was registered to a Mehedi Hasan from Rajshahi, Bangladesh.

To bring this full circle, DomainTools finds the domain name for the DNS provider on all of the above-mentioned sites  — webhostbd[.]net — was originally registered to a Md Mehedi, and to the email address webhostbd.net@gmail.com (“MD” is a common abbreviation for Muhammad/Mohammod/Muhammed).

A search on that email address at Constella finds a breached record from the data broker Apollo.io saying its owner’s full name is Mohammod Mehedi Hasan. Unfortunately, this is not a particularly unique name in that region of the world.

But as luck would have it, sometime last year the administrator of apkdownloadweb[.]com managed to infect their Windows PC with password-stealing malware. We know this because the raw logs of data stolen from this administrator’s PC were indexed by the breach tracking service Constella Intelligence [full disclosure: As of this month, Constella is an advertiser on this website].

These so-called “stealer logs” are mostly generated by opportunistic infections from information-stealing trojans that are sold on cybercrime markets. A typical set of logs for a compromised PC will include any usernames and passwords stored in any browser on the system, as well as a list of recent URLs visited and files downloaded.

Malware purveyors will often deploy infostealer malware by bundling it with “cracked” or pirated software titles. Indeed, the stealer logs for the administrator of apkdownloadweb[.]com show this user’s PC became infected immediately after they downloaded a booby-trapped mobile application development toolkit.

Those stolen credentials indicate Apkdownloadweb[.]com is maintained by a 20-something native of Dhaka, Bangladesh named Mohammod Abdullah Khondokar.

The “browser history” folder from the admin of Apkdownloadweb shows Khondokar recently left a comment on the Facebook page of Mohammod Mehedi Hasan, and Khondokar’s Facebook profile says the two are friends.

Neither MD Hasan nor MD Abdullah Khondokar responded to requests for comment. KrebsOnSecurity also sought comment from Meta.

Planet DebianJamie McClelland: Gmail vs Tor vs Privacy

A legit email went to spam. Here are the redacted, relevant headers:

[redacted]
X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.3 required=5.0 tests=DKIM_SIGNED,DKIM_VALID,
[redacted]
	*  1.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
	*      [185.220.101.64 listed in xxxxxxxxxxxxx.zen.dq.spamhaus.net]
	*  3.0 RCVD_IN_SBL_CSS Received via a relay in Spamhaus SBL-CSS
	*  2.5 RCVD_IN_AUTHBL Received via a relay in Spamhaus AuthBL
	*  0.0 RCVD_IN_PBL Received via a relay in Spamhaus PBL
[redacted]
[very first received line follows...]
Received: from [10.137.0.13] ([185.220.101.64])
        by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-378956d2ee6sm12487760f8f.83.2024.09.11.15.05.52
        for <xxxxx@mayfirst.org>
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 11 Sep 2024 15:05:53 -0700 (PDT)

At first I thought a Gmail IP address was listed in Spamhaus - I even opened a ticket. But then I realized it wasn't the last hop that Spamhaus was complaining about, it's the first hop, specifically the IP 185.220.101.64, which appears to be a Tor exit node.
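
To make those Spamhaus hits a little more concrete: a DNSBL check just reverses the octets of the client IP taken from that first Received header and looks the result up under the blocklist zone; an answer in 127.0.0.0/8 means "listed". Here is a minimal sketch in Python (the return-code comments reflect my reading of Spamhaus's published conventions, and Spamhaus tends to refuse queries arriving via big public resolvers, so results may vary):

import socket

def dnsbl_lookup(ip, zone="zen.spamhaus.org"):
    """Query a DNS blocklist for an IPv4 address.

    The octets are reversed and prefixed to the zone, e.g.
    185.220.101.64 -> 64.101.220.185.zen.spamhaus.org.
    An A record in 127.0.0.0/8 means the address is listed;
    NXDOMAIN means it is not (or the query was refused).
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        return socket.gethostbyname(query)
    except socket.gaierror:
        return None

# The first-hop address from the headers above:
print(dnsbl_lookup("185.220.101.64"))
# 127.0.0.2/3 suggest SBL/CSS, 127.0.0.4-7 XBL, 127.0.0.10/11 PBL.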

The sender is using their own client to relay email directly to Gmail. Like any sane person, they don’t trust Gmail to protect their privacy, so they are sending via Tor. But WTF, Gmail is not stripping the sending IP address from the header.

I’m a big fan of harm reduction and have always considered using your own client to relay email with Gmail as a nice way to avoid some of the surveillance tax Google imposes.
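
For context, "using your own client to relay email with Gmail" means authenticated submission to smtp.gmail.com, roughly like the sketch below (Python's smtplib, with placeholder addresses and an app password standing in for real credentials). Whatever address the client connects from (home IP, VPN, or Tor exit) is what ends up in that first Received line:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@gmail.com"            # placeholder account
msg["To"] = "friend@example.org"
msg["Subject"] = "hello"
msg.set_content("Sent from my own client, relayed through Gmail.")

# Authenticated submission; the connecting IP ends up in the
# first Received header of the delivered message.
with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
    smtp.starttls()
    smtp.login("me@gmail.com", "app-password")  # placeholder credentials
    smtp.send_message(msg)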

However, it seems that if you pursue this option you have two unpleasant choices:

  • Embed your IP address in every email message or
  • Use Tor and have your email messages go to spam

I suppose you could also use a VPN, but I doubt the IP reputation of most VPN exit nodes is going to be any better than Tor's.

Worse Than FailureCodeSOD: String in your Colon

Anders sends us an example which isn't precisely a WTF. It's just C handling C strings. Which, I guess, when I say it that way, is a WTF.

while(modPath != NULL) {
    p = strchr(modPath, ':');
    if(p != NULL) {
      *p++ = '\0';
    }
    dvModpathCreate(utSymCreate(modPath));
    modPath = p;
} while(modPath != NULL);

We start with a variable called modPath which points to a C string. So long as modPath is not null, we're going to search through the string.

The string is in a : separated format, e.g., foo:bar:goo. We want to split it. This function does this by being very C about it.

It uses strchr to find the address of the first colon. If we think about this in C strings, complete with null terminators, it looks something like this:

"foo:bar:goo\0"
 ^  ^
 |  |
 |  p
modPath

We then replace : with \0 and increment p, doing a "wonderful" blend of using the dereference operator and the post-increment operator and an assignment to accomplish a lot in one line.

"foo\0bar:goo\0"
 ^    ^
 |    |
 |    p
modPath

So now, modPath points at a terminated string foo, which we then pass down through some functions. Then we set it equal to p.

"foo\0bar:goo\0"
      ^
      |
      p
      modPath

This repeats until strchr doesn't find a :, at which point it returns NULL. Our loop is guarded by a check that modPath (which gets set equal to p) can't be null, so that breaks us out of the loop.

And enters us immediately into another, single line loop with no body, which immediately exits as well. I suspect that originally this was written as a do{}while, and then someone realized that it could just be a plain while{}, and completely forgot to remove the second while clause.

This is, honestly, a pretty common idiom in C. It's arguably wrong to even put it here; aside from the bad while clause, you'll see this kind of string handling all the time. But, maybe, that is the WTF.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsVisitation

Author: Soramimi Hanarejima I’m getting a cup of coffee in the office kitchen when suddenly there you are—as the image of yourself you’ve created by projecting your thoughts into mine—fully occupying my attention the way you always do: with emphatic presence. This time in the form of a hard grimace at the garish posters blaring […]

The post Visitation appeared first on 365tomorrows.

,

Planet DebianBenjamin Mako Hill: My Chair

I realize that because I have several chairs, the phrase “my chair” is ambiguous. To reduce confusion, I will refer to the head of my academic department as “my office chair” going forward.

David BrinReagan would spit in these "republicans'" eyes!

I've been critical of modern Democrats' narrow view of the art of polemics. While the Lincoln Project lands some zingers - and I have modest hopes for the earthy wit of Tim Walz - there appears to be little aptitude for what Sherman did in an earlier phase of this ongoing U.S. Civil War... which is corner the confederacy, so that saner members might look around, smack their foreheads and snap out of the trance!

I offer such tactics, elsewhere. But here's a doozy!  I found it while sorting through old papers my brother stored for me. One memento of the last century stood out. A political flyer from Ronald Reagan's 1970 campaign to be re-elected governor of California.

This special, midweek posting reprints pages from that 1970 flyer!  

Read these excerpts! Because while Reagan's positions were conservative, in the context of that time - (you liberals will find much to disagree with!*) - you'll also be shocked by how mainstream and... progressive... were so many RR talking points.

What it shows... especially to any conservatives who scan here... is that consensus stances of the right wing of the Republican Party were hugely different back then, than those of today's MAGA-ism. 

In many cases diametrically opposite!

At least scan the TITLES to each page! Back then it was consensus among even right wing republicans that the state should protect the helpless, improve the environment and reduce pollution, develop new cleaner energy supplies, invest more in education and universities and public transport/rail, reduce drug costs, protect the consumer, and so on!


And that's just Reagan as Governor in 1970. How about the President who later confronted an aggressive Soviet Kremlin? Do you think he would have any truck with today's KGB-loving GOP, its adoration of Kim Jong Un and Vlad Putin's cadre of 5000 "ex" commissars, now spreading a new Evil Empire? 

Or today's MAGA all-out war vs all fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror?  Ronnie would be outraged!


TO BE CLEAR: I still have grudges toward Reagan, especially his betrayal of America during the Iranian Hostage Crisis. And ignoring AIDS, making it far worse. Union-busting. And his insane Drug War! And sending us down the road of never-ever-correct Supply-Side voodoo 'economics.'

Still, show these pages to others! Read passages aloud to your MAGA aunts. (Your uncles are hopeless, of course.) Look at what Ronald Reagan bragged about achieving in his first term as California governor!  And remember that California was far more conservative then. And RR was from the party's 'far-right.' 

You may need to copy these jpegs onto your desktop, to read them. But read them!


Liberals today may snort at some 'conservative' positions, like school choice. Fine, there were then - and remain - legit arguments. (And forced school busing was not a wise position, either!)  Again, I never claimed Reagan wasn't Reagan!

Still, on balance the campaign brags from this 1970 campaign flyer show a Reagan who would hate Trumpism! It is today's MAGAs and Kremlin-lovers who are the RINOs!  Republicans in Name Only.


 

* All right, I admit it. I did not scan-in and include here every page of the campaign flyer. The page Crisis on Campus, for example, was totally law-n-orderly hostile toward the youth rebels who were then raging at universities. (Remember 1970 at all? Nixon? Cambodia? Vietnam? The then-recent, wretched horrors of 1968? OMG it's a wonder we survived.) It shows that campus protests and even occasional mini riots are nothing new - sometimes with righteous cause and sometimes driven by sanctimony-fetishism. RR's reflexive hostility - when campus protest truly was righteous - showed his dark side.

So? Again, I don't claim Reagan wasn't Reagan!  

What I do claim is that he wouldn't have anything to do with the monstrous, undead, vampire were-elephant that has taken over the Party of Lincoln, on behalf of the same Kremlin-led evil empire Reagan despised, masked only by a few altered symbols and Czarist lapel pins. Reagan would recognize Putin's obvious KGB scheme for what it is, and he'd denounce fools who fall for it.

He'd also recognize Trump as a painted carnival barker and traitor-monster.

Hey, I don't like Liz Cheney either!  But I admit... she's an American.

Okay now, this next one is unbelievable! Efficient mass transit and rail service to augment highways! Safe cars and roads! "A well-balanced transportation system." And more...


What's to be made of all this? Just a silly old guy (me), rummaging mementos and yattering about old-timey, wordy political flyers that bear no relevance to our republic's current fight for its life?

Bah and posh and fiffle-faffle! Dig it, Ronald Reagan still has resonance with many of our neighbors!  THIS is what he stood for, before he went senile. And maybe 2/3 of it was moderate-consensus... American

Consensus that today's gone-mad GOP now undermines at all levels and in all ways, attacking all the ways that America actually became -- and continues becoming -- great.

You - yes, you - could use this!

Go and do so.


Chaotic IdealismHow to get the whole story

A lot of people complain about media bias–articles with an agenda. The same story gets covered multiple ways; how do you know who’s reporting the most accurately?

One thing you shouldn’t do is try to find “the best” news source and trust that source exclusively. Yes, some sources are more accurate than others, and some are more biased than others (these are two different things; accuracy is whether they have the right information, bias is what spin they are putting on the information they are presenting). But every source you encounter will be less than perfect, because–surprise, gasp!–they are all run by human beings. (Yes, even AI. It’s trained on human work; it counts.)

Instead, you should rely on a lot of different news sources. There are some aggregator web sites that do their best to enable this practice, such as AllSides and Ground News, which present the same story as told by multiple sources and categorize them by political slant. Even when there are inaccurate or biased news sources, when you read from more than one source you can see where the differences are. Keep your eye open for:

1. Level of falsifiability.
An accurate news source will present facts that are easy to check, if not for you, then at least for someone. If it’s possible to ask yourself, “What evidence could I find to disprove this statement?” and immediately come up with a simple test, perform the test, and disprove or fail to disprove the statement, that is a good news source.

(The worst news sources will actually make inaccurate claims that are easily falsified–but they soon lose their reputation with everyone but their intended audience, which is often strongly politically polarized. One can identify and avoid these relatively easily.)

Let’s say that the story is, “Olafistan has bombed Svenoslakia and hit a hospital.” The best news sources are the ones that report evidence of the bombing and its target in a factual manner that can be verified by other people. For example: “This video from Svenoslakia shows Olafistanian bombers flying in the airspace above the hospital.” Or, “Satellite imagery shows multiple explosions at and near the hospital.” If the video can be analyzed and the satellite imagery reviewed, then that’s a falsifiable statement–it is possible to find evidence that the news source is wrong.

Unfalsifiable claims are those that, when they are made, are difficult to disprove. For example, “Olafistan did not intend to bomb the hospital.” That’s a statement of intent, and trying to disprove it is difficult. Instead, a responsible journalist will say something like, “The Olafistanian prime minister said that his military did not intend to bomb the hospital.” When covering unfalsifiable claims, a reliable news source won’t try to report them as fact at all; rather, they will tell us who is making those claims. Which leads us to:

2. Openness about sources.
If your news article doesn’t tell you where it got its news, or if the source is difficult or impossible to verify or contact, that’s highly suspect. The people that newspaper reporters interview when they get their information are human, and thus may be biased or inaccurate. Knowing who those sources are tells you about what biases they may have and what the limits of their knowledge might be, and lets you judge how reliable their statements are.

Take a neutral statement like, “The Svenoslakian president held a press conference in which he said that Olafistan has bombed a hospital.” That’s the sort of statement you’ll find in a high-quality article. It gives us the source of the information–the president of Svenoslakia, who may have a vested interest in branding Olafistan as the type of country that will bomb hospitals–but it neither calls him a liar nor assumes that he is telling the truth. An article biased toward Svenoslakia might not even give the source, expecting you to take the Svenoslakian president’s word as true by default, whereas the Olafistan side may emphasize the reasons why Svenoslakia’s president might falsely make that claim.

3. Selective Reporting
A reporter can interview people, read documents, and accurately report what they learned, but if they choose biased sources without balance, they can still end up with a biased story.

Choosing to report only some of the facts can severely bias a story even when all of the facts that are reported are true. If, when you look at multiple versions of the same story, one version leaves things out, check to make sure it’s editing for brevity, or is an earlier article written before the new information became available, rather than telling only one side of the story.

In some situations, reporters simply can’t reach people to interview for both sides of a story. In that situation, the reporter has to say, “We couldn’t talk to this source, though we tried.” Cliched, but helpful: It reminds the reader that there is another side to the story.

Remembering to report all sides of a story is particularly important when one side of the story is an authority figure–say, a police officer or a military leader–and the other side is a civilian, especially of a socially or politically disadvantaged minority. There are limits; one doesn’t need to interview a flat-Earther every time one interviews a geologist. But if there is any credible other side, balance demands that both be given their say.

(Yes, even with terrible criminals. Interview both the defense and the prosecution. Any story that just sensationalizes the crime should be seen as suspect. See below.)

4. Use of emotion and emotionally-sensitive topics
Emotional reasoning shuts down logical thinking. This is a good thing in situations where we have to make quick decisions, but not when we’re analyzing a news story. Using emotional reasoning and emphasizing emotionally-loaded aspects of a story are signs that the reporter is trying to draw more readers, wants to make you shut down your mind and lead with your emotions, or both.

The story about the hospital bombing is ripe for emotional reasoning. A reporter can talk about the bombing in a factual way… but they can also immediately dive into the deaths of patients on the children’s ward. If they do that, they risk shutting down your logic and triggering your anger and pity–and if it turns out that Olafistan is not, in fact, the one who bombed that hospital, your emotions will be a barrier to analyzing the facts and coming to that conclusion.

But the hospital bombing still needs to be covered, and the children’s ward is part of the story. When covering highly-emotional topics, reporters need to be very careful to avoid turning off their readers’ brains. But not all reporters do. If there is no effort made to avoid that emotional brain-shutdown, suspect a biased news source.

Common emotionally sensitive topics include the health and welfare of children, the deaths of children, child sexual abuse, sexual assault, and murder, especially of young people deemed innocent. If these topics are reported in a sensationalized fashion, suspect a biased news source and a reporter writing for clicks and ad revenue rather than accuracy.

5. News/Consumer Interactions
When you read a news story, you’re not a passive observer. You’re actively correlating the information in the story with every other piece of information it connects to in your mind, which means you are bringing your own biases, emotions, and previously-learned ideas to the table and filtering the new information through them. Like a reporter, your brain can ignore sources, emphasize emotional information, and see only some of the whole picture. In some instances, we come to the table with the desire to have a story turn out in a particular way.

Let me give an example here. I am disabled, and I have read a good many news stories about police shootings of disabled people, especially those who are minority race and low-income as well. When I read a story about a police shooting, I am likely to come to it with the assumption that the police officer was in the wrong, that the shooting could have been prevented, and that the victim was likely disabled and/or minority race. When the story doesn’t align with those ideas, I take more convincing than when it does. I need to be aware of that bias.

Cognitive biases to watch out for when reading news
Those biases can’t be erased; our brains use them as shortcuts to analyze things quickly. But what we can do is be aware of them and take them into account. That won’t work perfectly, either, but it’s part of the solution. Here are some very common ones that we should be aware of:

  1. Just-world fallacy: The belief that, if something bad has happened to someone, they must deserve it. Leads to vilifying the victim of a crime or natural disaster (“They should have known better than to build in a flood plain!”).
  2. Black-and-white thinking: The desire to have villains and heroes, and to side with the heroes. Leads to minimizing the misdeeds of designated heroes and emphasizing those of the designated villains (“They must have had a good reason to bomb the hospital.”).
  3. Confirmation bias: The tendency to believe things that align with your already-established beliefs and reject things that don’t.
  4. Compassion collapse: The tendency to have more compassion for individuals and small groups of identifiable people than for large groups of faceless victims (“a million is a statistic”).
  5. Zero-risk bias: The preference for reducing a small risk to zero, while ignoring larger risks (“To be completely safe from vaccine injury, I’ll avoid being vaccinated.”)
  6. Availability bias: The tendency to overestimate the occurrence of highly memorable events; for example, a swimmer takes precautions against being bitten by a shark, but not against drowning in the ocean.
  7. Affinity bias: Sympathizing more with people who are more like yourself. (“I feel for that murder victim; they liked jazz just like I do.”)

These cognitive tendencies are nothing to be ashamed of. They are just products of the way our brains work to process information–a flood of information–quickly and effectively. If we stopped to analyze everything fully, we would be completely overwhelmed by information (as autistic people, who may have fewer cognitive shortcuts available, are often overwhelmed by information–especially sensory input). So it’s important to be aware of the shortcuts your brain takes and how they interact with the news you read, to get the whole story.

Planet DebianJonathan Dowland: ouch, part 2

Things developed since my last post. Some lesions opened up on my ankle which was initially good news: the pain substantially reduced. But they didn’t heal fast enough and so medics decided on surgical debridement. That was last night. It seemed to be successful and I’m in recovery from surgery as I write. It’s hard to predict the near-future, a lot depends on how well and fast I heal.

I’ve got a negative-pressure dressing on it, which is incredible: a constantly maintained suction to aid in debridement and healing. Modern medicine feels like a sci fi novel.

Planet DebianJonathan Dowland: ouch, part 3

The debridement operation was a success: nothing bad grew afterwards. I was discharged after a couple of nights with crutches, instructions not to weight-bear, a remarkable, portable negative-pressure "Vac" pump that lived by my side, and some strong painkillers.

About two weeks later, I had a skin graft. The surgeon took some skin from my thigh and stitched it over the debridement wound. I was discharged same-day, again with the Vac pump, and again with instructions not to weight-bear, at least for a few days.

This time I only kept the Vac pump for a week, and after a dressing change (the first time I saw the graft), I was allowed to walk again. Doing so is strangely awkward, and sometimes a little painful. I have physio exercises to help me regain strength and understanding about what I can do.

The donor site remained bandaged for another week before I saw it. I was expecting a stitched cut, but the surgeons have removed the top few layers only, leaving what looks more like a graze or sun-burn. There are four smaller, tentative-looking marks adjacent, suggesting they got it right on the fifth attempt. I'm not sure but I think these will all fade away to near-invisibility with time, and they don't hurt at all.

I've now been off work for roughly 12 weeks, but I think I am returning very soon. I am looking forward to returning to some sense of normality. It's been an interesting experience. I thought about writing more about what I've gone through, in particular my experiences in Hospital, dealing with the bureaucracy and things falling "between the gaps". Hanif Kureishi has done a better job than I could. It's clear that the NHS is staffed by incredibly passionate people, but there are a lot of structural problems that interfere with care.

Worse Than FailureCodeSOD: String Du Jour

It's not brought up frequently, but a "CodeSOD" is a "Code Sample of the Day". Martin brings us this function, entitled StringOfToday. It's in VB.Net, which, as we all know, has date formatting functions built in.

Public Function StringOfToday() As String
	Dim d As New DateTime
	d = Now

	Dim DayString As String
	If d.Day < 10 Then
		DayString = "0" & d.Day.ToString
	Else
		DayString = d.Day.ToString
	End If

	Dim MonthString As String
	If d.Month < 10 Then
		MonthString = "0" & d.Month.ToString
	Else
		MonthString = d.Month.ToString
	End If

	Dim YearString As String = d.Year.ToString
	Return YearString & MonthString & DayString
End Function

There's not much new here, when it comes to formatting dates as strings. Grab the day of the month, and pad it if it's less than 10. Grab the month, and pad it if it's less than 10. Grab the year, which will be 4 digits anytime within the last thousand years or so, so we don't need to pad it. Concatenate it all together, and voila: a date string.

Mostly, I just enjoy this because of the name. StringOfToday. It's like I'm in a restaurant. "Excuse me, waiter, what's the string of the day?" "Ah, a piquant 8 digit numeric string, hand concatenated using the finest ampersands, using a bounds checked string type." "Oh, excellent, I'm allergic to null terminators. I'll have that."

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsDown A Shiver

Author: Majoki Generals like to look good. Even in the 34th century. Even after a thousand years of war. They like polish and shine and finely fitted uniforms, so they like me. Their tailor. Otherwise, how could a simple tailor expect to live through the entire Sidereal War. Only the most powerful could dictate who […]

The post Down A Shiver appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: The Book That Broke the World

Review: The Book That Broke the World, by Mark Lawrence

Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366

The Book That Broke the World is high fantasy and a direct sequel to The Book That Wouldn't Burn. You should not start here. In a delightful break from normal practice, the author provides a useful summary of the previous volume at the start of this book to jog your memory.

At the end of The Book That Wouldn't Burn, the characters were scattered and in various states of corporeality after some major revelations about the nature of the Library and the first appearance of the insectile Skeer. The Book That Broke the World picks up where it left off, and there is a lot more contact with the Skeer, but my guess that they would be the next viewpoint characters does not pan out. Instead, we get a new group and a new protagonist: Celcha, who sees angels who come to visit her brother.

I have complaints, but before I launch into those, I should say that I liked this book apart from the totally unnecessary cannibalism. (I'll get to that.) Livira is a bit sidelined, which is regrettable, but Celcha and her brother are interesting new characters, and both Arpix and Clovis, supporting characters in the first book, get some excellent character development. Similar to the first book, this is a puzzle box story full of world-building tidbits with intellectually-satisfying interactions. Lawrence elaborates and complicates his setting in ways that don't contradict earlier parts of the story but create more room and depth for the characters to be creative. I came away still invested in this world and eager to find out how Lawrence pulls the world-building and narrative threads together.

The biggest drawback of this book is that it's not new. My thought after finishing the first book of the series was that if Lawrence had enough world-building ideas to fill three books to that same level of density, this had the potential of being one of my favorite fantasy series of all time. By the end of the second book, I concluded that this is not the case. Instead of showing us new twists and complications the way the first book did throughout, The Book That Broke the World mostly covers the same thematic ground from some new angles. It felt like Lawrence was worried the reader of the first book may not have understood the theme or the world-building, so he spent most of the second book nailing down anything that moved.

I found that frustrating. One of the best parts of The Book That Wouldn't Burn was that Lawrence trusted the reader to keep up, which for me hit the glorious but rare sweet spot of pacing where I was figuring out the world at roughly the same pace as the characters. It surprised me in some very enjoyable ways. The Book That Broke the World did not surprise me. There are a few new things, which I enjoyed, and a few elaborations and developments of ideas, which I mostly enjoyed, but I saw the big plot twist coming at least fifty pages before it happened and found the aftermath more annoying than revelatory. It doesn't help that the plot rests on character misunderstandings, one of my least favorite tropes.

One of the other disappointments of this book is that the characters stop using the Library as a library. The Library at the center of this series is a truly marvelous piece of world-building with numerous fascinating features that are unrelated to its contents, but Livira used it first and foremost as a repository of books. The first book was full of characters solving problems by finding a relevant book and reading it.

In The Book That Broke the World, sadly, this is mostly gone. The Library is mostly reduced to a complicated Big Dumb Object setting. It's still a delightful bit of world-building, and we learn about a few new features, but I only remember two places where the actual books are important to the story. Even the book referenced in the title is mostly important as an artifact with properties unrelated to the words that it contains or to the act of reading it. I think this is a huge lost opportunity and something I hope Lawrence fixes in the last book of the trilogy.

This book instead focuses on the politics around the existence of the Library itself. Here I'm cautiously optimistic, although a lot is going to depend on the third book. Lawrence has set up a three-sided argument between groups that I will uncharitably describe as the libertarian techbros, the "burn it all down" reactionaries, and the neoliberal centrist technocrats. All three of those positions suck, and Lawrence had better be setting the stage for Livira to find a different path. Her unwillingness to commit to any of those sides gives me hope, but bringing this plot to a satisfying conclusion is going to be tricky. I hope I like what Lawrence comes up with, but it feels far from certain.

It doesn't help that he's started delivering some points with a sledgehammer, and that's where we get to the unnecessary cannibalism. Thankfully this is a fairly small part of the tail end of the book, but it was an unpleasant surprise that I did not want in this novel and that I don't think made the story any better.

It's tempting to call the cannibalism gratuitous, but it does fit one of the main themes of this story, namely that humans are depressingly good at using any rule-based object in unexpected and nasty ways that are contrary to the best intentions of the designer. This is the fundamental challenge of the Library as a whole and the question that I suspect the third book will be devoted to addressing, so I understand why Lawrence wanted to emphasize his point. The reason why there is cannibalism here is directly related to a profound misunderstanding of the properties of the library, and I detected an echo of one of C.S. Lewis's arguments in The Last Battle about the nature of Hell.

The problem, though, is that this is Satanic baby-killerism, to borrow a term from Fred Clark. There are numerous ways to show this type of perversion of well-intended systems, which I know because Lawrence used other ones in the first book that were more subtle but equally effective. One of the best parts of The Book That Wouldn't Burn is that there were few real villains. The conflict was structural, all sides had valid perspectives, and the ethical points of that story were made with some care and nuance.

The problem with cannibalism as it's used here is not merely that it's gross and disgusting and off-putting to the reader, although it is all of those things. If I wanted to read horror, I would read horror novels. I don't appreciate surprise horror used for shock value in regular fantasy. But worse, it's an abandonment of moral nuance. The function of cannibalism in this story is like the function of Satanic baby-killers: it's to signal that these people are wholly and irredeemably evil. They are the Villains, they are Wrong, and they cease to be characters and become symbols of what the protagonists are fighting. This is destructive to the story because it's designed to provoke a visceral short-circuit in the reader and let the author get away with sloppy story-telling. If the author needs to use tactics like this to point out who is the villain, they have failed to set up their moral quandary properly.

The worst part is that this was entirely unnecessary because Lawrence's story-telling wasn't sloppy and he set up his moral quandary just fine. No one was confused about the ethical point here. I as the reader was following without difficulty, and had appreciated the subtlety with which Lawrence posed the question. But apparently he thought he was too subtle and decided to come back to the point with a pile-driver. I think that seriously injured the story. The ethical argument here is much more engaging and thought-provoking when it's more finely balanced.

That's a lot of complaints, mostly because this is a good book that I badly wanted to be a great book but which kept tripping over its own feet. A lot of trilogies have weak second books. Hopefully this is another example of the mid-story sag, and the finale will be worthy of the start of the story. But I have to admit the moral short-circuiting and the de-emphasis of the actual books in the library has me a bit nervous. I want a lot out of the third book, and I hope I'm not asking this author for too much.

If you liked the first book, I think you'll like this one too, with the caveat that it's quite a bit darker and more violent in places, even apart from the surprise cannibalism. But if you've not started this series, you may want to wait for the third book to see if Lawrence can pull off the ending.

Followed by The Book That Held Her Heart, currently scheduled for publication in April of 2025.

Rating: 7 out of 10

Cryptogram Python Developers Targeted with Malware During Fake Job Interviews

Interesting social engineering attack: luring potential job applicants with fake recruiting pitches, trying to convince them to download malware. From a news article:

These particular attacks from North Korean state-funded hacking team Lazarus Group are new, but the overall malware campaign against the Python development community has been running since at least August of 2023, when a number of popular open source Python tools were maliciously duplicated with added malware. Now, though, there are also attacks involving “coding tests” that only exist to get the end user to install hidden malware on their system (cleverly hidden with Base64 encoding) that allows remote execution once present. The capacity for exploitation at that point is pretty much unlimited, due to the flexibility of Python and how it interacts with the underlying OS.

Planet DebianDirk Eddelbuettel: nanotime 0.3.10 on CRAN: Update

A minor update 0.3.10 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release updates one S4 method to match very recent changes in r-devel, about which CRAN had reached out. It concerns the setdiff() method when applied to two nanotime objects. As the issue only affected R-devel (due to become R 4.5.0 next April) built within the last two or so weeks, it will not have been visible to many users, if any. In any event, setdiff() works again for that setup too, and should keep working going forward.

We also retired one demo function from the very early days; apparently it relied on ggplot2 features that have since moved on. If someone would like to help out and resurrect the demo, please get in touch. We also cleaned out some no-longer-used tests, and updated DESCRIPTION to what is required now. The NEWS snippet below has the full details.

Changes in version 0.3.10 (2024-09-16)

  • Retire several checks for Solaris in test suite (Dirk in #130)

  • Switch to Authors@R in DESCRIPTION as now required by CRAN

  • Accommodate R-devel change for setdiff (Dirk in #133 fixing #132)

  • No longer ship defunction ggplot2 demo (Dirk fixing #131)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Cryptogram Legacy Ivanti Cloud Service Appliance Being Exploited

CISA wants everyone—and government agencies in particular—to remove or upgrade an Ivanti Cloud Service Appliance (CSA) that is no longer being supported.

Welcome to the security nightmare that is the Internet of Things.

Worse Than FailureCodeSOD: A Clever Base

Mark worked with the kind of programmer who understood the nuances and flexibility of PHP on a level like none other. This programmer also wanted to use all of those features.

This resulted in the Base class, from which all other classes descend.

Mark, for example, was trying to understand how the status field on a Widget got set. So, he pulled up the Widget code:

class Widget extends Base {
    public function getOtherWidgets(){return $this->widgetPart->otherWidgets;}
	public function getStatus(){
		$this->otherWidgets;
    	if(isset($this->status))return $this->status;
    }
}

So, getStatus doesn't always return a value. That's fun, but I guess it only fails to return a value when $this->status doesn't have a value, so we can let that slide.

The line above that return is odd, though. $this->otherWidgets. That sure as heck looks like a property access, not a function call. What's going on there?

I'll let Mark explain:

if it can't find a property called "otherWidgets", it uses PHP's magic __get, __set, and __call to create a call to getOtherWidgets.

Which, as we can see, getOtherWidgets calls into a WidgetPart, which also does the same magic, and calls its own getOtherWidgets.

class WidgetPart extends Base {
	
	public function getOtherWidgets() {
		$part = $this->name;
		$widgets = array();
		$statuses = self::checkPartStatusForWidget($part,self::$widgetPartList);
		foreach($statuses[$widget] as $part=>$status){
			$widgets[] = Widget::find($part)->self($w)->inline($w->status = $status);
		}
		return $widgets;
	}
}

This starts out pretty normal. But this line has some oddness to it:

$widgets[] = Widget::find($part)->self($w)->inline($w->status = $status);

Okay, find makes sense; we're doing some sort of database lookup. What is self doing, though? Where did $w come from? What the heck is inline doing?

That's certainly what Mark wanted to know. But when Mark put in debugging code to try and interact with the $w variable, he got an undefined variable warning. It was time to look at the Base class.

class Base {
	
	// <snip>
	
	/**
    *  Attach variable name to current object
    */         
	public function self(&$variable) {
        $variable = $this;
        return $this;
    }
	
	/**
    * Allows you to preform a statement and maintain scope chain
    */
    ## Widget::find('myWidget')->self($widget)->inline(echo $widget->name)->part->...
	public function inline(){
        return $this;
    }
	
}

self accepts a variable by reference, and sets it equal to this, and then returns this.

inline doesn't do anything but return this.

Somehow, inline doesn't take parameters, but a statement in the parentheses gets evaluated. I can't accurately explain how this works. I can't even try getting these snippets to behave anything like this- clearly, there's more "magic" happening around the inline function to allow the inline execution of a statement, which Mark didn't provide.

Honestly, that's for the best- I'm not sure I want to see that. (Actually, I'd love to see that, but I'm a glutton for punishment)

But this is a whole lot of magic to allow us to play code golf. Without the magic, you could just… write a few lines.

$w = Widget::find($part);
$w->status = $status;

You don't need to do any of this. It certainly doesn't make the code cleaner or easier to understand. And I certainly can't explain what the code is doing, which is always a problem.

It's the worst kind of code: clever code. May the programming gods save us from clever programmers.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. ProGet costs less than half of Artifactory and is just as good. Our easy-to-read comparison page lays out the editions, features, and pricing of the different editions of ProGet and Artifactory.Learn More.

365 TomorrowsTrouble on Macho

Author: Julian Miles, Staff Writer Yet again, we’re a long way from home. As usual, I get everyone’s attention with a short blast of the klaxon, which – also as usual – prompts a round of rude guesswork over the comm as the likelihood of me ever having another sex partner. “You’re still not funny, […]

The post Trouble on Macho appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: The Wings Upon Her Back

Review: The Wings Upon Her Back, by Samantha Mills

Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394

The Wings Upon Her Back is a political steampunk science fantasy novel. If the author's name sounds familiar, it may be because Samantha Mills's short story "Rabbit Test" won Nebula, Locus, Hugo, and Sturgeon awards. This is her first novel.

Winged Zemolai is a soldier of the mecha god and the protege of Mecha Vodaya, the Voice. She has served the city-state of Radezhda by defending it against all enemies, foreign and domestic, for twenty-six years. Despite that, it takes only a moment of errant mercy for her entire life to come crashing down. On a whim, she spares a kitchen worker who was concealing a statue of the scholar god, meaning that he was only pretending to worship the worker god like all workers should. Vodaya is unforgiving and uncompromising, as is the sleeping mecha god. Zemolai's wings are ripped from her back and crushed in the hand of the god, and she's left on the ground to die of mechalin withdrawal.

The Wings Upon Her Back is told in two alternating timelines. The main one follows Zemolai after her exile as she is rescued by a young group of revolutionaries who think she may be useful in their plans. The other thread starts with Zemolai's childhood and shows the reader how she became Winged Zemolai: her scholar family, her obsession with flying, her true devotion to the mecha god, and the critical early years when she became Vodaya's protege. Mills maintains the separate timelines through the book and wraps them up in a rather neat piece of symbolic parallelism in the epilogue.

I picked up this book on a recommendation from C.L. Clark, and yes, indeed, I can see why she liked this book. It's a story about a political awakening, in which Zemolai slowly realizes that she has been manipulated and lied to and that she may, in fact, be one of the baddies. The Wings Upon Her Back is more personal than some other books with that theme, since Zemolai was specifically (and abusively) groomed for her role by Vodaya. Much of the book is Zemolai trying to pull out the hooks that Vodaya put in her or, in the flashback timeline, the reader watching Vodaya install those hooks.

The flashback timeline is difficult reading. I don't think Mills could have left it out, but she says in the afterword that it was the hardest part of the book to write and it was also the hardest part of the book to read. It fills in some interesting bits of world-building and backstory, and Mills does a great job pacing the story revelations so that both threads contribute equally, but mostly it's a story of manipulative abuse. We know from the main storyline that Vodaya's tactics work, which gives those scenes the feel of a slow-motion train wreck. You know what's going to happen, you know it will be bad, and yet you can't look away.

It occurred to me while reading this that Emily Tesh's Some Desperate Glory told a similar type of story without the flashback structure, which eliminates the stifling feeling of inevitability. I don't think that would have worked for this story. If you simply rearranged the chapters of The Wings Upon Her Back into a linear narrative, I would have bailed on the book. Watching Zemolai being manipulated would have been too depressing and awful for me to make it to the payoff without the forward-looking hope of the main timeline. It gave me new appreciation for the difficulty of what Tesh pulled off.

Mills uses this interwoven structure well, though. At about 90% through this book I had no idea how it could end in the space remaining, but it reaches a surprising and satisfying conclusion. Mills uses a type of ending that normally bothers me, but she does it by handling the psychological impact so well that I couldn't help but admire it. I'm avoiding specifics because I think it worked better when I wasn't expecting it, but it ties beautifully into the thematic point of the book.

I do have one structural objection, though. It's one of those problems I didn't notice while reading, but that started bothering me when I thought back through the story from a political lens. The Wings Upon Her Back is Zemolai's story, her redemption arc, and that means she drives the plot. The band of revolutionaries are great characters (particularly Galiana), but they're supporting characters. Zemolai is older, more experienced, and knows critical information they don't have, and she uses it to effectively take over. As setup for her character arc, I see why Mills did this. As political praxis, I have issues.

There is a tendency in politics to believe that political skill is portable and repurposable. Converting opposing operatives to the cause is welcomed not only because they indicate added support, but also because they can use their political skill to help you win instead. To an extent this is not wrong, and is probably the most true of combat skills (which Zemolai has in abundance). But there's an underlying assumption that politics is symmetric, and a critical reason why I hold many of the political positions that I do hold is that I don't think politics is symmetric.

If someone has been successfully stoking resentment and xenophobia in support of authoritarians, converts to an anti-authoritarian cause, and then produces propaganda stoking resentment and xenophobia against authoritarians, this is in some sense an improvement. But if one believes that resentment and xenophobia are inherently wrong, if one's politics are aimed at reducing the resentment and xenophobia in the world, then in a way this person has not truly converted. Worse, because this is an effective manipulation tactic, there is a strong tendency to put this type of political convert into a leadership position, where they will, intentionally or not, start turning the anti-authoritarian movement into a copy of the authoritarian movement they left. They haven't actually changed their politics because they haven't understood (or simply don't believe in) the fundamental asymmetry in the positions. It's the same criticism that I have of realpolitik: the ends do not justify the means because the means corrupt the ends.

Nothing that happens in this book is as egregious as my example, but the more I thought about the plot structure, the more it bothered me that Zemolai never listens to the revolutionaries she joins long enough to wrestle with why she became an agent of an authoritarian state and they didn't. They got something fundamentally right that she got wrong, and perhaps that should have been reflected in who got to make future decisions. Zemolai made very poor choices and yet continues to be the sole main character of the story, the one whose decisions and actions truly matter. Maybe being wrong about everything should be disqualifying for being the main character, at least for a while, even if you think you've understood why you were wrong.

That problem aside, I enjoyed this. Both timelines were compelling and quite difficult to put down, even when they got rather dark. I could have done with less body horror and a few fewer fight scenes, but I'm glad I read it.

Science fiction readers should be warned that the world-building, despite having an intricate and fascinating surface, is mostly vibes. I started the book wondering how people with giant metal wings on their back can literally fly, and thought the mentions of neural ports, high-tech materials, and immune-suppressing drugs might mean that we'd get some sort of explanation. We do not: heavier-than-air flight works because it looks really cool and serves some thematic purposes. There are enough hints of technology indistinguishable from magic that you could make up your own explanations if you wanted to, but that's not something this book is interested in. There's not a thing wrong with that, but don't get caught by surprise if you were in the mood for a neat scientific explanation of apparent magic.

Recommended if you like somewhat-harrowing character development with a heavy political lens and steampunk vibes, although it's not the sort of book that I'd press into the hands of everyone I know. The Wings Upon Her Back is a complete story in a single novel.

Content warning: the main character is a victim of physical and emotional abuse, so some of that is a lot. Also surgical gore, some torture, and genocide.

Rating: 7 out of 10

David BrinAre we making new kinds of 'ecosystems'? Are AIs top predators? And what about ART?

Let's take a step back and look at the context for life on EARTH™ and where Artificial Intelligences may fit in it all.

What will we see as we develop ever more complex societies of interacting cyber-entities?

(Some of the images that follow are from keynotes that I delivered to the Beneficial AGI Conference in Panama City (see pod-followup) and to the big RSA Conference in May.)


== The truly big picture on cyber-entities ==

For starters, for well-or-ill, we are creating a New Ecosystem on Earth, one that's equivalent to the current living system on our planet. 


The older one is a 4 billion year-old process that passes energy along a gradient. A slope of degrading free energy that starts with SUNLIGHT feeding PLANTS which are then consumed by HERBIVORES, which in turn give up distilled and concentrated resources to CARNIVORES, who in turn feed PARASITES. And all of the above feed the fungi and other thanatotrophs, when they die, restoring nutrients to the soil. 


All of that is familiar to you, of course. But what we often neglect to note is how this is not a closed system. All of that consuming generates entropy which cannot be allowed to accumulate! Fortunately, the biosphere is flushed free of most entropy, which escapes into space as thermal radiation, or infrared, allowing the planet to cool enough for fresh, high quality sunlight to do its work.



Okay, much or most, or all of that you already knew. So, what does that 4 billion year natural ecosystem have to do with AI?

I posit a new ecosystem: one that's based - instead of directly on sunlight - upon  ELECTRICITY - MEMORY SPACE - CLOCK CYCLES - DATA.   


We have already seen parts of the new ecosystem emulating the old one's key component - living organisms. For a decade there have been free-floating algorithms, wandering and replicating all over the Internet. (A topic for another time.) These appear to share many traits with the unicellular micro-organisms that filled Earth's seas during the first 3 billion years or so.


See how many ways the new ecosystem of voltages and bits parallels the old one, again relying on steep slopes of usable free-energy.




Note that this will include 'parasites'. It can be argued these are already preying on us via memes. And there's the same - and growing - problem of expelling entropy (waste heat) into space before we broil.

Speak up, in comments, if you see a flaw in this parallelism. Or if you are offended to see human game-players and tic-tockers portrayed as the equivalent of fungi and thanatotrophs, supping off the excretions of sophisticated artificial entities and contributing little, other than waste heat that must go somewhere!



== Following this reasoning... where do productive, sapient humans fit in all this? ==


Well, for one thing, we should recall that right now, organic human beings (orgs) still have huge power over this new ecosystem, controlling the creation and distribution of electricity flows, chips, and memory space... 


...though we appear to have less and less ability to control flows of the other basic food source... data.


If we were united and open, we as a civilization could use this control to guide outcomes. We could allocate these resources to cyber entities by choice and by regulation, doling them out by fiat, as the EU folks seem to want to do. 


But alas, that kind of top-controlled allocation of resources will work about as well as it did under 6000 years of feudalism... in other words, very poorly.


Or else, we could try the enlightenment approach!  The tools and tricks and abilities that we developed across the last two centuries, with gradually rising sophistication. Those methods start with making things relatively transparent and then using incentives to get cyber entities competing against one another, in ways that benefit us.


Those incentives could be rewarding pro-social AIs with clock cycles, energy or memory space... all of which such entities need, in order to reproduce!  And that is the grist of evolution. In other words, we control the sun. For a while. So let's choose to shine it upon those AIs who choose to side with us.


One thing we could do, just as we do with humans who seek our trust or who seek to do commerce with us: refuse to do business with those who don't show verifiable ID.


More on that later.



== So how will this affect art? ==


Okay, here's an interview I gave a reporter for Vanity Fair:


Do you believe AI may help to "democratize" certain artistic and creative endeavors, particularly those that have traditionally been available only to a handful of aspirants due either to prohibitive resource requirements and/or intentional gatekeeping (e.g. filmmaking, music production, animation)?


There will always be expert castes, whose abilities - in the arts or practical skills - allow them to rise in the esteem of neighbors and society. The path of merit and accomplishment was always one option, even when a vast majority of nations were dominated by 'noble' families and lines of inheritance brats. 


But which abilities? At any metro or subway stop there's often a musician busker playing an instrument for tossed change, with skill that would have garnered acclaim, back in eras when music was rare. Will the kids who now proclaim "I'll be a YouTuber or TikTok star!" achieve their dreams, when AI simulants can take hilarious pratfalls that no actual human could survive?


If so, do you believe AI's democratization of such artistic endeavors would result in a net positive or a net negative for how we consume and appreciate art?


Authors like me flatter ourselves that we can team up with software agents that will ease the laborious torment of writing while leaving us in charge of creating characters and deeply-moving prose.  But humans may be the ones demoted to mere helper status... unless we can make arrangements for what Reid Hoffman calls "augmented combinations" of organic and inorganic minds, greater than the sum of the parts.


Similarly, do you believe the democratization of art via AI would result in a net positive or a net negative for those who create art?


Many professions are 'middle class.' For example, a skilled engineer or teacher is unlikely to ever be very poor or very rich, but will in all probability have some kind of mid-level comfort and security. The arts are more like primitive feudal societies, a steep pyramid with a teensy elite, a few more who make a decent living... and masses who yearn for artistic recognition. Modern self-publishing tools have enabled far more aspiring writers to 'get published.' But thriving at it is still the same old mix of skill, hard work, contacts and luck (See my "Advice to Rising Writers")


So far, in many realms, from Radiology to chess, we see teams of human and AI doing better than either does, alone. Supposing that continues, an aspiring artist or creator will want to be very choosy which model of AI to partner with!  Be aware that the AIs may be picky, too!


Who stands to lose and who stands to gain as AI technologies are increasingly incorporated into creative industries such as the movie-making industry or the music industry?


For a decade I've predicted that the 'animated storyboard' will become an art form in its own right. Take an excellent script, a skilled photographer, a musician and charismatic voice actors, and you should be able to do a full-length, action- (or emotion-) packed feature or TV episode with all the right beats, even if the animated figures onscreen clearly 'aren't real.' Directors would view such a system as a director's tool. Producers would view such a system as a producer's tool. But the one who'll be truly empowered, I believe, would be the writer, whose script is being reified by the small team using the program. And will we need voice actors? That could very likely be a legal matter!  


Well, that's what I thought, and it's still a crucial idea. But then, will the AIs do the writing, too? Never!


Does the best art require a human touch, or could artificial intelligence theoretically create art that is just as entertaining and evocative as any that humans have made?


So far? Absolutely!  The so-called artificial 'intelligences' are (as-yet) nothing of the sort. They are very intensive, probabilistic-iterative auto-complete programs. There is no way there even can be anything sapient, under the hood. But they will seem so to millions, easily passing the old 'Turing Tests' and fooling us, especially when we aren't very wary.  


Eventually, there will be actual AI! The question is: will we even be able to tell the difference when it happens?  I talk about that in my WIRED article that breaks free of the three standard 'AI-formats' that can only lead to disaster, suggesting instead a 4th. That AI entities can only be held accountable if they have individuality... even 'soul'... 


Do you believe there is anything ethically profane about the replacement of artists such as actors, writers, directors, and editors with artificial intelligence, more so than the similar replacement of any non-creative job by AI (e.g. accounting, law work, data analysis)?


Our top responsibility is to the world and to our descendants. Frankly, I am unbothered by the prospect that some of those heirs will be made largely of metal and silicon, even breathing hard vacuum as they explore planets and stars on our... on my... behalf. What I do care about is doing my job - teaching them (and our regular/organic-style children, too) how to be decent people, with expansively curious and inclusive attitudes.


If that happens, then they will care about us old-style farts.... as we care for older generations who helped bring us to this era of marvels, when we had our own brief turn at creating wonders.  



== Will our descendants be decent folks? == 


If my heirs - organic and inorganic - are better and smarter than me, fine!  So long as they are decent folks, who enjoy beauty and fairness and puzzles to solve and diversity to appreciate... and an occasional corny joke.


In which case, they may use all those super brains to act in ways that make us proud. That is the one desirable 'soft landing,' as far as I'm concerned.


Cory DoctorowAnti-cheat, gamers, and the Crowdstrike disaster

A psychedelic, brightly colored castle wall with turrets. It floats on in an existential background of a glowing, neon green grid that meets a code waterfall as seen in the credit sequences of the Wachowskis' 'Matrix' films. The words GAME OVER are centered above the wall in the sky, in blocky, glowing, 8-bit type. The wall is shattered and peering out of it is a shadowy hacker in a hoodie. Next to the shattered wall is a red 'insert coin' slot from a vintage arcade game.

This week on my podcast, I read my latest Pluralistic.net column, “Anti-cheat, gamers, and the Crowdstrike disaster” about the way that gamers were sucked into the coalition to defend trusted computing, and how the Crowdstrike disaster has seen them ejected from the coalition by Microsoft:


As a class, gamers *hate* digital rights management (DRM), the anti-copying, anti-sharing code that stops gamers from playing older games, selling or giving away games, or just *playing* games:

https://www.reddit.com/r/truegaming/comments/1x7qhs/why_do_you_hate_drm/

Trusted computing promised to supercharge DRM and make it orders of magnitude harder to break – a promise it delivered on. That made gamers a weird partner for the pro-trusted computing coalition.

But coalitions are weird, and coalitions that bring together diverging (and opposing) constituencies are *very* powerful (if fractious), because one member can speak to lawmakers, companies, nonprofits and groups that would normally have nothing to do with another member.

Gamers may hate DRM, but they hate *cheating* even more. As a class, gamers have an all-consuming hatred of cheats that overrides all other considerations (which is weird, because the cheats are *used* by gamers!). One thing trusted computing is pretty good at is detecting cheating. Gamers – or, more often, game *servers* – can use remote attestation to force each player’s computer to cough up a true account of its configuration, including whether there are any cheats running on the computer that would give the player an edge. By design, owners of computers can’t override trusted computing modules, which means that even if you *want* to cheat, your computer will still rat you out.
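
To make that remote-attestation step concrete, here is a minimal sketch of what a game server conceptually does with an attestation report: check that the report is signed by a key it trusts, then compare the reported measurements against a known-good configuration before letting the player join. The field names and helpers are entirely illustrative assumptions, not the API of any real anti-cheat system or TPM library, and the HMAC check stands in for the asymmetric signature a real trusted-computing module would produce.

import hmac

# Hypothetical known-good measurements (hashes of firmware, OS and game binary).
EXPECTED_MEASUREMENTS = {
    "bootloader": "a3f1c0de",
    "kernel": "77c2beef",
    "game_client": "09becafe",
}

def signature_valid(report: dict, trusted_key: bytes) -> bool:
    # Stand-in for verifying the hardware-backed signature over the report;
    # a real module signs with an attestation key the owner cannot extract.
    expected = hmac.new(trusted_key, report["payload"], "sha256").hexdigest()
    return hmac.compare_digest(expected, report["signature"])

def admit_player(report: dict, trusted_key: bytes) -> bool:
    """Admit the player only if the client attests to an unmodified configuration."""
    if not signature_valid(report, trusted_key):
        return False  # report forged or tampered with
    measurements = report["measurements"]
    return all(
        measurements.get(name) == value
        for name, value in EXPECTED_MEASUREMENTS.items()
    )

The teeth of the design are in where the measurements come from: a module the computer's owner cannot override, so a client running a cheat cannot simply report the "clean" hashes.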


MP3

(Image: Bernt Rostad, Elliott Brown, CC BY 2.0)

,

Planet DebianDirk Eddelbuettel: RcppFastAD 0.0.3 on CRAN: Updated

A new release 0.0.3 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release switches compilation to the C++20 standard, as newer clang++ versions complained about a particular statement (which they took to be C++20) when compiling under C++17. So we obliged.
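
For readers new to the technique, here is a tiny, generic illustration of forward-mode automatic differentiation via dual numbers. It is written in Python purely to show the idea and does not reflect FastAD's actual C++ interface.

class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = self._lift(other)
        # product rule: (uv)' = u'v + u v'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def f(x):
    return x * x + x * 3 + 1        # f(x) = x^2 + 3x + 1

x = Dual(2.0, 1.0)                  # seed the derivative dx/dx = 1
y = f(x)
print(y.value, y.deriv)             # 11.0 and f'(2) = 7.0

Reverse mode, which FastAD also implements, instead records the computation and sweeps backwards through it, which pays off when a function has many inputs and few outputs, as with gradients for optimization.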

The NEWS file for this release follows.

Changes in version 0.0.3 (2024-09-15)

  • The package now compiles under the C++20 standard to avoid a warning under clang++-18 (Dirk addressing #9)

  • Minor updates to continuous integration and badges have been made as well

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRaju Devidas: Setting a local test deployment of moinmoin wiki

~$ mkdir moin-test

~$ cd moin-test

~/d/moin-test►python3 -m venv .                                00:04

~/d/moin-test►ls                                        2.119s 00:04
bin/  include/  lib/  lib64@  pyvenv.cfg

~/d/moin-test►source bin/activate.fish                         00:04


~/d/moin-test►pip install --pre moin                 moin-test 00:04
Collecting moin
  Using cached moin-2.0.0b1-py3-none-any.whl.metadata (4.7 kB)
Collecting Babel>=2.10.0 (from moin)
  Using cached babel-2.16.0-py3-none-any.whl.metadata (1.5 kB)
Collecting blinker>=1.6.2 (from moin)
  Using cached blinker-1.8.2-py3-none-any.whl.metadata (1.6 kB)
Collecting docutils>=0.18.1 (from moin)
  Using cached docutils-0.21.2-py3-none-any.whl.metadata (2.8 kB)
Collecting Markdown>=3.4.1 (from moin)
  Using cached Markdown-3.7-py3-none-any.whl.metadata (7.0 kB)
Collecting mdx-wikilink-plus>=1.4.1 (from moin)
  Using cached mdx_wikilink_plus-1.4.1-py3-none-any.whl.metadata (6.6 kB)
Collecting Flask>=3.0.0 (from moin)
  Using cached flask-3.0.3-py3-none-any.whl.metadata (3.2 kB)
Collecting Flask-Babel>=3.0.0 (from moin)
  Using cached flask_babel-4.0.0-py3-none-any.whl.metadata (1.9 kB)
Collecting Flask-Caching>=1.2.0 (from moin)
  Using cached Flask_Caching-2.3.0-py3-none-any.whl.metadata (2.2 kB)
Collecting Flask-Theme>=0.3.6 (from moin)
  Using cached flask_theme-0.3.6-py3-none-any.whl
Collecting emeraldtree>=0.10.0 (from moin)
  Using cached emeraldtree-0.11.0-py3-none-any.whl
Collecting feedgen>=0.9.0 (from moin)
  Using cached feedgen-1.0.0-py2.py3-none-any.whl
Collecting flatland>=0.8 (from moin)
  Using cached flatland-0.9.1-py3-none-any.whl
Collecting Jinja2>=3.1.0 (from moin)
  Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting markupsafe<=2.2.0 (from moin)
  Using cached MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting pygments>=1.4 (from moin)
  Using cached pygments-2.18.0-py3-none-any.whl.metadata (2.5 kB)
Collecting Werkzeug>=3.0.0 (from moin)
  Using cached werkzeug-3.0.4-py3-none-any.whl.metadata (3.7 kB)
Collecting whoosh>=2.7.0 (from moin)
  Using cached Whoosh-2.7.4-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting pdfminer.six (from moin)
  Using cached pdfminer.six-20240706-py3-none-any.whl.metadata (4.1 kB)
Collecting passlib>=1.6.0 (from moin)
  Using cached passlib-1.7.4-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting sqlalchemy>=2.0 (from moin)
  Using cached SQLAlchemy-2.0.34-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.6 kB)
Collecting XStatic>=0.0.2 (from moin)
  Using cached XStatic-1.0.3-py3-none-any.whl.metadata (1.4 kB)
Collecting XStatic-Bootstrap==3.1.1.2 (from moin)
  Using cached XStatic_Bootstrap-3.1.1.2-py3-none-any.whl
Collecting XStatic-Font-Awesome>=6.2.1.0 (from moin)
  Using cached XStatic_Font_Awesome-6.2.1.1-py3-none-any.whl.metadata (851 bytes)
Collecting XStatic-CKEditor>=3.6.1.2 (from moin)
  Using cached XStatic_CKEditor-3.6.4.0-py3-none-any.whl
Collecting XStatic-autosize (from moin)
  Using cached XStatic_autosize-1.17.2.1-py3-none-any.whl
Collecting XStatic-jQuery>=1.8.2 (from moin)
  Using cached XStatic_jQuery-3.5.1.1-py3-none-any.whl
Collecting XStatic-jQuery-File-Upload>=10.31.0 (from moin)
  Using cached XStatic_jQuery_File_Upload-10.31.0.1-py3-none-any.whl
Collecting XStatic-svg-edit-moin>=2012.11.15.1 (from moin)
  Using cached XStatic_svg_edit_moin-2012.11.27.1-py3-none-any.whl
Collecting XStatic-JQuery.TableSorter>=2.14.5.1 (from moin)
  Using cached XStatic_JQuery.TableSorter-2.14.5.2-py3-none-any.whl.metadata (846 bytes)
Collecting XStatic-Pygments>=1.6.0.1 (from moin)
  Using cached XStatic_Pygments-2.9.0.1-py3-none-any.whl
Collecting lxml (from feedgen>=0.9.0->moin)
  Using cached lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (3.8 kB)
Collecting python-dateutil (from feedgen>=0.9.0->moin)
  Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting itsdangerous>=2.1.2 (from Flask>=3.0.0->moin)
  Using cached itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
Collecting click>=8.1.3 (from Flask>=3.0.0->moin)
  Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting pytz>=2022.7 (from Flask-Babel>=3.0.0->moin)
  Using cached pytz-2024.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting cachelib<0.10.0,>=0.9.0 (from Flask-Caching>=1.2.0->moin)
  Using cached cachelib-0.9.0-py3-none-any.whl.metadata (1.9 kB)
Collecting typing-extensions>=4.6.0 (from sqlalchemy>=2.0->moin)
  Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting greenlet!=0.4.17 (from sqlalchemy>=2.0->moin)
  Using cached greenlet-3.1.0-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl.metadata (3.8 kB)
Collecting charset-normalizer>=2.0.0 (from pdfminer.six->moin)
  Using cached charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting cryptography>=36.0.0 (from pdfminer.six->moin)
  Using cached cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (5.4 kB)
Collecting cffi>=1.12 (from cryptography>=36.0.0->pdfminer.six->moin)
  Using cached cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting six>=1.5 (from python-dateutil->feedgen>=0.9.0->moin)
  Using cached six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Collecting pycparser (from cffi>=1.12->cryptography>=36.0.0->pdfminer.six->moin)
  Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Using cached moin-2.0.0b1-py3-none-any.whl (1.7 MB)
Using cached babel-2.16.0-py3-none-any.whl (9.6 MB)
Using cached blinker-1.8.2-py3-none-any.whl (9.5 kB)
Using cached docutils-0.21.2-py3-none-any.whl (587 kB)
Using cached flask-3.0.3-py3-none-any.whl (101 kB)
Using cached flask_babel-4.0.0-py3-none-any.whl (9.6 kB)
Using cached Flask_Caching-2.3.0-py3-none-any.whl (28 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached Markdown-3.7-py3-none-any.whl (106 kB)
Using cached MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
Using cached mdx_wikilink_plus-1.4.1-py3-none-any.whl (8.9 kB)
Using cached passlib-1.7.4-py2.py3-none-any.whl (525 kB)
Using cached pygments-2.18.0-py3-none-any.whl (1.2 MB)
Using cached SQLAlchemy-2.0.34-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB)
Using cached werkzeug-3.0.4-py3-none-any.whl (227 kB)
Using cached Whoosh-2.7.4-py2.py3-none-any.whl (468 kB)
Using cached XStatic-1.0.3-py3-none-any.whl (4.4 kB)
Using cached XStatic_Font_Awesome-6.2.1.1-py3-none-any.whl (6.5 MB)
Using cached XStatic_JQuery.TableSorter-2.14.5.2-py3-none-any.whl (20 kB)
Using cached pdfminer.six-20240706-py3-none-any.whl (5.6 MB)
Using cached cachelib-0.9.0-py3-none-any.whl (15 kB)
Using cached charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (141 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl (4.0 MB)
Using cached greenlet-3.1.0-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl (626 kB)
Using cached itsdangerous-2.2.0-py3-none-any.whl (16 kB)
Using cached pytz-2024.2-py2.py3-none-any.whl (508 kB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl (4.9 MB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (479 kB)
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: XStatic-svg-edit-moin, XStatic-Pygments, XStatic-JQuery.TableSorter, XStatic-jQuery-File-Upload, XStatic-jQuery, XStatic-Font-Awesome, XStatic-CKEditor, XStatic-Bootstrap, XStatic-autosize, XStatic, whoosh, pytz, passlib, typing-extensions, six, pygments, pycparser, markupsafe, Markdown, lxml, itsdangerous, greenlet, emeraldtree, docutils, click, charset-normalizer, cachelib, blinker, Babel, Werkzeug, sqlalchemy, python-dateutil, mdx-wikilink-plus, Jinja2, flatland, cffi, Flask, feedgen, cryptography, pdfminer.six, Flask-Theme, Flask-Caching, Flask-Babel, moin
Successfully installed Babel-2.16.0 Flask-3.0.3 Flask-Babel-4.0.0 Flask-Caching-2.3.0 Flask-Theme-0.3.6 Jinja2-3.1.4 Markdown-3.7 Werkzeug-3.0.4 XStatic-1.0.3 XStatic-Bootstrap-3.1.1.2 XStatic-CKEditor-3.6.4.0 XStatic-Font-Awesome-6.2.1.1 XStatic-JQuery.TableSorter-2.14.5.2 XStatic-Pygments-2.9.0.1 XStatic-autosize-1.17.2.1 XStatic-jQuery-3.5.1.1 XStatic-jQuery-File-Upload-10.31.0.1 XStatic-svg-edit-moin-2012.11.27.1 blinker-1.8.2 cachelib-0.9.0 cffi-1.17.1 charset-normalizer-3.3.2 click-8.1.7 cryptography-43.0.1 docutils-0.21.2 emeraldtree-0.11.0 feedgen-1.0.0 flatland-0.9.1 greenlet-3.1.0 itsdangerous-2.2.0 lxml-5.3.0 markupsafe-2.1.5 mdx-wikilink-plus-1.4.1 moin-2.0.0b1 passlib-1.7.4 pdfminer.six-20240706 pycparser-2.22 pygments-2.18.0 python-dateutil-2.9.0.post0 pytz-2024.2 six-1.16.0 sqlalchemy-2.0.34 typing-extensions-4.12.2 whoosh-2.7.4

~/d/moin-test[1]►pip install setuptools       moin-test 0.241s 00:06
Collecting setuptools
  Using cached setuptools-75.0.0-py3-none-any.whl.metadata (6.9 kB)
Using cached setuptools-75.0.0-py3-none-any.whl (1.2 MB)
Installing collected packages: setuptools
Successfully installed setuptools-75.0.0



~/d/moin-test►moin create-instance --full     moin-test 1.457s 00:06
2024-09-16 00:06:36,812 INFO moin.cli.maint.create_instance:76 Directory /home/raj/dev/moin-test already exists, using as wikiconfig dir.
2024-09-16 00:06:36,813 INFO moin.cli.maint.create_instance:93 Instance creation finished.
2024-09-16 00:06:37,303 INFO moin.cli.maint.create_instance:107 Build Instance started.
2024-09-16 00:06:37,304 INFO moin.cli.maint.index:51 Index creation started
2024-09-16 00:06:37,308 INFO moin.cli.maint.index:55 Index creation finished
2024-09-16 00:06:37,308 INFO moin.cli.maint.modify_item:166 Load help started
Item loaded: Home
Item loaded: docbook
Item loaded: mediawiki
Item loaded: OtherTextItems/Diff
Item loaded: WikiDict
Item loaded: moin
Item loaded: moin/subitem
Item loaded: html/SubItem
Item loaded: moin/HighlighterList
Item loaded: MoinWikiMacros/Icons
Item loaded: InclusionForMoinWikiMacros
Item loaded: TemplateSample
Item loaded: MoinWikiMacros
Item loaded: rst/subitem
Item loaded: OtherTextItems/IRC
Item loaded: rst
Item loaded: creole/subitem
Item loaded: Home/subitem
Item loaded: OtherTextItems/CSV
Item loaded: images
Item loaded: Sibling
Item loaded: html
Item loaded: markdown
Item loaded: creole
Item loaded: OtherTextItems
Item loaded: OtherTextItems/Python
Item loaded: docbook/SubItem
Item loaded: OtherTextItems/PlainText
Item loaded: MoinWikiMacros/MonthCalendar
Item loaded: markdown/Subitem
Success: help namespace help-en loaded successfully with 30 items
2024-09-16 00:06:46,258 INFO moin.cli.maint.modify_item:166 Load help started
Item loaded: video.mp4
Item loaded: archive.tar.gz
Item loaded: audio.mp3
Item loaded: archive.zip
Item loaded: logo.png
Item loaded: cat.jpg
Item loaded: logo.svg
Success: help namespace help-common loaded successfully with 7 items
2024-09-16 00:06:49,685 INFO moin.cli.maint.modify_item:338 Load welcome page started
2024-09-16 00:06:49,801 INFO moin.cli.maint.modify_item:347 Load welcome finished
2024-09-16 00:06:49,801 INFO moin.cli.maint.index:124 Index optimization started
2024-09-16 00:06:51,383 INFO moin.cli.maint.index:126 Index optimization finished
2024-09-16 00:06:51,383 INFO moin.cli.maint.create_instance:114 Full instance setup finished.
2024-09-16 00:06:51,383 INFO moin.cli.maint.create_instance:115 You can now use "moin run" to start the builtin server.



~/d/moin-test►ls                             moin-test 15.295s 00:06
bin/      intermap.txt  lib64@      wiki/        wikiconfig.py
include/  lib/          pyvenv.cfg  wiki_local/



~/d/moin-test►MOINCFG=wikiconfig.py                  moin-test 00:07
fish: Unsupported use of '='. In fish, please use 'set MOINCFG wikiconfig.py'.

~/d/moin-test[123]►set MOINCFG wikiconfig.py         moin-test 00:07


~/d/moin-test[123]►moin account-create --name test --email test@test.tld --password test123
Password not acceptable: For a password a minimum length of 8 characters is required.
2024-09-16 00:08:19,106 WARNING moin.utils.clock:53 These timers have not been stopped: total




~/d/moin-test►moin account-create --name test --email test@test.tld --password this-is-a-password
2024-09-16 00:08:43,798 INFO moin.cli.account.create:49 User c3608cafec184bd6a7a1d69d83109ad0 ['test'] test@test.tld - created.
2024-09-16 00:08:43,798 WARNING moin.utils.clock:53 These timers have not been stopped: total



~/d/moin-test►moin run --host 0.0.0.0 --port 5000 --no-debugger --no-reload
 * Debug mode: off
2024-09-16 00:09:26,146 INFO werkzeug:97 WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.1.2:5000
2024-09-16 00:09:26,146 INFO werkzeug:97 Press CTRL+C to quit

Cryptogram Australia Threatens to Force Companies to Break Encryption

In 2018, Australia passed the Assistance and Access Act, which—among other things—gave the government the power to force companies to break their own encryption.

The Assistance and Access Act includes key components that outline investigatory powers between government and industry. These components include:

  • Technical Assistance Requests (TARs): TARs are voluntary requests from law enforcement to telecom and technology companies for assistance in accessing encrypted data. Companies are not legally obligated to comply with a TAR, but law enforcement sends requests to solicit cooperation.
  • Technical Assistance Notices (TANs): TANs are compulsory notices (such as computer access warrants) that require companies to assist within their means with decrypting data or providing technical information that a law enforcement agency cannot access independently. Examples include certain source code, encryption, cryptography, and electronic hardware.
  • Technical Capability Notices (TCNs): TCNs are orders that require a company to build new capabilities that assist law enforcement agencies in accessing encrypted data. The Attorney-General must approve a TCN by confirming it is reasonable, proportionate, practical, and technically feasible.

It’s that final one that’s the real problem. The Australian government can force tech companies to build backdoors into their systems.

This is law, but near as anyone can tell the government has never used that third provision.

Now, the director of the Australian Security Intelligence Organisation (ASIO)—that’s basically their FBI or MI5—is threatening to do just that:

ASIO head, Mike Burgess, says he may soon use powers to compel tech companies to cooperate with warrants and unlock encrypted chats to aid in national security investigations.

[…]

But Mr Burgess says lawful access is all about targeted action against individuals under investigation.

“I understand there are people who really need it in some countries, but in this country, we’re subject to the rule of law, and if you’re doing nothing wrong, you’ve got privacy because no one’s looking at it,” Mr Burgess said.

“If there are suspicions, or we’ve got proof that we can justify you’re doing something wrong and you must be investigated, then actually we want lawful access to that data.”

Mr Burgess says tech companies could design apps in a way that allows law enforcement and security agencies access when they request it without compromising the integrity of encryption.

“I don’t accept that actually lawful access is a back door or systemic weakness, because that, in my mind, will be a bad design. I believe you can – these are clever people – design things that are secure, that give secure, lawful access,” he said.

We in the encryption space call that last one “nerd harder.” It, and the rest of his remarks, are the same tired talking points we’ve heard again and again.

It’s going to be an awfully big mess if Australia actually tries to make Apple, or Facebook’s WhatsApp, for that matter, break its own encryption for its “targeted actions” that put every other user at risk.

Planet DebianRussell Coker: Kogan AX1800 Wifi6 Mesh

I previously blogged about the difficulties in getting a good Wifi mesh network setup [1].

I bought the Kogan AX1800 Wifi6 Mesh with 3 nodes for $140, the price has now dropped to $130. It’s only Wifi 6 (not 6E which has the extra 6GHz frequency) because all the 6E ones were more expensive than I felt like paying.

I've got it running and it's working really well. One of my laptops has a damaged wire connecting to its Wifi device which decreased the signal to a degree that I could usually only connect to wifi when in the computer room (and then walk with it to another room once connected). Now I can connect that laptop to wifi in any part of my home. I can now get decent wifi access in my car in front of my home, which covers the important corner case of walking to my car and then immediately asking Google Maps for directions. Previously my phone would be deciding whether to switch away from wifi due to poor signal and that would delay getting directions; now I get directions quickly on Google Maps.

I’ve done tests with the Speedtest.net Android app and now get speeds of about 52Mbit/17Mbit in all parts of my home which is limited only by the speed of my NBN connection (one of the many reasons for hating conservatives is giving us expensive slow Internet). As my main reason for buying the devices is for Internet access they have clearly met my reason for purchase and probably meet the requirements for most people as well. Getting that speed is not trivial, my neighbours have lots of Wifi APs and bandwidth is congested. My Kogan 4K Android TV now plays 4K Netflix without pausing even though it only supports 2.4GHz wifi, so having a wifi mesh node next to the TV seems to help it.

I did some tests with the Olive Tree FTP server on a Galaxy Note 9 phone running the stock Samsung Android and got over 10MByte (80Mbit) upload and 8Mbyte (64Mbit) download speeds. This might be limited by the Android app or might be limited by the older version of Android. But it still gives higher speeds than my home Internet connection and much higher speeds than I need from an Android device.

Running iperf on Linux laptops talking to a Linux workstation that’s wired to the main mesh node I get speeds of 27.5Mbit from an old laptop on 2.4GHz wifi, 398Mbit from a new Wifi5 laptop when near the main mesh node, and 91Mbit from the same laptop when at the far end of my home. So not as fast as I’d like but still acceptable speeds.

The claims about Wifi 6 vs Wifi 5 speeds are that 6 will be about 3* faster. That would be 20% faster than the Gigabit ethernet ports on the wifi nodes. So while 2.5Gbit ethernet on Wifi 6 APs would be a good feature to have it seems that it might provide a 20% benefit at some future time when I have laptops with Wifi 6. At this time all the devices with 2.5Gbit ethernet cost more than I wanted to pay so I’m happy with this. It will probably be quite a while before laptops with Wifi 6 are in the price range I feel like paying.
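
One way to arrive at roughly that 20% figure from the iperf number above (a back-of-the-envelope estimate only, assuming the 3* claim applies to my measured Wifi 5 result):

wifi5_measured_mbit = 398                       # Wifi 5 laptop near the main node (iperf, above)
wifi6_estimate_mbit = 3 * wifi5_measured_mbit   # the "about 3* faster" claim
gige_mbit = 1000                                # Gigabit ethernet port on each node

print(wifi6_estimate_mbit)                      # 1194
print(wifi6_estimate_mbit / gige_mbit - 1)      # ~0.19, i.e. roughly 20% above Gigabit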

For Wifi 6E it seems that anything less than 2.5Gbit ethernet will be a significant bottleneck. But I expect that by the time I buy a Wifi 6E mesh they will all have 2.5Gbit ethernet as standard.

The configuration of this device was quite easy via the built in web pages, everything worked pretty much as I expected and I hardly had to look at the manual. The mesh nodes are supposed to connect to each other when you press hardware buttons but that didn’t work for me so I used the web admin page to tell them to connect which worked perfectly. The admin of this seemed to be about as good as it gets.

Conclusion

The performance of this mesh hardware is quite decent. I can’t know for sure if it’s good or bad because performance really depends on what interference there is. But using this means that for me the Internet connection is now the main bottleneck for all parts of my home and I think it’s quite likely that most people in Australia who buy it will find the same result.

So for everyone in Australia who doesn’t have fiber to their home this seems like an ideal set of mesh hardware. It’s cheap, easy to setup, has no cloud stuff to break your configuration, gives quite adequate speed, and generally just does the job.

365 TomorrowsThe Fourth Initiation

Author: David Dumouriez The fourth initiation, if you got that far, was where it started. Where you found out what you weren’t. The first was just a basic exercise in establishing the proper mindset. Donning the skins. Adopting that grinning mask. And, let’s face it, if you couldn’t do that, you had no right being […]

The post The Fourth Initiation appeared first on 365tomorrows.

,

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at eCrime 2024 in Boston, Massachusetts, USA. The event runs from September 24 through 26, 2024, and my keynote is at 8:45 AM ET on the 24th.
  • I’m briefly speaking at the EPIC Champion of Freedom Awards in Washington, DC on September 25, 2024.
  • I’m speaking at SOSS Fusion 2024 in Atlanta, Georgia, USA. The event will be held on October 22 and 23, 2024, and my talk is  at 9:15 AM ET on October 22, 2024.

The list is maintained on this page.

Planet DebianEvgeni Golov: Fixing the volume control in an Alesis M1Active 330 USB Speaker System

I've a set of Alesis M1Active 330 USB on my desk to listen to music. They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.

They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound. Well, almost any. If you'd move the volume knob long enough you might have found a position where the left speaker would work a bit, but it'd be quieter than the right one and stop working again after some time. Pretty unacceptable when you want to listen to music.

Given the right speaker was working just fine and the left would work a bit when the volume knob is moved, I was quite certain which part was to blame: the potentiometer.

So just open the right speaker (it contains all the logic boards, power supply, etc), take out the broken potentiometer, buy a new one, replace, done. Sounds easy?

Well, to open the speaker you gotta loosen 8 (!) screws on the back. At least it's not glued, right? Once the screws are removed you can pull out the back plate, which will bring the power supply, USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver, one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today", one with a 6 pin plug, one with a 5 pin one.

Unplug all of these! Yes, they are plugged, nice. Nope, still no friggin' idea how to get to the potentiometer. If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure. So we have to pull the thing from the front?

Okay, let's remove the plastic part of the knob. Right, this looks like a potentiometer. Unscrew it. No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).

right Alesis M1Active 330 USB speaker with a Makita wrench where the volume knob is

Still, no movement. Let's look again from the inside! Oh ffs, there are six more screws inside, holding the front. Away with them! Just need a very long PH1 screwdriver.

Now you can slowly remove the part of the front where the potentiometer is. Be careful, the top tweeter is mounted to the front, not the main case and so is the headphone jack, without an obvious way to detach it. But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.

right Alesis M1Active 330 USB speaker open

Great, this was the easy part!

The only thing printed on the potentiometer is "A10K". 10K is easy -- 10kOhm. A?! Wikipedia says "A" means "logarithmic", but only if made in the US or Asia. In Europe that'd be "linear". "B" in US/Asia means "linear", in Europe "logarithmic". Do I need to tap the sign again? (The sign is a print of XKCD#927.) My multimeter says in this case it's something like logarithmic. On the right channel anyway, the left one is more like a chopping board. And what's this green box at the end? Oh right, this thing also turns the power on and off. So it's a power switch.
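
For the curious, here is a rough sketch of why a multimeter reading at mid-rotation tells the two tapers apart. This is an idealized model only -- it assumes the common rule of thumb that an audio ("A") taper reads about 10% of its total resistance at half rotation, and real tapers vary by manufacturer:

def linear_taper(pos, total=10_000):
    # "B" taper: resistance proportional to rotation (pos is 0..1)
    return total * pos

def audio_taper(pos, total=10_000):
    # Idealized "A" taper: exponential curve reading ~1% at the start,
    # ~10% at mid-rotation and the full value at the end.
    return total * 0.01 ** (1 - pos)

for pos in (0.25, 0.5, 0.75):
    print(pos, round(linear_taper(pos)), round(audio_taper(pos)))
# 0.25:  2500 vs  ~316 ohm
# 0.50:  5000 vs ~1000 ohm
# 0.75:  7500 vs ~3162 ohm

So at the halfway point a healthy linear 10K pot should read around 5 kOhm while a log one reads closer to 1 kOhm -- an easy difference to spot with a multimeter, at least on a channel that isn't behaving like a chopping board.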

Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch? And then in the exact right size too?!

Of course not at any of the big German electronics pharmacies. But AliExpress saves the day, again. It's even the same color!

Soldering without pulling the cable out of the case was a bit challenging, but I've managed it and now have stereo sound again. Yay!

PS: Don't operate this thing open to try it out. 230V are dangerous!

Cryptogram My TedXBillings Talk

Over the summer, I gave a talk about AI and democracy at TedXBillings. The recording is live.

Please share. I’m hoping for more than 200 views….

365 TomorrowsPopsicle

Author: Kevin Eric Paul “Hey. Mister,” a melodious voice called to me. I kept my eyes closed for a moment and did not respond. Confusion. Anxiety. Dread. And a gentle, warm breeze. Bright light penetrating my eyelids. Where am I? I thought. What the devil is going on? “Mister. Hey.” I felt a soft hand […]

The post Popsicle appeared first on 365tomorrows.