Planet Russell


Planet Debian: Colin Watson: Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

Changes in dropbear 2025.87 broke OpenSSH’s regression tests. I cherry-picked the fix.

I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables.

Python team

Following up on last month, I fixed some more uscan errors:

  • python-ewokscore
  • python-ewoksdask
  • python-ewoksdata
  • python-ewoksorange
  • python-ewoksutils
  • python-processview
  • python-rsyncmanager

I upgraded these packages to new upstream versions:

  • bitstruct
  • django-modeltranslation (maintained by Freexian)
  • django-yarnpkg
  • flit
  • isort
  • jinja2 (fixing CVE-2025-27516)
  • mkdocstrings-python-legacy
  • mysql-connector-python (fixing CVE-2025-21548)
  • psycopg3
  • pydantic-extra-types
  • pydantic-settings
  • pytest-httpx (fixing a build failure with httpx 0.28)
  • python-argcomplete
  • python-cymem
  • python-djvulibre
  • python-ecdsa
  • python-expandvars
  • python-holidays
  • python-json-log-formatter
  • python-keycloak (fixing a build failure with httpx 0.28)
  • python-limits
  • python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
  • python-model-bakery
  • python-multidict
  • python-pip
  • python-rsyncmanager
  • python-service-identity
  • python-setproctitle
  • python-telethon
  • python-trio
  • python-typing-extensions
  • responses
  • setuptools-scm
  • trove-classifiers
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.19-1.

Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:

dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout. In most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions, which had previously been pulled in as a dependency of python3-setuptools. I fixed the bugs resulting from this:

We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint.

There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.

I fixed various other build/test failures:

I enabled more tests in python-moto and contributed a supporting fix upstream.

I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy.

I fixed various odds and ends of bugs:

I contributed a small documentation improvement to pybuild-autopkgtest(1).

Rust team

I upgraded rust-asn1 to 0.20.0.

Science team

I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.

I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.

I fixed python-vispy: missing dependency on numpy abi.

Other bits and pieces

I fixed debconf should automatically be noninteractive if input is /dev/null.

I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).

Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.

After regaining access to the repository, I fixed telegnome: missing app icon in ‘About’ dialogue and made a new 0.3.7 release.

Planet Debian: Guido Günther: Free Software Activities March 2025

Another short status update of what happened on my side last month. Some more ModemManager bits landed, Phosh 0.46 is out, haptic feedback is now easier to tune, plus some more. See below for details (no April 1st joke in there, I promise):

phosh

  • Fix swapped arguments in ABI check (MR)
  • Sync packaging with Debian so testing packages becomes easier (MR)
  • Fix crash when primary output goes away (MR)
  • More consistent button press feedback (MR)
  • Undraft the lockscreen wallpaper branch (MR) - another ~2y old MR out of the way.
  • Indicate ongoing WiFi scans (MR)
  • Limit ABI compliance check to public headers (MR)
  • Document most gsettings in a manpage (MR)
  • (Hopefully) make integration test more robust (MR)
  • Drop superfluous build invocation in CI by fixing the missing dep (MR)
  • Fix top-panel icon size (MR)
  • Release 0.46~rc1, 0.46.0
  • Simplify adding new symbols (MR)
  • Fix crash when taking screenshot on I/O starved system (MR)
  • Split media-player and mpris-manager (MR)
  • Handle Cell Broadcast notification categories (MR)

phoc

  • xwayland: Allow views to use opacity: (MR)
  • Track wlroots 0.19.x (MR)
  • Initial support for workspaces (MR)
  • Don't crash when gtk-layer-shell wants to reposition popups (MR)
  • Some cleanups split out of other MRs (MR)
  • Release 0.46~rc1, 0.46.0
  • Add meson dist job and work around meson not applying patches in meson dist (MR, MR)
  • Small fix to allow the Vulkan renderer to work (MR)
  • Fix possible crash when closing applications (MR)
  • Rename XdgSurface to XdgToplevel to prevent errors like the above (MR)

phosh-osk-stub

  • Make switching into (and out of) symbol2 level more pleasant (MR)
  • Simplify UI files as prep for the GTK4 switch (MR)
  • Release 0.46~rc1, 0.46.0

phosh-mobile-settings

  • Format meson files (MR)
  • Allow to set lockscreen wallpaper (MR)
  • Allow to set maximum haptic feedback (MR)
  • Release 0.46~rc1, 0.46.0
  • Avoid warnings when running CI/autopkgtest (MR)

phosh-tour

pfs

  • Add search when opening files (MR)
  • Show loading state when opening folders (MR)
  • Move demo to its own folder (MR)
  • Release 0.0.2

xdg-desktop-portal-gtk

  • Add some support for v2 of the notification portal (MR)
  • Make two functions static (MR)

xdg-desktop-portal-phosh

  • Add preview for lockscreen wallpapers (MR)
  • Update to newer pfs to support search (MR)
  • Release 0.46~rc1, 0.46.0
  • Add initial support for notification portal v2 (MR) thus finally allowing flatpaks to submit proper feedback.
  • Style consistency (MR, MR)
  • Add Cell Broadcast categories (MR)

meta-phosh

  • Small release helper tweaks (MR)

feedbackd

  • Allow for vibra patterns with different magnitudes (MR)
  • Allow to tweak maximum haptic feedback strength (MR)
  • Split out libfeedback.h and check more things in CI (MR)
  • Tweak haptic in default profile a bit (MR)
  • dev-vibra: Allow to use full magnitude range (MR)
  • vibra-periodic: Use [0.0, 1.0] as ranges for magnitude (MR)
  • Release 0.8.0, 0.8.1
  • Only cancel feedback if it was ever initialized (MR)

feedbackd-device-themes

  • Increase button feedback for sarge (MR)

gmobile

  • Release 0.2.2
  • Format and validate meson files (MR)

livi

  • Don't emit properties changed on position changes (MR)

Debian

  • libmbim: Update to 1.31.95 (MR)
  • libmbim: Upload to unstable and add autopkgtest (MR)
  • libqmi: Update to 1.35.95 (MR)
  • libqmi: Upload to unstable and add autopkgtest (MR)
  • modemmanager: Update to 1.23.95 in experimental and add autopkgtest (MR)
  • modemmanager: Upload to unstable (MR)
  • modemmanager: Add missing nodoc build deps (MR)
  • Package osmo-cbc (Repo)
  • feedbackd: Depend on adduser (MR)
  • feedbackd: Release 0.8.0, 0.8.1
  • feedbackd-device-themes: Release 0.8.0, 0.8.1
  • phosh: Release 0.46~rc1, 0.46.0
  • phoc: Release 0.46~rc1, 0.46.0
  • phosh-osk-stub: Release 0.46~rc1, 0.46.0
  • xdg-desktop-portal-phosh: Release 0.46~rc1, 0.46.0
  • phosh-mobile-settings: Release 0.46~rc1, 0.46.0, fix autopkgtest
  • phosh-tour: Release 0.46.0
  • gmobile: Release 0.2.2-1
  • gmobile: Ensure udev rules are applied on updates (MR)

git-buildpackage

  • Ease creating packages from scratch and document that better (MR, Testcase MR)

feedbackd-device-themes

  • Tweak some haptic feedback for oneplus,fajita (MR)
  • Drop superfluous periodic feedbacks and cleanup CI (MR)

wlroots

  • xwm: Allow to set opacity (MR)

ModemManager

  • Fix typos (MR)
  • Add support for setting channels via libmm-glib and mmcli (MR)

Tuba

  • Set input-hint for OSK word completion (MR)

xdg-spec

  • Propose _NET_WM_WINDOW_OPACITY (which has been around for ages) (MR)

gnome-calls

  • Help startup ordering (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh: Remove usage of phosh_{app_grid, overview}_handle_search (MR)
  • phosh: app-grid-button: Prepare for GTK 4 by using gestures and other migrations (MR) - merged
  • phosh: valign search results (MR) - merged
  • phosh: top-panel: Hide setting's details on fold (MR) - merged
  • phosh: Show frame with an animation (MR) - merged
  • phosh: Use gtk_widget_set_visible (MR) - merged
  • phosh: Thumbnail aspect ratio tweak (MR) - merged
  • phosh: Add clang/llvm ci step (MR)
  • mobile-broadband-provider-info: Bild APN (MR) - merged
  • iio-sensor-proxy: Buffer driver probing fix (MR) - merged
  • iio-sensor-proxy: Double free (MR) - merged
  • debian: Autopkgtests for ModemManager (MR)
  • debian: gitignore: phosh-pim debian build directory (MR)
  • debian: Better autopkgtests for MM (MR) - merged
  • feedbackd: tests: Depend on daemon for integration test (MR) - merged
  • libcmatrix: Various improvements (MR)
  • gmobile/hwdb: Add Sargo (MR) - merged
  • gmobile/hwdb: Add xiaomi-daisy (MR) - merged
  • gmobile/hwdb: Add SHIFT6mq (MR) - merged
  • meta-phosh: Add reproducibility check (MR) - merged
  • git-buildpackage: Dependency fixes (MR) - merged
  • git-buildpackage: Rename tracking (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

Worse Than Failure: CodeSOD: A Ruby Encrusted Footgun

Many years ago, JP joined a Ruby project. This was in the heyday of Ruby, when every startup on Earth was using it, and if you weren't building your app on Rails, were you even building an app?

Now, Ruby offers a lot of flexibility. One might argue that it offers too much flexibility, especially insofar as it permits "monkey patching": you can always add new methods to an existing class, if you want. Regardless of the technical details, JP and the team saw that massive flexibility and said, "Yes, we should use that. All of it!"

As these stories usually go, that was fine- for a while. Then one day, a test started failing because a class name wasn't defined. That was already odd, but what was even odder is that when they searched through the code, that class name wasn't actually used anywhere. So yes, there was definitely no class with that name, but also, there was no line of code that was trying to instantiate that class. So where was the problem?

def controller_class(name)
  "#{settings.app_name.camelize}::Controllers".constantize.const_get("#{name.to_s.camelize}")
end

def model_class(name)
  "#{settings.app_name.camelize}".constantize.const_get("#{name.to_s.camelize}")
end

def resource_class(name)
  "#{settings.app_name.camelize}Client".constantize.const_get("#{name.to_s.camelize}")
end

It happened because they were dynamically constructing the class names from a settings field. And not just in this handful of lines- this pattern occurred all over the codebase. There were other places where it referenced a different settings field, and they just hadn't encountered the bug yet, but knew that it was only a matter of time before changing a settings file was going to break more functionality in the application.

They wisely rewrote these sections to not reference the settings, and dubbed the pattern the "Caramelize Pattern". They added that to their coding standards as a thing to avoid, and learned a valuable lesson about how languages provide footguns.
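To make the pattern concrete, here is a minimal Python analogue (the original code base was Ruby, and every name below is invented for illustration): assembling the class name from a settings value means nothing in the source ever references the class directly, so typos and renames only surface at runtime, exactly as in the story above. The fix the team applied amounts to the explicit registry in the second half.

import importlib

def controller_class_dynamic(settings: dict, name: str):
    # Caramelize-style lookup: the class name is assembled from configuration,
    # so "find usages" finds nothing and a bad setting only fails when called.
    module = importlib.import_module(f"{settings['app_name']}.controllers")
    return getattr(module, f"{name.title()}Controller")

# The boring rewrite: reference the classes directly in an explicit registry,
# so a missing class is an import-time error instead of a runtime surprise.
class OrdersController: ...
class UsersController: ...

CONTROLLERS = {"orders": OrdersController, "users": UsersController}

def controller_class(name: str):
    return CONTROLLERS[name]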

Since today is April Fool's Day, consider the prank to be the fact that everyone learned their lesson and corrected their mistakes. I suppose that has to happen at least sometimes.


365 Tomorrows: Known

Author: Majoki What was I thinking? Tiasmet could not put the thought—the picture—out of her head. The chipmunk with its shark-blank eyes and its panicked keening as the tictocs methodically circled and closed on it. The chipmunk should have been able to easily dash away. It was ten times the size of a tic or […]

The post Known appeared first on 365tomorrows.

Planet Debian: Michael Ablassmeier: qmpbackup 0.46 - add image fleecing

I’ve released qmpbackup 0.46 which now utilizes the image fleecing technique for backup.

Usually, during backup, Qemu will use a so-called copy-before-write filter so that data for new guest writes is sent to the backup target first; the guest write blocks until this operation is finished.

If the backup target is flaky, or becomes unavailable during backup operation, this could lead to high I/O wait times or even complete VM lockups.

To fix this, a so-called “fleecing” image is introduced during backup and used as a temporary cache for write operations by the guest. This image can be placed on the same storage as the virtual machine disks, so it is independent of the backup target's performance.

The documentation on which steps are required to get this going using the Qemu QMP protocol is, let's say, lacking.

The following examples show the general functionality, but should be enhanced to use transactions where possible. All commands are in qmp-shell command format.

Let's start with a full backup:

# create a new bitmap
block-dirty-bitmap-add node=disk1 name=bitmap persistent=true
# add the fleece image to the virtual machine (same size as original disk required)
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
# add the backup target file to the virtual machine
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup.qcow2"}
# enable the copy-before-writer for the first disk attached, utilizing the fleece image
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie
# "blockdev-replace": make the copy-before-writer filter the major device
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter backing the copy-before-writer
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# create a full backup
blockdev-backup device=snapshot-backup-source target=backup-target-file sync=full job-id=test

[ wait until block job finishes]

# remove the snapshot access filter from the virtual machine
blockdev-del node-name=snapshot-backup-source
# switch back to the regular disk
qom-set path=/machine/unattached/device[20] property=drive value=node-disk1
# remove the copy-before-writer
blockdev-del node-name=cbw
# remove the backup-target-file
blockdev-del node-name=backup-target-file
# detach the fleecing image
blockdev-del node-name=fleecie
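The “wait until block job finishes” step above can also be driven programmatically. A minimal sketch, assuming the qemu.qmp Python package and a made-up QMP socket path, that simply polls query-block-jobs until the backup job is gone:

import asyncio
from qemu.qmp import QMPClient

async def wait_for_block_job(sock_path: str, job_id: str = "test") -> None:
    # Poll until the named block job no longer shows up in query-block-jobs.
    # With auto-dismiss at its default, the finished job simply disappears.
    qmp = QMPClient("qmpbackup-example")
    await qmp.connect(sock_path)
    try:
        while any(job["device"] == job_id
                  for job in await qmp.execute("query-block-jobs")):
            await asyncio.sleep(1)
    finally:
        await qmp.disconnect()

# asyncio.run(wait_for_block_job("/var/run/vm1.qmp"))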

After this process, the temporary fleecing image can be deleted/recreated. Now let's go for an incremental backup:

# add the fleecing and backup target image, like before
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup-incremental.qcow2"}
# add the copy-before-write filter, but utilize the bitmap created during full backup
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie bitmap={"node":"disk1","name":"bitmap"}
# switch device to the copy-before-write filter
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# merge the bitmap created during full backup to the snapshot-access device so
# the backup operation can access it. (you should use a transaction here)
block-dirty-bitmap-add node=snapshot-backup-source name=bitmap
block-dirty-bitmap-merge node=snapshot-backup-source target=bitmap bitmaps=[{"node":"disk1","name":"bitmap"}]
# create incremental backup (you should use a transaction here)
blockdev-backup device=snapshot-backup-source target=backup-target-file job-id=test sync=incremental bitmap=bitmap

 [ wait until backup has finished ]
 [ cleanup like before ]

# clear the dirty bitmap (you should use a transaction here)
block-dirty-bitmap-clear node=disk1 name=bitmap
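For completeness, here is a rough sketch of what the steps marked “you should use a transaction here” could look like when wrapped in a single QMP transaction command, so the bitmap handling and the backup job start atomically. This is only an illustration built from the node and bitmap names used above, not code from qmpbackup itself; the Python below merely assembles the JSON to send over the QMP socket:

import json

# Wrap bitmap add/merge and the incremental backup into one atomic transaction.
transaction = {
    "execute": "transaction",
    "arguments": {
        "actions": [
            {"type": "block-dirty-bitmap-add",
             "data": {"node": "snapshot-backup-source", "name": "bitmap"}},
            {"type": "block-dirty-bitmap-merge",
             "data": {"node": "snapshot-backup-source", "target": "bitmap",
                      "bitmaps": [{"node": "disk1", "name": "bitmap"}]}},
            {"type": "blockdev-backup",
             "data": {"device": "snapshot-backup-source",
                      "target": "backup-target-file",
                      "sync": "incremental", "bitmap": "bitmap",
                      "job-id": "test"}},
        ]
    },
}
print(json.dumps(transaction, indent=2))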


Planet Debian: Dirk Eddelbuettel: Rblpapi 0.3.16 on CRAN: Several Refinements


Version 0.3.16 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the sixteenth release since the package first appeared on CRAN in 2016. It contains several enhancements. Two contributed PRs improve an error message and extend the connection options. We cleaned up a bit of internal code. And this release also makes the build conditional on having a valid build environment. This has been driven by the fact that CRAN continues to build under macOS 13 for x86_64, but Bloomberg no longer supplies a library and headers. And our repeated requests to be able to opt out of the build were, well, roundly ignored. So now the builds will succeed, but on unviable platforms such as that one we will only offer ‘empty’ functions. But no more build ERRORS yelling at us for three configurations.

The detailed list of changes follow below.

Changes in Rblpapi version 0.3.16 (2025-03-31)

  • A quota error message is now improved (Rodolphe Duge in #400)

  • Convert remaining throw into Rcpp::stop (Dirk in #402 fixing #401)

  • Add optional appIdentityKey argument to blpConnect (Kai Lin in #404)

  • Rework build as function of Blp library availability (Dirk and John in #406, #409, #410 fixing #407, #408)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet Debian: Dirk Eddelbuettel: RProtoBuf 0.4.24 on CRAN: Minor Polish

A new maintenance release 0.4.24 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release brings both an upstream API update affecting one function and an update to our use of the C API of R, also in one function. Nothing user-facing, and no surprises expected.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.24 (2025-03-31)

  • Add bindings to EnumValueDescriptor::name (Mike Kruskal in #108)

  • Replace EXTPTR_PTR with R_ExternalPtrAddr (Dirk)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

LongNow: Why the Physics Underlying Life is Fundamental and Computation is Not

💡 JOIN US IN PERSON AND ONLINE for Sara Imari Walker's Long Now Talk, An Informational Theory of Life, on April 1, 02025 at 7 PM PT at the Cowell Theater in San Francisco.

Life is undeniably real. It defines the very boundary of our reality because it is what we are. Yet despite this fundamental presence, the nature of life has defied precise scientific explanation. While we recognize “life” colloquially and can characterize its more familiar biological forms, we struggle with frontier questions: how does life emerge from non-life? How can we engineer new forms of life? How might we recognize artificial or alien life? What are the sources of novelty and creativity that underlie biology and technology? 

These challenges mirror the limits of our ancestors’ understanding of gravity. They knew objects fell to Earth without understanding why. They observed just a few stars wandering across their night sky and lacked explanations for their motion relative to all the other stars, which remained fixed. It required technological advances — precise mechanical clocks that allowed Tycho Brahe to record planetary motions, Galileo Galilei’s concept of inertial mass, and Isaac Newton’s conception of universal laws — to develop our modern explanation of gravity. While we may be tempted to point to a particular generation that made the conceptual leaps necessary, this transformation took thousands of years of technological and intellectual development before eventually giving rise to theoretical physics as an explanatory framework. The development of physics was based on the premise that reality is comprehensible through abstract descriptions that unify our observations and allow us deeper explanations than our immediate sense perception might otherwise permit. 

Our ability to explain gravity fundamentally changed how we interact with our world. With laws of gravitation, we launch satellites, visit distant worlds, and better understand our place in the cosmos. So too might an explanatory framework for life transform our future.

We now sit at an interesting point in history: one in which it is perhaps evident that we have sufficient technology to understand “life,” and according to some we may even have examples of artificial life and intelligence, but we have not yet landed on the conceptual framing and theoretical abstractions that will allow us to see what this means as clearly as we now see gravity. That is, we lack a formal language to talk about life. 

Life versus Computation

“Life” has historically been difficult to formalize at this deep level of abstraction because of its complexity. Darwin and his contemporaries were successful in explaining some portion of life because their goal was not to inventory the full complexity of living forms, but merely to explain how it is that one form can change into another, and why this should lead to a diversity of forms, some of them more complex than others.  It was not until the advent of the theory of computation roughly 75 years later that it became possible to systematically formalize some notions of complexity (although earlier individual examples of the difficulty of a computation date much earlier). Some thought then, and still think now, that such formalization might be relevant to understanding life. In the historical progression of ideas, proceeding over many many generations, the theory of computation may prove an important step, but not the final or most important one.   

The theory of computation, and its derivative concepts of computational complexity, were not explicitly developed to solve the problem of life, nor were they even devised as a formal approach to life or to physical systems. It is important to maintain this distinction because many alive now confuse computation not only with physical reality, but also more specifically with life itself. In human histories, our best languages for describing the frontier of what we understand are often embedded in the technologies of our time; however, the truly fundamental breakthroughs are often those that allow us to see beyond the current technological horizon. 

The challenge with “computation” begins with the vast spaces we must consider. In chemical space — defined as the space of all possible molecules — there are an estimated 10^60 possible molecules composed of up to 30 atoms using only the elements carbon, oxygen, nitrogen, and sulfur. This is only a very small subset of all molecules we might imagine, and cheminformaticians who study chemical space have never been able to even estimate its full size. We cannot explore all possible states computationally. You may at first think this is solely a limitation of our computers, but in fact it is a limitation on reality itself. Given all available compute time and resources right now on planet Earth, it would not be possible to generate a foundation model for all possible molecules or their functional properties. But even more revealing about the physical barriers is how, if given all available time and resources in the entire universe, it would not be possible to construct every possible molecule either. And, because chemistry makes things like biological forms, which evolve into technological forms, the limitations at the base layer of chemistry indicate that our universe may be fundamentally unable to explore all possible options even in infinite time. The technical term for this is to say that our universe is non-ergodic: it cannot visit all possible states. But even this terminology is not right because it assumes that the state-space exists at all. If nothing inside the universe can generate the full space, in what sense can we say it exists?

A much more physical interpretation, and one that keeps all descriptions internal to the universe they describe, is to assume that things do not exist until the universe generates them. Discussing all possible molecules is just one example, but the idea extends to much more familiar things like technology: even with our most advanced generative models, we could never even imagine all possible technologies, so how could we possibly create them all? This feature of living in a universe that is self-constructing is one clue that reality cannot be computational. The fact that we can imagine possibilities that cannot exist all at once is more telling about us as constructive, creative systems within the universe than it is of a landscape of possibilities “out there” that are all equally real. 

This raises deep questions about computational approaches to life, which itself emerges from a backward view of the space of chemistry that the universe can explore; that is, only physical systems that have evolved to be like us can ask such questions about how they came to be. A challenge in the field of chemistry relevant to the issue of defining life is how one can identify molecules with function, that is, ones that have some useful role in sustaining the persistence of a living entity. This is a frontier research area in artificial intelligence-driven chemical design and drug discovery and in questions about biological and machine agency. But function is a post-selected concept. Post-selection is a concept from probability theory, where one conditions the probability space on the occurrence of a given event after the event occurs. “Function” is a concept that can only be defined relative to what already exists and is, therefore, historically contingent. 

A key challenge then emerges based on the limits of our models: we can only calculate the size of the space evolution selects functional structures within by imposing tight restrictions on the space of interest (post-selecting) so we can bound the size of the space to one we can compute. It may be that the only sense in which this counterfactual space is “real” is within the confines of our models of it. Chemical space cannot be computed, nor can the full space be experimentally explored, making probability assignments across all molecules not only impossible but unphysical; there will always be structure outside our models which could be a source for novelty. To stress the point here, I am not indicating this as a limitation on our models themselves, but on reality itself and, by extension, on what laws of physics could possibly explain how life emerges from such large combinatorial spaces. 

Analogies to the theory of computation do not fit, because computation is fundamentally the wrong paradigm for understanding life. But if we were to use such an analogy, it would be like predicting the output of programs that have not yet been run. We know from the very foundations of the theory of computation that this kind of forward-looking algorithm runs into epistemologically uncertain territory. A prime example is the halting problem, and related proofs that one cannot in general determine whether a given program will terminate and produce an output or run forever. One could make a machine that could describe this situation (what is called an oracle) and solve the halting problem in a specific case, but then the oracle itself would introduce new halting problems. I could assume infinity is real and there will always be a system that can describe another, but even this would run into new issues of uncomputability. New uncomputable things lurk no matter how you patch your system to account for other uncomputable things. Furthermore, infinity is a mathematical concept that itself may not correspond to a physical reality beyond the boundaries of the representational forms of the external world constructed within the physical architecture of human minds and human-derived technologies. 

Complexity, in a computational sense of the word, describes the length of the shortest computer program that produces a given output — and it is also generally uncomputable. More important for physics is that it is also not measurable. We might try to approximate complexity with something computable, but this will depend on our choice of language and, therefore, is an observer-dependent quantity and not a good candidate for physical law. If we assume there is a unique shortest program, we must assume infinity is real to do so, and we have again introduced something non-physical. I am advocating that we take the fact that we live in a finite universe with finite resources and finite time seriously, and construct our theories accordingly — particularly in accounting for historical contingency as a fundamental explanation. We need to take seriously our finite, self-constructing universe because this will allow us to embed ideas about life and open-ended creativity into physics, and in turn explain much more about the universe we actually live in. Among the most important aspects of physics is metrology — the science of measurement — because it allows standardization and empirical testing of theory.  It also allows us to define what we consider to be “laws of physics” — laws like those underlying gravitation, which we assume to be independent of the observer or measuring device. Every branch of physics developed to date rests on a foundation of abstract representations built from empirical measurement; it is this process that allows us to see beyond the confines of our own minds. 

For example, in the foundations of physics, we talk about how laws of physics are invariant to an observer’s frame of reference. Einstein’s work on relativity is exemplary in this regard: when experiments showed the speed of light yielded the same value regardless of the measuring instrument’s motion, Einstein equated the speed of light to a law of physics using the principle of invariance. This principle is important because if something is invariant, it does not depend on what the observer is doing; they will always measure it the same way. Einstein’s peers were not willing to take the measurement at face value. Many assumed the conception that the speed of light could change with the observer was correct, consistent with other sense perceptions of the world, and therefore that the measurements must be wrong. They assumed something must be missing from the physical measurements, like the presence of an ether (a substance hypothesized to fill space to explain the data). Indeed, they were missing something physical, but it was because they assumed their current abstractions were correct, and did not take the measurement seriously enough to change their ideas of what was physically real. The invariance of the speed of light had critically important consequences because following this idea to its logical conclusion (what Einstein did in developing special relativity) indicates that simultaneity (the measuring of events happening at the same “time”) and space are relative, and these insights have subsequently been confirmed by other experiments and observations. This example highlights two important features of physical laws: they are grounded in measurement (confirming they exist beyond how our minds label the world) and they are invariant with respect to measurement.

Assembly Theory and the Physics of Life

As an explanation for the physics underlying what we call “life,” my colleagues and I are developing a new approach called assembly theory. Assembly theory as a theory of physics is built on the idea that time is fundamental (you might call it causation) and as a consequence historical contingency is a real physical feature of our universe. The past is deterministic, but the future is undetermined until it happens simply because the universe has yet to construct itself into the future (and the possibility space is so big it cannot exist until it happens). This may seem a radical step, so how did we get here from thinking about life? 

We started with the question of how one might measure the emergence of complex molecules from unconstrained chemical systems. The question was easy to state: how complex does something need to be such that we might say only a living thing can produce it? We were interested in this because we work on the problem of understanding how life arises from non-life, and this requires some way of quantifying the transition from abiotic to living systems. This led to the development of a complexity measure, the assembly index, which my colleague Lee Cronin at the University of Glasgow originally developed from thought experiments on the measurement and physical structure of molecules.

a–c, Assembly theory (AT) is generalizable to different classes of objects, illustrated here for three different general types. a, Assembly pathway to construct diethyl phthalate molecule considering molecular bonds as the building blocks. The figure shows the pathway starting with the irreducible constructs to create the molecule with assembly index 8. b, Assembly pathway of a peptide chain by considering building blocks as strings. Left, four amino acids as building blocks. Middle, the actual object and its representation as a string. Right, assembly pathway to construct the string. c, Generalized assembly pathway of an object comprising discrete components.[1]

The idea is startlingly simple. The assembly index is formalized as the minimum number of steps to make an object, starting from elementary building blocks, and re-using already assembled parts. For molecules, these parts and operations are chemical bonds. This point on bonds is important: assembly theory uses as its natural language the physical constraints intrinsic to the objects it describes, which can be probed by another system, such as a measuring device. However, we also regard that any mathematical language we use to describe the physical world is not the physical world. What we are looking for is a language that at least allows us to capture the invariant properties of the objects under study, because we are after a law of physics that describes life. We consider the assembly index to represent the minimum causation required to form the object, and this is, in fact, independent of how we label the specific minimum steps. Instead, what it captures is that there is a minimum number of ordered structures necessary for the given structure to come to exist. What the assembly index captures is that causation is a real physical property, automatically implying there is an ordering to what can exist, and that objects are formed in a historically contingent path. This raises the possibility that we may be able to measure the physical complexity of a system, even if it is not possible to compute it. 
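As a toy illustration of the string version of this idea (the peptide example in the figure caption above), the following sketch computes the assembly index of a short string by exhaustive search: single characters are free building blocks, each join of two available pieces costs one step, and already-assembled fragments can be reused. This is only an illustrative brute force for tiny inputs, not the authors' implementation or an efficient algorithm:

def assembly_index(target: str) -> int:
    # Minimum number of joining operations needed to build `target` from its
    # single characters, reusing any fragment assembled along the way.
    # Exhaustive iterative-deepening search: only practical for short strings.
    if len(target) <= 1:
        return 0
    basics = frozenset(target)  # single characters are free building blocks

    def feasible(pool: frozenset, joins_left: int) -> bool:
        if target in pool:
            return True
        if joins_left == 0:
            return False
        pieces = pool | basics
        for a in pieces:
            for b in pieces:
                new = a + b
                # Only contiguous fragments of the target can ever contribute.
                if new not in pool and new in target:
                    if feasible(pool | {new}, joins_left - 1):
                        return True
        return False

    limit = 1
    while not feasible(frozenset(), limit):
        limit += 1
    return limit

print(assembly_index("banana"))  # 4: n+a, na+na, b+a, ba+nana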

Assembly theory’s two observables — assembly index and copy number — provide a generalized quantification of the selective causation necessary to produce an observed configuration of objects. Copy number is countable; it is how many of a given object you observe. Our conjecture is that there is a threshold for life, because objects with high assembly indices do not form in high (detectable) numbers of copies in the absence of life and selective processes. This has been confirmed by experimental tests of assembly theory for molecular life detection.  If we return to the idea of the vastness of chemical space, we can see why this idea is important. If the physics of our universe operated by exhaustive search, we would not exist because there are simply too many possible configurations of matter. What the physics of life indicates is the existence of historically contingent trajectories, where structures made in the past can be used to further elaborate into the space of more complex objects. Assembly theory suggests a phase transition between non-life (breadth-based search of physical structures) and life (depth-first search of physical structures), where the latter is possible because structures the universe has already generated can be used again. Underlying this is an absolute causal structure where complex objects reside, which we call the assembly space. If one assumes everything is possible, and the universe can really do it all, you will entirely miss the structure underlying life and what really gets to exist, and why. 

Determined Pasts, Non-Determinable Futures

An important distinction emerges from the physics of life: you cannot compute the future, but you can compute the past. Assembly theory works precisely because it starts from observed objects and allows reconstructing an invariant, minimum causal ordering for how hard it is for the universe to generate that object through its measurement. This allows us to talk about complexity related to life in an objective way that we expect — if the theory passes the trial and fire of scientific consensus — will play a role like other invariant quantities in physics. This fundamentally differs from computational approaches that depend on the “machine” (or observer), and it builds on the one unique thing theoretical physics has been able to offer the world: the ability to build abstractions that reach deeper than how our brains label data to describe the world. 

By taking measurement in science seriously and recognizing how our theories of physics are built from measurement, assembly theory offers a lens through which we might finally understand life as fundamental — not as a computation to be simulated but as a physical reality to be measured. In this view, life is not merely a special case of computation but something more fundamental: a physical reality that can be measured, quantified, and understood through invariant physical laws rather than observer-dependent computations. This leads to the startling realization that one of the most important features of life is that it produces a set of future states that are not computable, even in principle. This means a paradigm for accurately understanding intelligence, consciousness, and decision making is intrinsically missing in our current science that takes as its foundation the idea that everything can happen and everything can be modeled. This does not mean that life will never be understandable as a purely physical process; it simply points to the fact we are missing the required fundamental physics to be able to explain life in a universe that has a future horizon that is inherently undetermined.

The application of assembly theory in physics introduces contingency at a fundamental level, explaining how the past structures some of the future but not all of it. Life takes inert matter that is predictable and turns it into matter that is unpredictable because of the vast number of possibilities in the phenomenon of evolution, revealing selection as a kind of force that is responsible for the production of complexity in the universe. Life, not computation, unlocks our non-deterministic future. Only by looking beyond our current technological moment to the next technologies creating new life forms will we be able to understand what our future could hold. 

Acknowledgments

Many of the ideas discussed herein come from collaborative work with Leroy Cronin.

Notes

1. Figure and caption reproduced from Sharma, A., Czégel, D., Lachmann, M. et al. Assembly theory explains and quantifies selection and evolution. Nature 622, 321–328 (02023) under a CC BY 4.0 license. https://doi.org/10.1038/s41586-023-06600-9 .

Sara Imari Walker is the author of Life As No One Knows It: The Physics of Life’s Emergence (Riverhead Books, 02024) and will be speaking at Long Now on April 1, 02025.

Planet Debian: Russell Coker: Links March 2025

Anarcat’s review of Fish is interesting and shows some benefits I hadn’t previously realised, I’ll have to try it out [1].

Longnow has an insightful article about religion and magic mushrooms [2].

Brian Krebs wrote an informative article about DOGE and the many security problems that it has caused to the US government [3].

Techdirt has an insightful article about why they are forced to become a democracy blog after the attacks by Trump et al [4].

Antoine wrote an insightful blog post about the war for the Internet and how in many ways we are losing to fascists [5].

Interesting story about people working for free at Apple to develop a graphing calculator [6]. We need ways for FOSS people to associate to do such projects.

Interesting YouTube video about a wiki for building a cheap road legal car [7].

Interesting video about powering spacecraft with Plutonium-238 and how they are running out [8].

Interesting information about the search for MH370 [9]. I previously hadn't been convinced that it was hijacked but I am now.

The EFF has an interesting article about the Rayhunter, a tool to detect cellular spying that can run with cheap hardware [10].

  • [1] https://anarc.at/blog/2025-02-28-fish/
  • [2] https://longnow.org/ideas/is-god-a-mushroom/
  • [3] https://tinyurl.com/27wbb5ec
  • [4] https://tinyurl.com/2cvo42ro
  • [5] https://anarc.at/blog/2025-03-21-losing-war-internet/
  • [6] https://www.pacifict.com/story/
  • [7] https://www.youtube.com/watch?v=x8jdx-lf2Dw
  • [8] https://www.youtube.com/watch?v=geIhl_VE0IA
  • [9] https://www.youtube.com/watch?v=HIuXEU4H-XE
  • [10] https://tinyurl.com/28psvpx7
    Planet Debian: Simon Josefsson: On Binary Distribution Rebuilds

    I rebuilt (the top-50 popcon) Debian and Ubuntu packages, on amd64 and arm64, and compared the results a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive.

    One difference between these two approaches is the build inputs: the Reproduce Debian effort uses the same build inputs which were used to build the published packages. I’m using the latest version of published packages for the rebuild.

    What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However it means in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.Net approach.

    It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach.

    However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only raises the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, with a non-negligible number of them being illegal to distribute or unable to be built anymore due to bit-rot. We won’t solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code.

    I’ve made an illustration of the effort I’m thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept an Idempotent Rebuild, an old concept that I believe is the same as what John Gilmore described many years ago.

    The illustration shows how the Debian main archive is used as input to rebuild another “stage #0” archive. This stage #0 archive can be compared with diffoscope to the main archive, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the “stage #1” archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, where the stage #N archive was identical to the stage #N-1 archive. If this would happen, I label the output archive as an Idempotent Rebuild of the distribution.
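    To make the termination condition concrete, here is a small sketch of the staged loop in Python. It is purely illustrative: the rebuild callable stands in for whatever machinery actually rebuilds an archive from the previous stage (containers, sbuild, and so on), and the convergence test is simply “two consecutive stages hash identically”:

    import hashlib
    from pathlib import Path

    def archive_digest(archive: Path) -> str:
        # Order-independent digest over all .deb files in an archive tree.
        hashes = sorted(hashlib.sha256(p.read_bytes()).hexdigest()
                        for p in archive.rglob("*.deb"))
        return hashlib.sha256("".join(hashes).encode()).hexdigest()

    def idempotent_rebuild(stage0: Path, rebuild, max_stages: int = 10):
        # Rebuild stage N from stage N-1 until two consecutive stages are
        # bit-for-bit identical; returns N on convergence, None otherwise.
        prev, prev_digest = stage0, archive_digest(stage0)
        for n in range(1, max_stages + 1):
            cur = rebuild(prev)
            cur_digest = archive_digest(cur)
            if cur_digest == prev_digest:
                return n
            prev, prev_digest = cur, cur_digest
        return None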

    How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration. This will cause the process to never terminate. Embedded timestamps are something that the Reproduce.Debian.Net effort will also run into, and will have to resolve.

    What other causes for differences could there be? It is easy to see that generally if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well.

    Could there be higher order chains that lead to infinite N? It is easy to imagine the existence of these, but I don’t know how they would look like in practice.

    An ideal would be if we could get down to N=1. Is that technically possible? Compare with building GCC: it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again to stage 2. Stages 1 and 2 are compared, and on success (identical binaries), the compilation succeeds. Here N=2. But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions. So it seems N=1 could be possible.

    I’m unhappy to not be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn’t save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we now compare the rebuilds with an earlier rebuild, using the same build inputs. I’m eager to see this materialize, and hope to eventually make progress on this. However to build stage #1 I believe I need to rebuild a much larger number of packages in stage #0; it could be roughly similar to the “build-essentials-depends” package set.

    I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on achieving the 100% Idempotent Rebuild of Debian, we can set up a Guix environment that builds Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening. This approach to re-bootstrap a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution.

    What do you think?

    PS. I fear that Debian main may have already gone into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability for Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.

    Worse Than Failure: CodeSOD: Nobody's BFF

    Legacy systems are hard to change, and even harder to eliminate. You can't simply do nothing though; as technology and user expectations change, you need to find ways to modernize and adapt the legacy system.

    That's what happened to Alicia's team. They had a gigantic, spaghetti-coded, monolithic application that was well past drinking age and had a front-end to match. Someone decided that they couldn't touch the complex business logic, but what they could do was replace the frontend code by creating an adapter service; the front end would call into this adapter, and the adapter would execute the appropriate methods in the backend.

    Some clever coder named this "Backend for Frontend" or "BFF".

    It was not anyone's BFF. For starters, this system didn't actually allow you to just connect a UI to the backend. No, that'd be too easy. This system was actually a UI generator.

    The way this works is that you feed it a schema file, written in JSON. This file specifies what input elements you want, some hints for layout, what validation you want the UI to perform, and even what CSS classes you want. Then you compile this as part of a gigantic .NET application, and deploy it, and then you can see your new UI.

    No one likes using it. No one is happy that it exists. Everyone wishes that they could just write frontends like normal people, and not use this awkward schema language.

    All that is to say, when Alicia's co-worker stood up shortly before lunch and said, "I'm taking off the rest of the day, BFF has broken me," it wasn't particularly shocking to hear- or even the first time that'd happened.

    Alicia, not heeding the warning inherent in that statement, immediately tracked down that dev's last work, and tried to understand what had been so painful.

        "minValue": 1900,
        "maxValue": 99,
    

    This, of course, had to be a bug. Didn't it? How could the maxValue be lower than the minValue?

    Let's look at the surrounding context.

    {
        "type": "eventValueBetweenValuesValidator",
        "eventType": "CalendarYear",
        "minValue": 1900,
        "maxValue": 99,
        "isCalendarBasedMaxValue": true,
        "message": "CalendarYear must be between {% raw %}{{minValue}}{% endraw %} and {% raw %}{{maxValue}}{% endraw %}."
    }
    

    I think this should make it perfectly clear what's happening. Oh, it doesn't? Look at the isCalendarBasedMaxValue field. It's true. There, that should explain everything. No, it doesn't? You're just more confused?

    The isCalendarBasedMaxValue says that the maxValue field should not be treated as a literal value, but instead, is the number of years in the future relative to the current year which are considered valid. This schema definition says "accept all years between 1900 and 2124 (at the time of this writing)." Next year, that top value goes up to 2125. Then 2126. And so on.
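    In other words, the effective range has to be resolved at validation time. A rough sketch of what that resolution presumably looks like (the field names mirror the schema above; the logic is inferred from the behaviour described, not taken from the BFF code):

    from datetime import date

    def effective_range(rule: dict) -> tuple[int, int]:
        # When isCalendarBasedMaxValue is set, maxValue is an offset in years
        # from the current year rather than a literal upper bound.
        max_value = rule["maxValue"]
        if rule.get("isCalendarBasedMaxValue"):
            max_value = date.today().year + max_value
        return rule["minValue"], max_value

    rule = {"minValue": 1900, "maxValue": 99, "isCalendarBasedMaxValue": True}
    print(effective_range(rule))  # (1900, 2124) as of 2025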

    As features go, it's not a terrible feature. But the implementation of the feature is incredibly counter-intuitive. At the end of the day, this is just bad naming: (ab)using min/max to do something that isn't really a min/max validation is the big issue here.

    Alicia writes:

    I couldn't come up with something more counterintuitive if I tried.

    Oh, don't sell yourself short, Alicia. I'm sure you could write something far, far worse if you tried. The key thing here is that clearly, nobody tried- they just sorta let things happen and definitely didn't think too hard about it.


    Planet Debian: Russ Allbery: Review: Ghostdrift

    Review: Ghostdrift, by Suzanne Palmer

    Series: Finder Chronicles #4
    Publisher: DAW
    Copyright: May 2024
    ISBN: 0-7564-1888-7
    Format: Kindle
    Pages: 378

    Ghostdrift is a science fiction adventure and the fourth (and possibly final) book of the Finder Chronicles. You should definitely read this series in order and not start here, even though the plot of this book would stand alone.

    Following The Scavenger Door, in which he made enemies even more dramatically than he had in the previous books, Fergus Ferguson has retired to the beach on Coralla to become a tea master and take care of his cat. It's a relaxing, idyllic life and a much-needed total reset. Also, he's bored. The arrival of his alien friend Qai, in some kind of trouble and searching for him, is a complex balance between relief and disappointment.

    Bas Belos is one of the most notorious pirates of the Barrens. He has someone he wants Fergus to find: his twin sister, who disappeared ten years ago. Fergus has an unmatched reputation for finding things, so Belos kidnapped Qai's partner to coerce her into finding Fergus. It's not an auspicious beginning to a relationship, and Qai was ready to fight once they got her partner back, but Belos makes Fergus an offer of payment that, startlingly, is enough for him to take the job mostly voluntarily.

    Ghostdrift feels a bit like a return to Finder. Fergus is once again alone among strangers, on an assignment that he's mostly not discussing with others, piecing together clues and navigating tricky social dynamics. I missed his friends, particularly Ignatio, and while there are a few moments with AI ships, they play less of a role.

    But Fergus is so very good at what he does, and Palmer is so very good at writing it. This continues to be competence porn at its best. Belos's crew thinks Fergus is a pirate recruited from a prison colony, and he quietly sets out to win their trust with a careful balance of self-deprecation and unflappable skill, helped considerably by the hidden gift he acquired in Finder. The character development is subtle, but this feels like a Fergus who understands friendship and other people at a deeper and more satisfying level than the Fergus we first met three books ago.

    Palmer has a real talent for supporting characters and Ghostdrift is no exception. Belos's crew are criminals and murderers, and Palmer does remind the reader of that occasionally, but they're also humans with complex goals and relationships. Belos has earned their loyalty by being loyal and competent in a rough world where those attributes are rare. The morality of this story reminds me of infiltrating a gang: the existence of the gang is not a good thing, and the things they do are often indefensible, but they are an understandable reaction to a corrupt social system. The cops (in this case, the Alliance) are nearly as bad, as we've learned over the past couple of books, and considerably more insufferable. Fergus balances the ethical complexity in a way that I found satisfyingly nuanced, while quietly insisting on his own moral lines.

    There is a deep science fiction plot here, possibly the most complex of the series so far. The disappearance of Belos's sister is the tip of an iceberg that leads to novel astrophysics, dangerous aliens, mysterious ruins, and an extended period on a remote and wreck-strewn planet. I groaned a bit when the characters ended up on the planet, since treks across primitive alien terrain with jury-rigged technology are one of my least favorite science fiction tropes, but I need not have worried. Palmer knows what she's doing; the pace of the plot does slow a bit at first, but it quickly picks up again, adding enough new setting and plot complications that I never had a chance to be bored by alien plants. It helps that we get another batch of excellent supporting characters for Fergus to observe and win over.

    This series is such great science fiction. Each book becomes my new favorite, and Ghostdrift is no exception. The skeleton of its plot is a satisfying science fiction mystery with multiple competing factions, hints of fascinating galactic politics, complicated technological puzzles, and a sense of wonder that reminds me of reading Larry Niven's Known Space series. But the characters are so much better and more memorable than classic SF; compared to Fergus, Niven's Louis Wu barely exists and is readily forgotten as soon as the story is over. Fergus starts as a quiet problem-solver, but so much character depth unfolds over the course of this series. The ending of this book was delightfully consistent with everything we've learned about Fergus, but also the sort of ending that it's hard to imagine the Fergus from Finder knowing how to want.

    Ghostdrift, like each of the books in this series, reaches a satisfying stand-alone conclusion, but there is no reason within the story for this to be the last of the series. The author's acknowledgments, however, say that this is the end. I admit to being disappointed, since I want to read more about Fergus and there are numerous loose ends that could be explored. More importantly, though, I hope Palmer will write more novels in any universe of her choosing so that I can buy and read them.

    This is fantastic stuff. This review comes too late for the Hugo nominating deadline, but I hope Palmer gets a Best Series nomination for the Finder Chronicles as a whole. She deserves it.

    Rating: 9 out of 10

    xkcd: Orogeny

    Krebs on Security: How Each Pillar of the 1st Amendment is Under Attack

    “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” -U.S. Constitution, First Amendment.

    Image: Shutterstock, zimmytws.

    In an address to Congress this month, President Trump claimed he had “brought free speech back to America.” But barely two months into his second term, the president has waged an unprecedented attack on the First Amendment rights of journalists, students, universities, government workers, lawyers and judges.

    This story explores a slew of recent actions by the Trump administration that threaten to undermine all five pillars of the First Amendment to the U.S. Constitution, which guarantees freedoms concerning speech, religion, the media, the right to assembly, and the right to petition the government and seek redress for wrongs.

    THE RIGHT TO PETITION

    The right to petition allows citizens to communicate with the government, whether to complain, request action, or share viewpoints — without fear of reprisal. But that right is being assaulted by this administration on multiple levels. For starters, many GOP lawmakers are now heeding their leadership’s advice to stay away from local town hall meetings and avoid the wrath of constituents affected by the administration’s many federal budget and workforce cuts.

    Another example: President Trump recently fired most of the people involved in processing Freedom of Information Act (FOIA) requests for government agencies. FOIA is an indispensable tool used by journalists and the public to request government records, and to hold leaders accountable.

    The biggest story by far this week was the bombshell from The Atlantic editor Jeffrey Goldberg, who recounted how he was inadvertently added to a Signal group chat with National Security Advisor Michael Waltz and 16 other Trump administration officials discussing plans for an upcoming attack on Yemen.

    One overlooked aspect of Goldberg’s incredible account is that by planning and coordinating the attack on Signal — which features messages that can auto-delete after a short time — administration officials were evidently seeking a way to avoid creating a lasting (and potentially FOIA-able) record of their deliberations.

    “Intentional or not, use of Signal in this context was an act of erasure—because without Jeffrey Goldberg being accidentally added to the list, the general public would never have any record of these communications or any way to know they even occurred,” Tony Bradley wrote this week at Forbes.

    Petitioning the government, particularly when it ignores your requests, often requires challenging federal agencies in court. But that becomes far more difficult if the most competent law firms start to shy away from cases that may involve crossing the president and his administration.

    On March 22, the president issued a memorandum that directs heads of the Justice and Homeland Security Departments to “seek sanctions against attorneys and law firms who engage in frivolous, unreasonable and vexatious litigation against the United States,” or in matters that come before federal agencies.

    The POTUS recently issued several executive orders railing against specific law firms with attorneys who worked on legal cases against him. On Friday, the president announced that the law firm of Skadden, Arps, Slate, Meagher & Flom had agreed to provide $100 million in pro bono work on issues that he supports.

    Trump issued another order naming the firm Paul, Weiss, Rifkind, Wharton & Garrison, which ultimately agreed to pledge $40 million in pro bono legal services to the president’s causes.

    Other Trump executive orders targeted law firms Jenner & Block and WilmerHale, both of which have attorneys that worked with special counsel Robert Mueller on the investigation into Russian interference in the 2016 election. But this week, two federal judges in separate rulings froze parts of those orders.

    “There is no doubt this retaliatory action chills speech and legal advocacy, and that is qualified as a constitutional harm,” wrote Judge Richard Leon, who ruled against the executive order targeting WilmerHale.

    President Trump recently took the extraordinary step of calling for the impeachment of federal judges who rule against the administration. Trump called U.S. District Judge James Boasberg a “Radical Left Lunatic” and urged he be removed from office for blocking deportation of Venezuelan alleged gang members under a rarely invoked wartime legal authority.

    In a rare public rebuke to a sitting president, U.S. Supreme Court Chief Justice John Roberts issued a statement on March 18 pointing out that “For more than two centuries, it has been established that impeachment is not an appropriate response to disagreement concerning a judicial decision.”

    The U.S. Constitution provides that judges can be removed from office only through impeachment by the House of Representatives and conviction by the Senate. The Constitution also states that judges’ salaries cannot be reduced while they are in office.

    Undeterred, House Speaker Mike Johnson this week suggested the administration could still use the power of its purse to keep courts in line, and even floated the idea of wholesale eliminating federal courts.

    “We do have authority over the federal courts as you know,” Johnson said. “We can eliminate an entire district court. We have power of funding over the courts, and all these other things. But desperate times call for desperate measures, and Congress is going to act, so stay tuned for that.”

    FREEDOM OF ASSEMBLY

    President Trump has taken a number of actions to discourage lawful demonstrations at universities and colleges across the country, threatening to cut federal funding for any college that supports protests he deems “illegal.”

    A Trump executive order in January outlined a broad federal crackdown on what he called “the explosion of antisemitism” on U.S. college campuses. This administration has asserted that foreign students who are lawfully in the United States on visas do not enjoy the same free speech or due process rights as citizens.

    Reuters reports that the acting civil rights director at the Department of Education (DOE) on March 10 sent letters to 60 educational institutions warning they could lose federal funding if they don’t do more to combat anti-semitism. On March 20, Trump issued an order calling for the closure of the DOE.

    Meanwhile, U.S. Immigration and Customs Enforcement (ICE) agents have been detaining and trying to deport pro-Palestinian students who are legally in the United States. The administration is targeting students and academics who spoke out against Israel’s attacks on Gaza, or who were active in campus protests against U.S. support for the attacks. Secretary of State Marco Rubio told reporters Thursday that at least 300 foreign students have seen their visas revoked under President Trump, a far higher number than was previously known.

    In his first term, Trump threatened to use the national guard or the U.S. military to deal with protesters, and in campaigning for re-election he promised to revisit the idea.

    “I think the bigger problem is the enemy from within,” Trump told Fox News in October 2024. “We have some very bad people. We have some sick people, radical left lunatics. And I think they’re the big — and it should be very easily handled by, if necessary, by National Guard, or if really necessary, by the military, because they can’t let that happen.”

    This term, Trump acted swiftly to remove the top judicial advocates in the armed forces who would almost certainly push back on any request by the president to use U.S. soldiers in an effort to quell public protests, or to arrest and detain immigrants. In late February, the president and Defense Secretary Pete Hegseth fired the top legal officers for the military services — those responsible for ensuring the Uniform Code of Military Justice is followed by commanders.

    Military.com warns that the purge “sets an alarming precedent for a crucial job in the military, as President Donald Trump has mused about using the military in unorthodox and potentially illegal ways.” Hegseth told reporters the removals were necessary because he didn’t want them to pose any “roadblocks to orders that are given by a commander in chief.”

    FREEDOM OF THE PRESS

    President Trump has sued a number of U.S. news outlets, including 60 Minutes, CNN, The Washington Post, The New York Times and other smaller media organizations for unflattering coverage.

    In a $10 billion lawsuit against 60 Minutes and its parent Paramount, Trump claims they selectively edited an interview with former Vice President Kamala Harris prior to the 2024 election. The TV news show last month published transcripts of the interview at the heart of the dispute, but Paramount is reportedly considering a settlement to avoid potentially damaging its chances of winning the administration’s approval for a pending multibillion-dollar merger.

    The president sued The Des Moines Register and its parent company, Gannett, for publishing a poll showing Trump trailing Harris in the 2024 presidential election in Iowa (a state that went for Trump). The POTUS also is suing the Pulitzer Prize board over 2018 awards given to The New York Times and The Washington Post for their coverage of purported Russian interference in the 2016 election.

    Whether or not any of the president’s lawsuits against news organizations have merit or succeed is almost beside the point. The strategy behind suing the media is to make reporters and newsrooms think twice about criticizing or challenging the president and his administration. The president also knows some media outlets will find it more expedient to settle.

    Trump also sued ABC News and George Stephanopoulos for stating that the president had been found liable for “rape” in a civil case [Trump was found liable for sexually abusing and defaming E. Jean Carroll]. ABC parent Disney settled that claim by agreeing to donate $15 million to the Trump Presidential Library.

    Following the attack on the U.S. Capitol on Jan. 6, 2021, Facebook blocked President Trump’s account. Trump sued Meta, and after the president’s victory in 2024 Meta settled and agreed to pay Trump $25 million: $22 million would go to his presidential library, and the rest to legal fees. Meta CEO Mark Zuckerberg also announced Facebook and Instagram would get rid of fact-checkers and rely instead on reader-submitted “community notes” to debunk disinformation on the social media platform.

    Brendan Carr, the president’s pick to run the Federal Communications Commission (FCC), has pledged to “dismantle the censorship cartel and restore free speech rights for everyday Americans.” But on January 22, 2025, the FCC reopened complaints against ABC, CBS and NBC over their coverage of the 2024 election. The previous FCC chair had dismissed the complaints as attacks on the First Amendment and an attempt to weaponize the agency for political purposes.

    According to Reuters, the complaints call for an investigation into how ABC News moderated the pre-election TV debate between Trump and Harris, and appearances of then-Vice President Harris on 60 Minutes and on NBC’s “Saturday Night Live.”

    Since then, the FCC has opened investigations into NPR and PBS, alleging that they are breaking sponsorship rules. The Center for Democracy & Technology (CDT), a think tank based in Washington, D.C., noted that the FCC is also investigating KCBS in San Francisco for reporting on the location of federal immigration authorities.

    “Even if these investigations are ultimately closed without action, the mere fact of opening them – and the implicit threat to the news stations’ license to operate – can have the effect of deterring the press from news coverage that the Administration dislikes,” the CDT’s Kate Ruane observed.

    Trump has repeatedly threatened to “open up” libel laws, with the goal of making it easier to sue media organizations for unfavorable coverage. But this week, the U.S. Supreme Court declined to hear a challenge brought by Trump donor and Las Vegas casino magnate Steve Wynn to overturn the landmark 1964 decision in New York Times v. Sullivan, which insulates the press from libel suits over good-faith criticism of public figures.

    The president also has insisted on picking which reporters and news outlets should be allowed to cover White House events and participate in the press pool that trails the president. He barred the Associated Press from the White House and Air Force One over their refusal to call the Gulf of Mexico by another name.

    And the Defense Department has ordered a number of top media outlets to vacate their spots at the Pentagon, including CNN, The Hill, The Washington Post, The New York Times, NBC News, Politico and National Public Radio.

    “Incoming media outlets include the New York Post, Breitbart, the Washington Examiner, the Free Press, the Daily Caller, Newsmax, the Huffington Post and One America News Network, most of whom are seen as conservative or favoring Republican President Donald Trump,” Reuters reported.

    FREEDOM OF SPEECH

    Shortly after Trump took office again in January 2025, the administration began circulating lists of hundreds of words that government staff and agencies shall not use in their reports and communications.

    The Brookings Institution notes that in moving to comply with this anti-speech directive, federal agencies have purged countless taxpayer-funded data sets from a swathe of government websites, including data on crime, sexual orientation, gender, education, climate, and global development.

    The New York Times reports that in the past two months, hundreds of terabytes of digital resources analyzing data have been taken off government websites.

    “While in many cases the underlying data still exists, the tools that make it possible for the public and researchers to use that data have been removed,” The Times wrote.

    On Jan. 27, Trump issued a memo (PDF) that paused all federally funded programs pending a review of those programs for alignment with the administration’s priorities. Among those was ensuring that no funding goes toward advancing “Marxist equity, transgenderism, and green new deal social engineering policies.”

    According to the CDT, this order is a blatant attempt to force government grantees to cease engaging in speech that the current administration dislikes, including speech about the benefits of diversity, climate change, and LGBTQ issues.

    “The First Amendment does not permit the government to discriminate against grantees because it does not like some of the viewpoints they espouse,” the CDT’s Ruane wrote. “Indeed, those groups that are challenging the constitutionality of the order argued as much in their complaint, and have won an injunction blocking its implementation.”

    On January 20, the same day Trump issued an executive order on free speech, the president also issued an executive order titled “Reevaluating and Realigning United States Foreign Aid,” which froze funding for programs run by the U.S. Agency for International Development (USAID). Among those were programs designed to empower civil society and human rights groups, journalists and others responding to digital repression and Internet shutdowns.

    According to the Electronic Frontier Foundation (EFF), this includes many freedom technologies that use cryptography, fight censorship, protect freedom of speech, privacy and anonymity for millions of people around the world.

    “While the State Department has issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies,” the EFF wrote about the USAID disruptions. “As a result, many of these projects have to stop or severely curtail their work, lay off talented workers, and stop or slow further development.”

    On March 14, the president signed another executive order that effectively gutted the U.S. Agency for Global Media (USAGM), which oversees or funds media outlets including Radio Free Europe/Radio Liberty and Voice of America (VOA). The USAGM also oversees Radio Free Asia, which supporters say has been one of the most reliable tools used by the government to combat Chinese propaganda.

    But this week, U.S. District Court Judge Royce Lamberth, a Reagan appointee, temporarily blocked USAGM’s closure by the administration.

    “RFE/RL has, for decades, operated as one of the organizations that Congress has statutorily designated to carry out this policy,” Lamberth wrote in a 10-page opinion. “The leadership of USAGM cannot, with one sentence of reasoning offering virtually no explanation, force RFE/RL to shut down — even if the President has told them to do so.”

    FREEDOM OF RELIGION

    The Trump administration rescinded a decades-old policy that instructed officers not to take immigration enforcement actions in or near “sensitive” or “protected” places, such as churches, schools, and hospitals.

    That directive was immediately challenged in a case brought by a group of Quakers, Baptists and Sikhs, who argued the policy reversal was keeping people from attending services for fear of being arrested on civil immigration violations. On Feb. 24, a federal judge agreed and blocked ICE agents from entering churches or targeting migrants nearby.

    The president’s executive order allegedly addressing antisemitism came with a fact sheet that described college campuses as “infested” with “terrorists” and “jihadists.” Multiple faith groups expressed alarm over the order, saying it attempts to weaponize antisemitism and promote “dehumanizing anti-immigrant policies.”

    The president also announced the creation of a “Task Force to Eradicate Anti-Christian Bias,” to be led by Attorney General Pam Bondi. Never mind that Christianity is easily the largest faith in America and that Christians are well-represented in Congress.

    The Rev. Paul Brandeis Raushenbush, a Baptist minister and head of the progressive Interfaith Alliance, issued a statement accusing Trump of hypocrisy in claiming to champion religion by creating the task force.

    “From allowing immigration raids in churches, to targeting faith-based charities, to suppressing religious diversity, the Trump Administration’s aggressive government overreach is infringing on religious freedom in a way we haven’t seen for generations,” Raushenbush said.

    A statement from Americans United for Separation of Church and State said the task force could lead to religious persecution of those with other faiths.

    “Rather than protecting religious beliefs, this task force will misuse religious freedom to justify bigotry, discrimination, and the subversion of our civil rights laws,” said Rachel Laser, the group’s president and CEO.

    Where is President Trump going with all these blatant attacks on the First Amendment? The president has made no secret of his affection for autocratic leaders and “strongmen” around the world, and he is particularly enamored with Hungary’s far-right Prime Minister Viktor Orbán, who has visited Trump’s Mar-a-Lago resort twice in the past year.

    A March 15 essay in The Atlantic by Hungarian investigative journalist András Pethő recounts how Orbán rose to power by consolidating control over the courts, and by building his own media universe while simultaneously placing a stranglehold on the independent press.

    “As I watch from afar what’s happening to the free press in the United States during the first weeks of Trump’s second presidency — the verbal bullying, the legal harassment, the buckling by media owners in the face of threats — it all looks very familiar,” Pethő wrote. “The MAGA authorities have learned Orbán’s lessons well.”


    Cory Doctorow: Why I don’t like AI art

    Norman Rockwell’s ‘self portrait.’ All the Rockwell faces have been replaced with HAL 9000 from Kubrick’s ‘2001: A Space Odyssey.’ His signature has been modified with a series of rotations and extra symbols. He has ten fingers on his one visible hand.

    This week on my podcast, I read Why I don’t like AI art, a column from last week’s Pluralistic newsletter:

    Which brings me to art. As a working artist in his third decade of professional life, I’ve concluded that the point of art is to take a big, numinous, irreducible feeling that fills the artist’s mind, and attempt to infuse that feeling into some artistic vessel – a book, a painting, a song, a dance, a sculpture, etc – in the hopes that this work will cause a loose facsimile of that numinous, irreducible feeling to manifest in someone else’s mind.

    Art, in other words, is an act of communication – and there you have the problem with AI art. As a writer, when I write a novel, I make tens – if not hundreds – of thousands of tiny decisions that are in service to this business of causing my big, irreducible, numinous feeling to materialize in your mind. Most of those decisions aren’t even conscious, but they are definitely decisions, and I don’t make them solely on the basis of probabilistic autocomplete. One of my novels may be good and it may be bad, but one thing it definitely is, is rich in communicative intent. Every one of those microdecisions is an expression of artistic intent.


    MP3

    (Image: Cryteria, CC BY 3.0, modified)

    Planet Debian: Steinar H. Gunderson: It's always the best ones that die first

    Berge Schwebs Bjørlo, aged 40, died on March 4th in an avalanche together with his friend Ulf, while on winter holiday.

    When writing about someone who recently died, it is common to make lists. Lists of education, of where they worked, on projects they did.

    But Berge wasn't common. Berge was an outlier. A paradox, even.

    Berge was one of my closest friends; someone who always listened, someone you could always argue with (“I'm a pacifist, but I'm aware that this is an extreme position”) but could rarely be angry at. But if you ask around, you'll see many who say similar things; how could someone be so close to so many at the same time?

    Berge had running jokes going on 20 years or more. Many of them would be related to his background from Bergen; he'd often talk about “the un-central east” (aka Oslo), yet had to admit at some point that he actually started liking the city. Or about his innate positivity (“I'm in on everything but suicide and marriage!”). I know a lot of people have described his humor as dry, but I found him anything but. Just a free flow of living.

    He lived his life in free software, but rarely in actually writing code; I don't think I've seen a patch from him, and only the occasional bug report. Instead, he would spend his time guiding others; he spent a lot of time in PostgreSQL circles, helping people with installation or writing queries or chiding them for using an ORM (“I don't understand why people love to make life so hard for themselves”) or just discussing life, love and everything. Somehow, some people's legacy is just the number of others they touched, and Berge touched everyone he met. Kindness is not something we do well in the free software community, but somehow, it came natural to him. I didn't understand until after he died why he was so chronically bad at reading backlog and hard to get hold of; he was interacting with so many people, always in the present and never caring much about the past.

    I remember that Berge once visited my parents' house, and was greeted by our dog, who after a pat promptly went back to relaxing lazily on the floor. “Awh! If I were a dog, that's the kind of dog I'd be.” In retrospect, for someone who lived a lot of his life at 300 km/h (at times quite literally), it was an odd thing to say, but it was just one of those paradoxes.

    Berge loved music. He'd argue for intensely political punk, but would really consume everything with great enthusiasm and interest. One of the last albums I know he listened to was Thomas Dybdahl's “… that great October sound”:

    Tear us in different ways but leave a thread throughout the maze
    In case I need to find my way back home
    All these decisions make for people living without faith
    Fumbling in the dark nowhere to roam

    Dreamweaver
    I'll be needing you tomorrow and for days to come
    Cause I'm no daydreamer
    But I'll need a place to go if memory fails me & let you slip away

    Berge wasn't found by a lazy dog. He was found by Shane, a very good dog.

    Somehow, I think he would have approved of that, too.

    Picture of Berge

    Planet Debian: Dirk Eddelbuettel: RcppSpdlog 0.0.21 on CRAN: New Upstream

    Version 0.0.21 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library written by Gabi Melman with all the bells and whistles you would want, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
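    For anyone who has not used spdlog before, here is a minimal C++ sketch of the underlying logging API (plain spdlog with its bundled fmt-style formatting); this is my own illustration, not an example shipped with the package:

    // Minimal spdlog sketch: header-only logging with fmt-style formatting.
    // Assumes the spdlog headers are on the include path, e.g. as bundled here.
    #include <spdlog/spdlog.h>

    int main() {
        spdlog::set_level(spdlog::level::debug);          // show debug and above
        spdlog::info("starting run with {} workers", 4);  // fmt interpolation
        spdlog::debug("intermediate value: {:.3f}", 3.14159);
        spdlog::warn("shutting down early");
        return 0;
    }

    The R-level and C++-level wrappers that RcppSpdlog adds sit on top of calls like these; see the package documentation site for the interfaces it actually exports.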

    This release updates the code to the version 1.15.2 of spdlog which was released this weekend as well.

    The NEWS entry for this release follows.

    Changes in RcppSpdlog version 0.0.21 (2025-03-30)

    • Upgraded to upstream release spdlog 1.15.2 (including fmt 11.1.4)

    Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    David Brin: And yet-more news from (or about) Spaaaaaace!

    NOTE: I offer a bit of a riff about the rarity of science - not just on Earth but possibly across the cosmos - at the end. 

    We are gradually trying to resume 'normal' life after our family suffered a 'disruption' in our living arrangements that has left us frazzled, with little time for blog updates. But things are a bit better now, so here is... a roundup of recent* space news and updates.

    *(Well, 'recent' as of when these postings were actually drafted, in January, before we realized how crazy things were gonna get!)

    == Heading for the moon ==

    Sending landers to the lunar surface: In mid-January, a SpaceX Falcon 9 rocket launched two commercial landers - Firefly Aerospace's Blue Ghost lander and Japan's ispace's Resilience lander - to the moon. 

    The landers contain scientific instruments to analyze the lunar regolith and magnetosphere, and set up a moon-based global navigation system, laying the groundwork for future lunar missions.

    *As of March 30... well... any space junkie knows how it went.


    == Rogue planets all over! ==

    One of the imperfectly insufficient (by itself) but substantially plausible theories for the Great Silence or “Fermi Paradox” (terrible name) is that interstellar travel… even at just 10% of light speed… is made very difficult by a minefield of hidden obstacles.  No, I am not talking about my short story “Crystal Spheres.”  But rather, these would be rogue planets that are untethered from stars. Every year we find they are more common in the galaxy.

    For example, the infrared-sensitive Webb Telescope has found hundreds… down to Saturn size, just in the Orion Nebula, alone! Forty-two of them are in binary pairs. Wow. Implicit: billions of free-floating planets in the darkness between the stars.

    One more incredible accomplishment by this fantastic instrument that this fantastic, scientific civilization created, in our steady and accelerating progress as apprentices in the Laboratory of Creation! 


    And yet some ignore the almost (or actual) theological significance of these incredible accomplishments (Robots roaming Mars! New human-made life forms! The new skills to save this beautiful world from … ourselves!) Okay, grad students in Creation’s Lab should respect those who clutch the Kindergarten text given to illiterate shepherds. Fine. 


    But those who wage all-out war vs science are clearly the real heretics, here.


    See more incredible Webb Wonders!  A way-kewl podcast from Fraser Cain



    == Monitoring Methane Emissions ==


    Among the worst criminals alive today are those who are deliberately venting methane into the atmosphere. After GOP Congresses deliberately canceled or slashed the satellites to track down vents and Trump delayed them, we now, at last, have the policing tools. A satellite that measures methane leaks from oil and gas companies is set to start circling the Earth 15 times a day next month. Google plans to have the data mapped by the end of the year for the whole world to see. (Thanks Sergey.)

    Methane is a potent greenhouse gas estimated to be responsible for nearly a third of human-caused global warming. Scientists say slashing methane emissions is one of the fastest ways to slow the climate crisis because methane has roughly 80 times the warming power of carbon dioxide over a 20-year period. Though farming is the largest source of methane emissions from human activities, the energy sector is a close second. Oil, gas, and coal operations are thought to account for 40% of global methane emissions from human activities. The IEA says focusing on the energy sector should be a priority, in part because reducing methane leaks is cost-effective. Leaking gas can be captured and sold, and the technology to do that is relatively cheap.

    Two new methane-detecting satellites - Carbon Mapper and MethaneSAT/EDF - are now surveying the planet's climate. Because the Biden admin pushed through the quality methane satellites, the information will be so widely seen that members of the public will be able to act on their own - even despite a suborned EPA and Justice dept. A case where the right may be bitten by the 'market/consumer alternative to government' that they have long raved about.


    == Dark comets, Dwarf galaxies - and Dark Matter ==

    If I had followed my original scientific path – not lured away by the likes of you telling me to write more scifi – I’d likely have been in the mix of these studies of “dark comets,” whose orbits get significantly altered by gassy or dusty emissions, the way it happens with regular, icy comets, but without any visible signs of watery volatiles. “dark comets are different from another intermediary category between asteroids and comets, known as active asteroids, although there may be some overlap. Active asteroids are objects without ice that produce a cloud of dust around them, for a variety of reasons…” 

    Only the Dark Comets – and some include the odd cigar-shaped interstellar visitor ‘Oumuamua' – still have no firm explanation. Though some theories suggest emission of some volatile substance that doesn’t leave an ionized spectral trace.

    The Milky Way’s central (huge) black hole is spinning surprisingly fast and out of orientation with the rest of the galaxy; the reasons remain unknown. Now, data from the Event Horizon Telescope - which first captured the black hole's image in 2022 - has revealed a clue: The Sagittarius A* we see today was born from a cataclysmic merger with another giant black hole billions of years ago.

    Dark matter might not just be the silent partner of the universe—it could be the secret to understanding how supermassive black holes unite in their deadly dance. 


    Attempts to figure out dark matter have pinned hopes on the possibility that the dark… bits… whatever they might be… interact with regular matter in some way – even very slightly – beyond just gravity. At least that’s been the hope of particle physicists with their big machines. So far, the indicators suggest ‘only gravity.’ But this study of nearby anomalous dwarf galaxies hints there might be just a little something more.



    == A couple of final notes about you-know-what ==


    Science is - above all - about chasing down what's true about objective reality, even when the results conflict with your wishes or preconceptions. 


    This human-invented process has led to all of the benefits of enlightenment: unprecedented wealth, comfort, knowledge, safety and - yes - comparative peace... along with our recent ambitions to overcome myriad errors through cheerful exchange of criticism. Errors like prejudicial assumptions about whole classes of people. Errors like mismanaging a fragile planet.  


    Alas, science is a rare phenomenon. Rare across human history and -- given the way that evolution works -- probably rare across the universe. (My own top explanation for the Fermi Paradox, by the way.)


    Across human history, science - and its ancillary arts like equality before law - almost never happened. Instead, people in most societies preferred stories. Incantations about the world, told by their parents and then by priests and by kings.  I know about this, having had successful careers in both science and storytelling. I know the differences and the overlaps very well. 


    While romance and stories are essential to being human, they also can lead directly to horrors and Auschwitz, if they allow evil incantation-spewers to rile up whole populations toward hatred and cauterized hope. 


    Anyone who does not recognize what I just described as THE essential thing now happening across the globe is already lost to reason. 


    Moreover, if the recent trend - reverting human civilization back to 10,000 years of nescient rule by inheritance brats and chanting incantation spinners - does succeed at suppressing the rare era of science, then we'll truly have our answer for why no voices can be heard across the cosmos.


    Cryptogram: Cell Phone OPSEC for Border Crossings

    I have heard stories of more aggressive interrogation of electronic devices at US border crossings. I know a lot about securing computers, but very little about securing phones.

    Are there easy ways to delete data—files, photos, etc.—on phones so it can’t be recovered? Does resetting a phone to factory defaults erase data, or is it still recoverable? That is, does the reset erase the old encryption key, or just sever the password that accesses that key? When the phone is rebooted, are deleted files still available?

    We need answers for both iPhones and Android phones. And it’s not just the US; the world is going to become a more dangerous place to oppose state power.

    Cryptogram: The Signal Chat Leak and the NSA

    US National Security Advisor Mike Waltz, who started the now-infamous group chat coordinating a US attack against the Yemen-based Houthis on March 15, is seemingly now suggesting that the secure messaging service Signal has security vulnerabilities.

    "I didn’t see this loser in the group," Waltz told Fox News about Atlantic editor in chief Jeffrey Goldberg, whom Waltz invited to the chat. "Whether he did it deliberately or it happened in some other technical mean, is something we’re trying to figure out."

    Waltz’s implication that Goldberg may have hacked his way in was followed by a report from CBS News that the US National Security Agency (NSA) had sent out a bulletin to its employees last month warning them about a security "vulnerability" identified in Signal.

    The truth, however, is much more interesting. If Signal has vulnerabilities, then China, Russia, and other US adversaries suddenly have a new incentive to discover them. At the same time, the NSA urgently needs to find and fix any vulnerabilities as quickly as it can—and similarly, ensure that commercial smartphones are free of backdoors—access points that allow people other than a smartphone’s user to bypass the usual security authentication methods to access the device’s contents.

    That is essential for anyone who wants to keep their communications private, which should be all of us.

    It’s common knowledge that the NSA’s mission is breaking into and eavesdropping on other countries’ networks. (During President George W. Bush’s administration, the NSA conducted warrantless taps into domestic communications as well—surveillance that several district courts ruled to be illegal before those decisions were later overturned by appeals courts. To this day, many legal experts maintain that the program violated federal privacy protections.) But the organization has a secondary, complementary responsibility: to protect US communications from others who want to spy on them. That is to say: While one part of the NSA is listening into foreign communications, another part is stopping foreigners from doing the same to Americans.

    Those missions never conflicted during the Cold War, when allied and enemy communications were wholly separate. Today, though, everyone uses the same computers, the same software, and the same networks. That creates a tension.

    When the NSA discovers a technological vulnerability in a service such as Signal (or buys one on the thriving clandestine vulnerability market), does it exploit it in secret, or reveal it so that it can be fixed? Since at least 2014, a US government interagency "equities" process has been used to decide whether it is in the national interest to take advantage of a particular security flaw, or to fix it. The trade-offs are often complicated and hard.

    Waltz—along with Vice President J.D. Vance, Defense Secretary Pete Hegseth, and the other officials in the Signal group—has just made the trade-offs much tougher to resolve. Signal is both widely available and widely used. Smaller governments that can’t afford their own military-grade encryption use it. Journalists, human rights workers, persecuted minorities, dissidents, corporate executives, and criminals around the world use it. Many of these populations are of great interest to the NSA.

    At the same time, as we have now discovered, the app is being used for operational US military traffic. So, what does the NSA do if it finds a security flaw in Signal?

    Previously, it might have preferred to keep the flaw quiet and use it to listen to adversaries. Now, if the agency does that, it risks someone else finding the same vulnerability and using it against the US government. And if it was later disclosed that the NSA could have fixed the problem and didn’t, then the results might be catastrophic for the agency.

    Smartphones present a similar trade-off. The biggest risk of eavesdropping on a Signal conversation comes from the individual phones that the app is running on. While it’s largely unclear whether the US officials involved had downloaded the app onto personal or government-issued phones—although Witkoff suggested on X that the program was on his "personal devices"—smartphones are consumer devices, not at all suitable for classified US government conversations. An entire industry of spyware companies sells capabilities to remotely hack smartphones for any country willing to pay. More capable countries have more sophisticated operations. Just last year, attacks that were later attributed to China attempted to access both President Donald Trump and Vance’s smartphones. Previously, the FBI—as well as law enforcement agencies in other countries—have pressured both Apple and Google to add "backdoors" in their phones to more easily facilitate court-authorized eavesdropping.

    These backdoors would create, of course, another vulnerability to be exploited. A separate attack from China last year accessed a similar capability built into US telecommunications networks.

    The vulnerabilities equities have swung against weakened smartphone security and toward protecting the devices that senior government officials now use to discuss military secrets. That also means that they have swung against the US government hoarding Signal vulnerabilities—and toward full disclosure.

    This is plausibly good news for Americans who want to talk among themselves without having anyone, government or otherwise, listen in. We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

    Because of the Signal chat leak, it’s less likely that they’ll use vulnerabilities in Signal to do that. Equally, bad actors such as drug cartels may also feel safer using Signal. Their security against the US government lies in the fact that the US government shares their vulnerabilities. No one wants their secrets exposed.

    I have long advocated for a "defense dominant" cybersecurity strategy. As long as smartphones are in the pocket of every government official, police officer, judge, CEO, and nuclear power plant operator—and now that they are being used for what the White House now calls "sensitive," if not outright classified conversations among cabinet members—we need them to be as secure as possible. And that means no government-mandated backdoors.

    We may find out more about how officials—including the vice president of the United States—came to be using Signal on what seem to be consumer-grade smartphones, in an apparent breach of the laws on government records. It’s unlikely that they really thought through the consequences of their actions.

    Nonetheless, those consequences are real. Other governments, possibly including US allies, will now have much more incentive to break Signal’s security than they did in the past, and more incentive to hack US government smartphones than they did before March 24.

    For just the same reason, the US government has urgent incentives to protect them.

    This essay was originally published in Foreign Policy.

    Planet Debian: Dirk Eddelbuettel: RcppZiggurat 0.1.8 on CRAN: Build Refinements

    ziggurats

    A new release 0.1.8 of RcppZiggurat is now on the CRAN network for R, following up on the 0.1.7 release last week which was the first release in four and a half years.

    The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others, which provides very fast draws from a Normal (or Exponential) distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure, where the Ziggurat from this package dominates the implementations accessed from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).
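    As a rough illustration of the wrapper-class idea, a C++ sketch along the lines of the Rcpp Gallery write-up might look as follows; the header, namespace and method names (Ziggurat.h, Ziggurat::Ziggurat::Ziggurat, norm()) are recalled from that article and should be treated as assumptions rather than verified package API:

    // Hypothetical sketch: fast standard-normal draws via a Ziggurat wrapper class.
    // Compile from R with Rcpp::sourceCpp(); the names below are assumptions.
    // [[Rcpp::depends(RcppZiggurat)]]
    #include <Rcpp.h>
    #include <Ziggurat.h>

    static Ziggurat::Ziggurat::Ziggurat zigg;    // one generator instance

    // [[Rcpp::export]]
    Rcpp::NumericVector zrnorm_sketch(int n) {
        Rcpp::NumericVector x(n);
        for (int i = 0; i < n; i++) {
            x[i] = zigg.norm();                  // one N(0,1) draw per call
        }
        return x;
    }

    The appeal is that each norm() call avoids the comparatively expensive inversion used by R's default Normal generator, which is broadly where the speed difference in the comparison figure comes from.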

    This release switches the vignette to the standard trick of premaking it as a pdf and including it in a short Sweave document that imports it via pdfpages; this minimizes build-time dependencies on other TeXLive components. It also incorporates a change contributed by Tomas to rely on the system build of the GSL on Windows as well if Rtools 42 or later is found. No other changes.

    The NEWS file entry below lists all changes.

    Changes in version 0.1.8 (2025-03-30)

    • The vignette is now premade and rendered as Rnw via pdfpage to minimize the need for TeXLive package at build / install time (Dirk)

    • Windows builds now use the GNU GSL when Rtools is 42 or later (Tomas Kalibera in #25)

    Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppZiggurat page or the GitHub repository.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    365 Tomorrows: The Unsuitable Girl

    Author: Jessica Pickard Once again Sam asked himself why he was standing here, in this field, miles out of town, staring into an increasingly dusky sky. Well he was here for the money of course. God knows he could use that right now. But he was also here, if he was honest, for the girl, […]

    The post The Unsuitable Girl appeared first on 365tomorrows.

    Planet Debian: Russ Allbery: Review: Cascade Failure

    Review: Cascade Failure, by L.M. Sagas

    Series: Ambit's Run #1
    Publisher: Tor
    Copyright: 2024
    ISBN: 1-250-87126-3
    Format: Kindle
    Pages: 407

    Cascade Failure is a far-future science fiction adventure with a small helping of cyberpunk vibes. It is the first of a (so far) two-book series, and was the author's first novel.

    The Ambit is an old and small Guild ship, not much to look at, but it holds a couple of surprises. One is its captain, Eoan, who is an AI with a deep and insatiable curiosity that has driven them and their ship farther and farther out into the Spiral. The other is its surprisingly competent crew: a battle-scarred veteran named Saint who handles the fighting, and a talented engineer named Nash who does literally everything else. The novel opens with them taking on supplies at Aron Outpost. A supposed Guild deserter named Jalsen wanders into the ship looking for work.

    An AI ship with a found-family crew is normally my catnip, so I wanted to love this book. Alas, I did not.

    There were parts I liked. Nash is great: snarky, competent, and direct. Eoan is a bit distant and slightly more simplistic of a character than I was expecting, but I appreciated the way Sagas put them firmly in charge of the ship and departed from the conventional AI character presentation. Once the plot starts in earnest (more on that in a moment), we meet Anke, the computer hacker, whose charming anxiety reaction is a complete inability to stop talking and who adds some needed depth to the character interactions. There's plenty of action, a plot that makes at least some sense, and a few moments that almost achieved the emotional payoff the author was attempting.

    Unfortunately, most of the story focuses on Saint and Jal, and both of them are irritatingly dense cliches.

    The moment Jal wanders onto the Ambit in the first chapter, the reader is informed that Jal, Saint, and Eoan have a history. The crew of the Ambit spent a year looking for Jal and aren't letting go of him now that they've found him. Jal, on the other hand, clearly blames Saint for something and is not inclined to trust him. Okay, fine, a bit generic of a setup but the writing moved right along and I was curious enough.

    It then takes a full 180 pages before the reader finds out what the hell is going on with Saint and Jal. Predictably, it's a stupid misunderstanding that could have been cleared up with one conversation in the second chapter.

    Cascade Failure does not contain a romance (and to the extent that it hints at one, it's a sapphic romance), but I swear Saint and Jal are both the male protagonist from a certain type of stereotypical heterosexual romance novel. They're both the brooding man with the past, who is too hurt to trust anyone and assumes the worst because he's unable to use his words or ask an open question and then listen to the answer. The first half of this book is them being sullen at each other at great length while both of them feel miserable. Jal keeps doing weird and suspicious things to resolve a problem that would have been far more easily resolved by the rest of the crew if he would offer any explanation at all. It's not even suspenseful; we've read about this character enough times to know that he'll turn out to have a heart of gold and everything will be a misunderstanding. I found it tedious. Maybe people who like slow burn romances with this character type will have a less negative reaction.

    The real plot starts at about the time Saint and Jal finally get their shit sorted out. It turns out to have almost nothing to do with either of them. The environmental control systems of worlds are suddenly failing (hence the book title), and Anke, the late-arriving computer programmer and terraforming specialist, has a rather wild theory about what's happening. This leads to a lot of action, some decent twists, and a plot that felt very cyberpunk to me, although unfortunately it culminates in an absurdly-cliched action climax.

    This book is an action movie that desperately wants to make you feel all the feels, and it worked about as well as that typically works in action movies for me. Jaded cynicism and an inability to communicate are not the ways to get me to have an emotional reaction to a book, and Jal (once he finally starts talking) is so ridiculously earnest that it's like reading the adventures of a Labrador puppy. There was enough going on that it kept me reading, but not enough for the story to feel satisfying. I needed a twist, some depth, way more Nash and Anke and way less of the men, something.

    Everyone is going to compare this book to Firefly, but Firefly had better banter, created more complex character interactions due to the larger and more varied crew, and played the cynical mercenary for laughs instead of straight, all of which suited me better. This is not a bad book, particularly once it gets past the halfway point, but it's not that memorable either, at least for me. If you're looking for a space adventure with heavy action hero and military SF vibes that wants to be about Big Feelings but gets there in mostly obvious ways, you could do worse. If you're looking for a found-family starship crew story more like Becky Chambers, I think you'll find this one a bit too shallow and obvious.

    Not really recommended, although there's nothing that wrong with it and I'm sure other people's experience will differ.

    Followed by Gravity Lost, which I'm unlikely to read.

    Rating: 6 out of 10


    Planet Debian: Dirk Eddelbuettel: tinythemes 0.0.3 at CRAN: Nags

    tinythemes demo

    A second maintenance release of our still young-ish package tinythemes arrived on CRAN today. tinythemes provides the theme_ipsum_rc() function from hrbrthemes by Bob Rudis in a zero (added) dependency way. A simple example (also available as a demo inside the package) contrasts the default style (on the left) with the one added by this package (on the right):

    This version responds solely to things CRAN now nags about. As these are all package quality improvements, we generally oblige happily (and generally fix in the respective package repo when we notice). I am currently on a quest to get most/all of my nags down, so new releases are sometimes the way to go even when not under a ‘deadline’ gun (as with other releases this week).

    The full set of changes since the last release (a little over a year ago) follows.

    Changes in tinythemes version 0.0.3 (2025-03-29)

    • Updated a badge URL in README.md

    • Updated manual pages with proper anchor links

    • Rewrote one example without pipe to not require minimum R version

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Planet Debian: Petter Reinholdtsen: Theora 1.2.0 released

    Following the 1.2.0beta1 release two weeks ago, a final 1.2.0 release of theora was wrapped up today. This new release is tagged in the Xiph gitlab theora instance and you can fetch it from the Theora home page as soon as someone with access finds time to update the web pages. In the meantime, the release tarball is also available as a git build artifact from the CI build of the release tag (the artifact is automatically removed after 14 days).

    The list of changes since the 1.2.0beta1 release, from the CHANGES file in the tarball, looks like this:

    libtheora 1.2.0 (2025 March 29)

    • Bumped minor SONAME versions as oc_comment_unpack() implementation changed.
    • Added example wrapper script encoder_example_ffmpeg (#1601 #2336).
    • Improve comment handling on platforms where malloc(0) return NULL (#2304).
    • Added pragma in example code to quiet clang operator precedence warnings.
    • Adjusted encoder_example help text.
    • Adjusted README, CHANGES, pkg-config and spec files to better reflect current release (#2331 #2328).
    • Corrected english typos in source and build system.
    • Switched http links to https in doc and comments where relevant. Did not touch RFC drafts.

    As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

    365 Tomorrows: The Memory Hour

    Author: John Adinolfi Caleb lived alone, as did Cole. Caleb by circumstance, Cole by choice. Trina had entertained a variety of live-in partners, but all were short associations. She lived alone. Each of their homes was unexceptional, except for sharing an extraordinary view of the Pacific below. Sitting on the edge of a cliff, surrounded […]

    The post The Memory Hour appeared first on 365tomorrows.

    Planet Debian: Reproducible Builds (diffoscope): diffoscope 293 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 293. This version includes the following changes:

    [ Chris Lamb ]
    * Correct import masking issue.
    

    You find out more by visiting the project homepage.

    Planet Debian: Reproducible Builds (diffoscope): diffoscope 292 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 292. This version includes the following changes:

    [ Ivan Trubach ]
    * Ignore st_size entry for directories to avoid spurious diffs as this value
      is essentially filesystem dependent.
    
    [ Chris Lamb ]
    * Update copyright years.
    

    You find out more by visiting the project homepage.


    Planet Debian: Ian Jackson: Rust is indeed woke

    Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars).

    I’m going to argue that Rust, the language, is woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent.

    Community

    The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel, or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is not outstanding in this respect. It certainly has its problems. Many other projects do as well or better.

    And this is well-trodden ground. I have something more interesting to say:

    Technological values - particularly, compared to C/C++

    Rust is woke technology that embodies a woke understanding of what it means to be a programming language.

    Ostensible values

    Let’s start with Rust’s strapline:

    A language empowering everyone to build reliable and efficient software.

    Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small).

    Empowering everyone is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)

    This is all very airy-fairy, but it has concrete consequences:

    Attitude to the programmer’s mistakes

    In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions.

    If you write a bug in your Rust program, Rust doesn’t blame you. Rust asks “how could the compiler have spotted that bug”.

    This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C’s almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault.

    These aren’t just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words:

    Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers.

    Sound familiar?

    The ideology of the hardcore programmer

    Programming has long suffered from the myth of the “rockstar”. Silicon Valley techbro culture loves this notion.

    In reality, though, modern information systems are far too complicated for a single person. Developing systems is a team sport. Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance.

    The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn’t actually work at all, as we can see from the atrocious bugfest that is the Linux kernel.

    These “rockstars” want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn’t important.

    Sound familiar?

    Memory safety as a power struggle

    Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++.

    Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.)

    The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests.

    Sound familiar?

    Memory safety via Rust as a power struggle

    Addressing this problem via Rust is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or be replaced. More broadly, Rust shows that it is practical to write fast, reliable, software, and that this does not need (mythical) “rockstars”.

    So established C programmer “experts” are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem.

    Sound familiar?

    Notes

    This is not a RIIR manifesto

    I’m not saying we should rewrite all the world’s C in Rust. We should not try to do that.

    Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we’re going to need other techniques to deal with all of our existing C. CHERI is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet.

    But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.

    Disclosure

    I first learned C from K&R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I’ve written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults).

    I like Rust because I care that the software I write actually works: I care that my code doesn’t do harm in the world.

    On the meaning of “woke”

    The original meaning of “woke” is something much more specific, to do with racism. For the avoidance of doubt, I don’t think Rust is particularly antiracist.

    I’m using “woke” (like Rust’s opponents are) in the much broader, and now much more prevalent, culture wars sense.

    Pithy conclusion

    If you’re a senior developer who knows only C/C++, doesn’t want their authority challenged, and doesn’t want to have to learn how to write better software, you should hate Rust.

    Also you should be fired.


    Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".




    Planet DebianDirk Eddelbuettel: RcppArmadillo 14.4.1-1 on CRAN: Small Upstream Fix


    Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1236 other packages on CRAN, downloaded 39 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 620 times according to Google Scholar.

    This release brings a small upstream bug fix to the two FFTW3-interfacing functions, something not likely to hit many CRAN packages.

    The changes since the last and fairly recent CRAN release are summarised below.

    Changes in RcppArmadillo version 14.4.1-1 (2025-03-27)

    • Upgraded to Armadillo release 14.4.1 (Filtered Espresso)

      • Fix for fft() and ifft() when using FFTW3 in multi-threaded contexts (such as OpenMP)

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Cryptogram Friday Squid Blogging: Squid Werewolf Hacking Group

    In another rare squid/cybersecurity intersection, APT37 is also known as “Squid Werewolf.”

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Worse Than FailureError'd: Here Comes the Sun

    We got an unusual rash of submissions at Error'd this week. Here are five reasonably good ones chosen not exactly at random. For those few (everyone) who didn't catch the off-by-one from last week's batch, there's the clue.

    "Gotta CAPTCHA 'Em All," puns Alex G. "So do I select them all?" he wondered. I think the correct answer is null.


    "What does a null eat?" wondered B.J.H , "and is one null invited or five?". The first question is easily answered. NaaN, of course. Probably garlic. I would expect B.J. to already know the eating habits of a long-standing companion, so I am guessing that the whole family is not meant to tag along. Stick with just the one.


    Planespotter Rick R. caught this one at the airport. "Watching my daughter's flight from New York and got surprised by Boeing's new supersonic 737 having already arrived in DFW," he observed. I'm not quite sure what went wrong. It's not the most obvious time zone mistake I can imagine, but I'm pretty sure the cure is the same: all times displayed in any context that is not purely restricted to a single location (and short time frame) should explicitly include the relevant timezone.


    Rob H. figures "From my day job's MECM Software Center. It appears that autocorrect has miscalculated, because the internet cannot be calculated." The internet is -1.


    Ending this week on a note of hope, global warrior Stewart may have just saved the planet. "Climate change is solved. We just need to replicate the 19 March performance of my new solar panels." Or perhaps I miscalculated.


    [Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

    365 TomorrowsWhen Next the Fractals Bloom

    Author: Hillary Lyon With a well-worn key in hand, Bonnie unlocked the massive front door of her great-uncle Duran’s house. The place sat unoccupied since his passing; it had taken forever for his will to slog through probate. She’d been his favorite family member, and he, hers. His death made her face her own mortality; […]

    The post When Next the Fractals Bloom appeared first on 365tomorrows.

    xkcdTerror Bird

    Planet DebianJohn Goerzen: Why You Should (Still) Use Signal As Much As Possible

    As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks.

    The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around.

    Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic.

    So let’s dive in. I’ll cover some basics of what security is, what happened in this situation, and why Signal is a good idea.

    This post isn’t for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

    What makes communications secure?

    When most people are talking about secure communications, they mean some combination of these properties:

    1. Privacy - nobody except the intended recipient can decode a message.
    2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
    3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
    4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.

    If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as man in the middle in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can’t really have privacy without authentication.

    I’ll have more to say about these later. For now, let’s discuss attack scenarios.

    What compromises security?

    There are a number of ways that security can be compromised. Let’s think through some of them:

    Communications infrastructure snooping

    Let’s say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?

    • The owner of the coffee shop’s WiFi
    • The coffee shop’s Internet provider
    • The recipient’s Internet provider
    • Any Internet providers along the network between the sender and the recipient
    • Any government or institution that can compel any of the above to hand over copies of the traffic
    • Any hackers that compromise any of the above systems

    Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing.

    Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people’s texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity).

    Also, think about what information is collected from SMS and by who. Texts you send could be retained in your phone, the recipient’s phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone’s retention.

    So defenses against this involve things like:

    • Strong end-to-end encryption, so no intermediate party – even the people that make the app – can snoop on it.
    • Using strong authentication of your peers
    • Taking steps to prevent even app developers from being able to see your contact list or communication history

    You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks.

    When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it – even if they never open it or attempt to peek inside – will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal’s design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

    Device compromise

    Let’s say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn’t take away all of them.

    What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what?

    An even simpler attack doesn’t require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right?

    Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later.

    An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on – but still, it protects against a wide variety of attacks.

    Untrustworthy communication partner

    Perhaps you are sending sensitive information to a contact, but that person doesn’t want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

    Environmental compromise

    Perhaps your device is secure, but a hidden camera still captures what’s on your screen. You can take some steps against things like this, of course.

    Human error

    Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was that a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

    Protecting yourself

    So how can you protect yourself against these attacks? Let’s consider:

    • Use a secure app like Signal that uses strong end-to-end encryption where even the provider can’t access your messages
    • Keep your software and phone up-to-date
    • Be careful about phishing attacks and who you add to chat rooms
    • Be aware of your surroundings; don’t send sensitive messages where people might be looking over your shoulder with their eyes or cameras

    There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your “secure” laptop, it wouldn’t do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article How gapped is your air?)

    But, that approach is hard to use. Many people aren’t familiar with GnuPG. You don’t have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn’t used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier.

    Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available.

    Signal is also open source; you don’t have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it’s not federated, I previously addressed that.

    Government use

    If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised.

    I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal’s ephemerality (laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn’t have been possible.

    This doesn’t mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it.

    And remember - to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

    Conclusion

    Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history?

    I say no. So, go install Signal. It’s the best, most practical tool we have.


    This post is also available on my website, where it may be periodically updated.

    Planet DebianDirk Eddelbuettel: tint 0.1.5 on CRAN: Maintenance

    A new version 0.1.5 of the tint package arrived at CRAN today. tint provides a style ‘not unlike Tufte’ for use in html and pdf documents created from markdown. The github repo shows several examples in its README, more as usual in the package documentation.

    This is the first release in one and a half years and contains only routine maintenance. As CRAN now nags about missing anchors in the common \code{\link{}} use, I added these today. Otherwise the usual mix of updates to continuous integration, to badges and URLs and other small packaging details—but nothing user-facing.

    Changes in tint version 0.1.5 (2025-03-27)

    • Standard package maintenance for continuous integration, URL updates, and packaging conventions

    • Correct two minor nags in the Rd file

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More information is on the tint page. For questions or comments use the issue tracker off the GitHub repo.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, February 2025 (by Roberto C. Sánchez)

    Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

    Debian LTS contributors

    In February, 18 contributors have been paid to work on Debian LTS, their reports are available:

    • Abhijith PA did 10.0h (out of 8.0h assigned and 6.0h from previous period), thus carrying over 4.0h to the next month.
    • Adrian Bunk did 12.0h (out of 0.0h assigned and 63.5h from previous period), thus carrying over 51.5h to the next month.
    • Andrej Shadura did 10.0h (out of 6.0h assigned and 4.0h from previous period).
    • Bastien Roucariès did 20.0h (out of 20.0h assigned).
    • Ben Hutchings did 12.0h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 12.0h to the next month.
    • Chris Lamb did 18.0h (out of 18.0h assigned).
    • Daniel Leidert did 23.0h (out of 20.0h assigned and 6.0h from previous period), thus carrying over 3.0h to the next month.
    • Emilio Pozuelo Monfort did 53.0h (out of 53.0h assigned and 0.75h from previous period), thus carrying over 0.75h to the next month.
    • Guilhem Moulin did 11.0h (out of 3.25h assigned and 16.75h from previous period), thus carrying over 9.0h to the next month.
    • Jochen Sprickerhof did 27.0h (out of 30.0h assigned), thus carrying over 3.0h to the next month.
    • Lee Garrett did 11.75h (out of 9.5h assigned and 44.25h from previous period), thus carrying over 42.0h to the next month.
    • Markus Koschany did 40.0h (out of 40.0h assigned).
    • Roberto C. Sánchez did 7.0h (out of 14.75h assigned and 9.25h from previous period), thus carrying over 17.0h to the next month.
    • Santiago Ruano Rincón did 19.75h (out of 21.75h assigned and 3.25h from previous period), thus carrying over 5.25h to the next month.
    • Sean Whitton did 6.0h (out of 6.0h assigned).
    • Sylvain Beucler did 52.5h (out of 14.75h assigned and 39.0h from previous period), thus carrying over 1.25h to the next month.
    • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
    • Tobias Frost did 17.0h (out of 17.0h assigned).

    Evolution of the situation

    In February, we have released 38 DLAs.

    • Notable security updates:
      • pam-u2f, prepared by Patrick Winnertz, fixed an authentication bypass vulnerability
      • openjdk-17, prepared by Emilio Pozuelo Monfort, fixed an authorization bypass/information disclosure vulnerability
      • firefox-esr, prepared by Emilio Pozuelo Monfort, fixed several vulnerabilities
      • thunderbird, prepared by Emilio Pozuelo Monfort, fixed several vulnerabilities
      • postgresql-13, prepared by Christoph Berg, fixed an SQL injection vulnerability
      • freerdp2, prepared by Tobias Frost, fixed several vulnerabilities
      • openssh, prepared by Colin Watson, fixed a machine-in-the-middle vulnerability

    LTS contributors Emilio Pozuelo Monfort and Santiago Ruano Rincón coordinated the administrative aspects of LTS updates of postgresql-13 and pam-u2f, which were prepared by the respective maintainers, to whom we are most grateful.

    As has become the custom of the LTS team, work is under way on a number of package updates targeting Debian 12 (codename “bookworm”) with fixes for a variety of vulnerabilities. In February, Guilhem Moulin prepared an upload of sssd, while several other updates are still in progress. Bastien Roucariès prepared an upload of krb5 for unstable as well.

    Given the importance of the Debian Security Tracker to the work of the LTS Team, we regularly contribute improvements to it. LTS contributor Emilio Pozuelo Monfort reviewed and merged a change to improve performance, and then dealt with unexpected issues that arose as a result. He also made improvements in the processing of CVEs which are not applicable to Debian.

    Looking to the future (the release of Debian 13, codename “trixie”, and beyond), LTS contributor Santiago Ruano Rincón has initiated a conversation among the broader community involved in the development of Debian. The purpose of the discussion is to explore ways to improve the long term supportability of packages in Debian, specifically by focusing effort on ensuring that each Debian release contains the “best” supported upstream version of packages with a history of security issues.

    Thanks to our sponsors

    Sponsors that joined recently are in bold.

    ,

    LongNowBlaise Agüera y Arcas

    Blaise Agüera y Arcas

    In What is Intelligence?, Blaise Agüera y Arcas, VP, Fellow and CTO of Technology & Society at Google, explores what intelligence really is, and how AI’s emergence is a natural consequence of evolution. Encompassing decades of theory, existing literature, and recent artificial life experiments, Agüera y Arcas’ research argues that certain modern AI systems do indeed have a claim to intelligence, consciousness, and free will.

    This talk is presented as part of a larger project on What is Intelligence?, including a printed book alongside experimental formats which challenge the conventions of academic publishing. It is the inaugural collaborative work of Antikythera, a think tank on the philosophy of technology, and MIT Press, a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.

    Planet DebianScarlett Gately Moore: KDE Snap updates, Kubuntu Beta testing, Life updates!

    Help us Beta test Kubuntu Plucky Puffin!

    Kubuntu work:

    Fixed an issue in apparmor preventing QT6 webengine applications from starting.

    Beta testing!

    KDE Snaps:

    Updated Qt6 to 6.8.2

    Updated KF6 to 6.11.0

    Rolling out 25.04 RC applications! You can find them in the --candidate channel!

    Life:

    I have decided to strike out on my own. I can’t take any more rejections! Honestly, I don’t blame them, I wouldn’t want a one armed engineer either. However, I have persevered and accomplished quite a bit with my one arm! So I have decided to take a leap of faith and with your support for open source work and a resurrected side gig of web development I will survive. If you can help sponsor my work, anything at all, even a dollar! I would be eternally grateful. I have several methods to do so:

    If you want your cool application packaged in a variety of formats please contact me!

    If you want focused help with an annoying bug, please contact me!

    Contact me for any and all kinds of help, if I can’t do it, I will say so.

    Do you need web work? Someone to maintain your website? I can do that too!

    Portfolio

    Thank you all for your support in this new adventure!

    Krebs on SecurityWhen Getting Phished Puts You in Mortal Danger

    Many successful phishing attacks result in a financial loss or malware infection. But falling for some phishing scams, like those currently targeting Russians searching online for organizations that are fighting the Kremlin war machine, can cost you your freedom or your life.

    The real website of the Ukrainian paramilitary group “Freedom of Russia” legion. The text has been machine-translated from Russian.

    Researchers at the security firm Silent Push mapped a network of several dozen phishing domains that spoof the recruitment websites of Ukrainian paramilitary groups, as well as Ukrainian government intelligence sites.

    The website legiohliberty[.]army features a carbon copy of the homepage for the Freedom of Russia Legion (a.k.a. “Free Russia Legion”), a three-year-old Ukraine-based paramilitary unit made up of Russian citizens who oppose Vladimir Putin and his invasion of Ukraine.

    The phony version of that website copies the legitimate site — legionliberty[.]army — providing an interactive Google Form where interested applicants can share their contact and personal details. The form asks visitors to provide their name, gender, age, email address and/or Telegram handle, country, citizenship, experience in the armed forces; political views; motivations for joining; and any bad habits.

    “Participation in such anti-war actions is considered illegal in the Russian Federation, and participating citizens are regularly charged and arrested,” Silent Push wrote in a report released today. “All observed campaigns had similar traits and shared a common objective: collecting personal information from site-visiting victims. Our team believes it is likely that this campaign is the work of either Russian Intelligence Services or a threat actor with similarly aligned motives.”

    Silent Push’s Zach Edwards said the fake Legion Liberty site shared multiple connections with rusvolcorps[.]net. That domain mimics the recruitment page for a Ukrainian far-right paramilitary group called the Russian Volunteer Corps (rusvolcorps[.]com), and uses a similar Google Forms page to collect information from would-be members.

    Other domains Silent Push connected to the phishing scheme include: ciagov[.]icu, which mirrors the content on the official website of the U.S. Central Intelligence Agency; and hochuzhitlife[.]com, which spoofs the Ministry of Defense of Ukraine & General Directorate of Intelligence (whose actual domain is hochuzhit[.]com).

    According to Edwards, there are no signs that these phishing sites are being advertised via email. Rather, it appears those responsible are promoting them by manipulating the search engine results shown when someone searches for one of these anti-Putin organizations.

    In August 2024, security researcher Artem Tamoian posted on Twitter/X about how he received startlingly different results when he searched for “Freedom of Russia legion” in Russia’s largest domestic search engine Yandex versus Google.com. The top result returned by Google was the legion’s actual website, while the first result on Yandex was a phishing page targeting the group.

    “I think at least some of them are surely promoted via search,” Tamoian said of the phishing domains. “My first thread on that accuses Yandex, but apart from Yandex those websites are consistently ranked above legitimate in DuckDuckGo and Bing. Initially, I didn’t realize the scale of it. They keep appearing to this day.”

    Tamoian, a native Russian who left the country in 2019, is the founder of the cyber investigation platform malfors.com. He recently discovered two other sites impersonating the Ukrainian paramilitary groups — legionliberty[.]world and rusvolcorps[.]ru — and reported both to Cloudflare. When Cloudflare responded by blocking the sites with a phishing warning, the real Internet address of these sites was exposed as belonging to a known “bulletproof hosting” network called Stark Industries Solutions Ltd.

    Stark Industries Solutions appeared two weeks before Russia invaded Ukraine in February 2022, materializing out of nowhere with hundreds of thousands of Internet addresses in its stable — many of them originally assigned to Russian government organizations. In May 2024, KrebsOnSecurity published a deep dive on Stark, which has repeatedly been used to host infrastructure for distributed denial-of-service (DDoS) attacks, phishing, malware and disinformation campaigns from Russian intelligence agencies and pro-Kremlin hacker groups.

    In March 2023, Russia’s Supreme Court designated the Freedom of Russia legion as a terrorist organization, meaning that Russians caught communicating with the group could face between 10 and 20 years in prison.

    Tamoian said those searching online for information about these paramilitary groups have become easy prey for Russian security services.

    “I started looking into those phishing websites, because I kept stumbling upon news that someone gets arrested for trying to join [the] Ukrainian Army or for trying to help them,” Tamoian told KrebsOnSecurity. “I have also seen reports [of] FSB contacting people impersonating Ukrainian officers, as well as using fake Telegram bots, so I thought fake websites might be an option as well.”

    Search results showing news articles about people in Russia being sentenced to lengthy prison terms for attempting to aid Ukrainian paramilitary groups.

    Tamoian said reports surface regularly in Russia about people being arrested for trying to carry out an action requested by a “Ukrainian recruiter,” with the courts unfailingly imposing harsh sentences regardless of the defendant’s age.

    “This keeps happening regularly, but usually there are no details about how exactly the person gets caught,” he said. “All cases related to state treason [and] terrorism are classified, so there are barely any details.”

    Tamoian said while he has no direct evidence linking any of the reported arrests and convictions to these phishing sites, he is certain the sites are part of a larger campaign by the Russian government.

    “Considering that they keep them alive and keep spawning more, I assume it might be an efficient thing,” he said. “They are on top of DuckDuckGo and Yandex, so it unfortunately works.”

    Further reading: Silent Push report, Russian Intelligence Targeting its Citizens and Informants.

    Cryptogram AIs as Trusted Third Parties

    This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties:

    Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them.

    When I was writing Applied Cryptography way back in 1993, I talked about human trusted third parties (TTPs). This research postulates that someday AIs could fulfill the role of a human TTP, with added benefits like (1) being able to audit their processing, and (2) being able to delete it and erase their knowledge when their work is done. And the possibilities are vast.

    Here’s a TTP problem. Alice and Bob want to know whose income is greater, but don’t want to reveal their income to the other. (Assume that both Alice and Bob want the true answer, so neither has an incentive to lie.) A human TTP can solve that easily: Alice and Bob whisper their income to the TTP, who announces the answer. But now the human knows the data. There are cryptographic protocols that can solve this. But we can easily imagine more complicated questions that cryptography can’t solve. “Which of these two novel manuscripts has more sex scenes?” “Which of these two business plans is a riskier investment?” If Alice and Bob can agree on an AI model they both trust, they can feed the model the data, ask the question, get the answer, and then delete the model afterwards. And it’s reasonable for Alice and Bob to trust a model with questions like this. They can take the model into their own lab and test it a gazillion times until they are satisfied that it is fair, accurate, or whatever other properties they want.
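
    To make the trusted-third-party interface concrete, here is a minimal illustrative sketch (not from the paper; the names and types are invented for this example). Whatever plays the TTP role — a human, a cryptographic protocol, or a trusted model — receives both private inputs but is relied upon to output only the one-bit answer.

    // Hypothetical sketch of the Alice-and-Bob comparison performed by a trusted third party.
    // The TTP sees both incomes, but its only output is the comparison result, never the inputs.
    static string WhoEarnsMore(decimal aliceIncome, decimal bobIncome)
    {
        if (aliceIncome == bobIncome) return "equal";
        return aliceIncome > bobIncome ? "Alice" : "Bob";
    }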

    The paper contains several examples where an AI TTP provides real value. This is still mostly science fiction today, but it’s a fascinating thought experiment.

    Planet DebianBits from Debian: Viridien Platinum Sponsor of DebConf25


    We are pleased to announce that Viridien has committed to sponsor DebConf25 as a Platinum Sponsor.

    Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future.

    Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

    As a Platinum Sponsor, Viridien is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Viridien contributes to strengthen the community that collaborates on the Debian project from all around the world throughout all of the year.

    Thank you very much, Viridien, for your support of DebConf25!

    Become a sponsor too!

    DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

    DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

    Worse Than FailureA Bracing Way to Start the Day

    Barry rolled into work at 8:30AM to see the project manager waiting at the door, wringing her hands and sweating. She paced a bit while Barry badged in, and then immediately explained the issue:

    Today was a major release of their new features. This wasn't just a mere software change; the new release was tied to major changes to a new product line- actual widgets rolling off an assembly line right now. And those changes didn't work.

    "I thought we tested this," Barry said.

    "We did! And Stu called in sick today!"

    Stu was the senior developer on the project, who had written most of the new code.

    "I talked to him for a few minutes, and he's convinced it's a data issue. Something in the metadata or something?"

    "I'll take a look," Barry said.

    He skipped grabbing a coffee from the carafe and dove straight in.

    Prior to the recent project, the code had looked something like this:

    if (IsProduct1(_productId))
    	_programId = 1;
    else if (IsProduct2(_productId))
    	_programId = 2;
    else if (IsProduct3(_productId))
    	_programId = 3;
    

    Part of the project, however, was about changing the workflow for "Product 3". So Stu had written this code:

    if (IsProduct1(_productId))
    	_programId = 1;
    else if (IsProduct2(_productId))
    	_programId = 2;
    else if (IsProduct3(_productId))
    	_programId = 3;
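    	// note: without braces, only the assignment above is governed by the else if;
    	// the three calls below execute unconditionally, whichever product was matched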
    	DoSomethingProductId3Specific1();
    	DoSomethingProductId3Specific2();
    	DoSomethingProductId3Specific3();
    

    Since this is C# and not Python, it took Barry all of 5 seconds to spot this and figure out what the problem was and fix it:

    if (IsProduct1(_productId))
    {
    	_programId = 1;
    }
    else if (IsProduct2(_productId))
    {
    	_programId = 2;
    }
    else if (IsProduct3(_productId))
    {
    	_programId = 3;
    	DoSomethingProductId3Specific1();
    	DoSomethingProductId3Specific2();
    	DoSomethingProductId3Specific3();
    }
    

    This brings us to about 8:32. Now, given the problems, Barry wasn't about to just push this change- in addition to running pipeline tests (and writing tests that Stu clearly hadn't), he pinged the head of QA to get a tester on this fix ASAP. Everyone worked quickly, and that meant by 9:30 the fix was considered good and ready to be merged in and pushed to production. Sometime in there, while waiting for a pipeline to complete, Barry managed to grab a cup of coffee to wake himself up.

    While Barry was busy with that, Stu had decided that he wasn't feeling that sick after all, and had rolled into the office around 9:00. Which meant that just as Barry was about to push the button to run the release pipeline, an "URGENT" email came in from Stu.

    "Hey, everybody, I fixed that bug. Can we get this released ASAP?"

    Barry went ahead and released the version that he'd already tested, but out of morbid curiosity, went and checked Stu's fix.

    if (IsProduct1(_productId))
    	_programId = 1;
    else if (IsProduct2(_productId))
    	_programId = 2;
    else if (IsProduct3(_productId))
    {
    	_programId = 3;
    }
    
    if (IsProduct3(_productId))
    {
    	DoSomethingProductId3Specific1();
    	DoSomethingProductId3Specific2();
    	DoSomethingProductId3Specific3();
    }
    

    At least this version would have worked, though I'm not sure Stu fully understands what "{}"s mean in C#. Or in most programming languages, if we're being honest.

    With Barry's work, the launch went off just a few minutes later than the scheduled time. Since the launch was successful, at the next company "all hands", the leadership team made sure to congratulate the people instrumental in making it happen: that is to say, the lead developer of the project, Stu.

    [Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

    365 TomorrowsThe Button

    Author: Alastair Millar Mandy was pretty, vivacious, and my next door neighbour; she’d pop round evenings or at weekends while my spouse was at work to swap gossip, recipes and just chat. But Marco didn’t mind – “you’re such a cliché,” he’d say, laughing, “her gay best friend!”. She was smart, too. Occasionally she’d tell […]

    The post The Button appeared first on 365tomorrows.

    ,

    Planet DebianDirk Eddelbuettel: RcppRedis 0.2.5 on CRAN: Fix Bashism in Configure, Maintenance

    A new minor release 0.2.5 of our RcppRedis package arrived on CRAN today. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has been “deployed in production” as a risk / monitoring tool on a trading floor for several years. It also supports pub/sub dissemination of streaming market data as per this earlier example.

    Given the changes around Redis, it is worth stressing that the package works just as well with valkey – and uses only hiredis which remains proper open source.

    This update is again somewhat mechanical, as a few maintenance things bubbled up since the last release in the summer of 2023. As with other packages, continuous integration was updated a few times, as were URLs and badges, and we updated the use of our RApiSerialize. And, just as we did today and yesterday with littler, RQuantLib and RDieHarder, this addresses a nag from CRAN about an implicit bash dependency in configure.ac (which we fixed in January, as for the other packages, but are under deadline now). Last but not least, the newly-added check for ‘forbidden’ symbols in static libraries revealed that yes indeed I had forgotten to set -DNDEBUG when building the embedded hiredis in fallback mode—and I now also converted its four uses of sprintf to snprintf so we are clean there too.

    The detailed changes list follows.

    Changes in version 0.2.5 (2025-03-26)

    • The continuous integration setup was updated several times

    • Badges and URLs in README.md have been updated

    • An updated interface from RApiSerialize is now used, and a versioned dependency on version 0.1.4 or later has been added

    • The DESCRIPTION file now uses Authors@R

    • Two possible bashisms have been converted in configure.ac

    • The (fallback if needed) build of libhiredis.a now sets -DNDEBUG, four uses of sprintf converted to snprintf

    Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page and at the repository and its issue tracker.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Planet DebianDirk Eddelbuettel: RDieHarder 0.2.7 on CRAN: Fix Bashism in Configure, Maintenance

    A new version 0.2.7 of the random-number generator tester RDieHarder (based on the DieHarder suite developed / maintained by Robert Brown with contributions by David Bauer and myself along with other contributors) is now on CRAN.

    This release contains only internal maintenance changes: continuous integration was updated as was a badge URL, and a ‘bashism’ issue in configure.ac was addressed (months ago) but as CRAN now sends NOTEs it triggered this update (just like littler and RQuantLib yesterday).

    Thanks to CRANberries, you can also look at the most recent diff to the previous release.

    If you like this or other open-source work I do, you can now sponsor me at GitHub.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    Cryptogram A Taxonomy of Adversarial Machine Learning Attacks and Mitigations

    NIST just released a comprehensive taxonomy of adversarial machine learning attacks and countermeasures.

    Worse Than FailureRepresentative Line: Time for Identification

    If you need a unique ID, UUIDs provide a variety of options. It's worth noting that versions 1, 2, and 7 all incorporate a timestamp into the UUID. In the case of version 7, this has the benefit of making the UUID sortable, which can be convenient in many cases (v1/v2 incorporate a MAC address which means that they're sortable if generated with the same NIC).

    I bring this up because Dave inherited some code written by a "guru". Said guru was working before UUIDv7 was a standard, but also didn't have any problems that required sortable UUIDs, and thus had no real reason to use timestamp based UUIDs. They just needed some random identifier and, despite using C#, didn't use the UUID functions built in to the framework. No, they instead did this:

    string uniqueID = String.Format("{0:d9}", (DateTime.UtcNow.Ticks / 10) % 1000000000);
    

    A Tick is 100 nanoseconds. We divide that by ten, mod by a billion, and then call that our unique identifier.

    This is, as you might guess, not unique. First there's the possibility of timestamp collisions: generating two of these too close together in time would collide. Second, the math is just complete nonsense. We divide Ticks by ten (converting hundreds of nanoseconds into thousands of nanoseconds), then we mod by a billion. So every thousand seconds we loop and have a risk of collision again?
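
    To see the wrap-around concretely, here is a small hypothetical sketch (the dates are invented for illustration, not taken from the original system): two instants exactly a thousand seconds apart yield the same “unique” identifier, because a billion of those microsecond units is one thousand seconds.

    // Two timestamps 1000 seconds apart produce identical IDs with the guru's formula.
    DateTime t1 = new DateTime(2025, 3, 25, 12, 0, 0, DateTimeKind.Utc);
    DateTime t2 = t1.AddSeconds(1000);
    string id1 = String.Format("{0:d9}", (t1.Ticks / 10) % 1000000000);
    string id2 = String.Format("{0:d9}", (t2.Ticks / 10) % 1000000000);
    // id1 and id2 are equal: 1000 s is 10^10 ticks, 10^9 after the division, so the modulo cancels it.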

    Maybe, maybe, these are short-lived IDs and a thousand seconds is plenty of time. But even if that's true, none of this is a good way to do that.
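
    If all that was actually needed was an opaque random identifier, a minimal sketch using the framework’s built-in type (not what the original code did, just an assumed alternative) would be:

    // A version 4 (random) UUID from the standard library; no timestamp arithmetic, and
    // the collision probability is negligible rather than guaranteed every thousand seconds.
    string uniqueID = Guid.NewGuid().ToString();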

    I suppose the saving grace is they use UtcNow and not Now, thus avoiding situations where collisions also happen because of time zones?

    [Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

    365 TomorrowsIllegal Astralgants

    Author: David C. Nutt I had been working on lucid dreaming off and on for about a year. I never believed the goofier ends of the equation- alternate realities, astral projection, and all that other New Age hooey. All I wanted to do was control my own dream space. Maybe have my own “Grand Theft […]

    The post Illegal Astralgants appeared first on 365tomorrows.

    xkcdRock Identification

    Planet DebianDirk Eddelbuettel: crc32c 0.0.3 on CRAN: Accommodate Unreleased (!!) cmake Version

    A third release of the crc32c package is now on CRAN. The package brings the Google library crc32c to R and offers cyclical checksums with parity in hardware-accelerated form on (recent enough) intel cpus as well as on arm64.

    This release is one hundred percent maintenance. Brian Ripley reached out as he already tests the (still very much unreleased) cmake 4.0.0 release, currently at rc5. And that version is now picky about minimum cmake version statements in CMakeLists.txt. As we copied the upstream one here, with its setting of the jurassic 3.1 version, our build conked out. A simple switch to 3.5..4.0, declaring a ‘from .. to’ scheme with a minimally supported version (here 3.5, released in 2016) up to a tested version works. No other changes, really, besides an earlier helping hand from Jeroen concerning cross-compilation support he needed or encountered (and that happened right after the 0.0.2 release).

    The NEWS entry for this (as well the initial release) follows.

    Changes in version 0.0.3 (2025-03-25)

    • Support cross-compilation by setting CC and CXX in Makevars.in (Jeroen Ooms in #1)

    • Support pre-release 4.0.0 of cmake by moving the minimum stated version from 3.1 to 3.5 per CRAN request, also sent PR upstream

    My CRANberries service provides a comparison to the previous release. The code is available via the GitHub repo, and of course also from its CRAN page and via install.packages("crc32c"). Comments and suggestions are welcome at the GitHub repo.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    ,

    Planet DebianDirk Eddelbuettel: littler 0.3.21 on CRAN: Lots Moar Features!


    The twentysecond release of littler as a CRAN package landed on CRAN just now, following in the now nineteen year history (!!) as a (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later.

    littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only began to do in recent years.

    littler lives on Linux and Unix, has its difficulties on macOS due to some-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.

    This release, the first in almost exactly one year, brings enhancements to six scripts as well as three new ones. Among the new ones, crup.r offers ‘CRan UPloads’ from the command-line, deadliners.r lists CRAN packages by CRAN deadline, and wb.r uploads to win-builder (replacing an older shell script of mine). Among the updated ones, kitten.r now creates more complete DESCRIPTION files in the packages it makes, and several scripts support additional options. A number of changes were made to packaging as well, some of which were contributed by Jon and Michael, which is of course always greatly appreciated. The trigger for the release was, just like for RQuantLib earlier today, a CRAN nag about ‘bashisms’, half of which was actually false as here it was in a comment only. Oh well.

    The full change description follows.

    Changes in littler version 0.3.21 (2025-03-24)

    • Changes in examples scripts

      • Usage text for ciw.r is improved, new options were added (Dirk)

      • The ‘noble’ release is supported by r2u.r (Dirk)

      • The installRub.r script has additional options (Dirk)

      • The ttlt.r script has a new load_package argument (Dirk)

      • A new script deadliners.r showing CRAN packages 'under deadline' has been added, and then refined (Dirk)

      • The kitten.r script can now use whoami and argument githubuser on the different *kitten helpers it calls (Dirk)

      • A new script wb.r can upload to win-builder (Dirk)

      • A new script crup.r can upload a CRAN submission (Dirk)

      • In rcc.r, the return from rcmdcheck is now explicitly printed (Dirk)

      • In r2u.r the dry-run option is passed to the build command (Dirk)

    • Changes in package

      • Regular updates to badges, continuous integration, DESCRIPTION and configure.ac (Dirk)

      • Errant osVersion return values are handled more robustly (Michael Chirico in #121)

      • The current run-time path is available via variable LITTLER_SCRIPT_PATH (Jon Clayden in #122)

      • The cleanup script removes macOS debug symbols (Jon Clayden in #123)

    My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Cryptogram AI Data Poisoning

    Cloudflare has a new feature—available to free users as well—that uses AI to generate random pages to feed to AI web crawlers:

    Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler’s operators that they’ve been detected.

    “When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” writes Cloudflare. “But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.”

    The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts—such as neutral information about biology, physics, or mathematics—to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven).

    It’s basically an AI-generated honeypot. And AI scraping is a growing problem:

    The scale of AI crawling on the web appears substantial, according to Cloudflare’s data that lines up with anecdotal reports we’ve heard from sources. The company says that AI crawlers generate more than 50 billion requests to their network daily, amounting to nearly 1 percent of all web traffic they process. Many of these crawlers collect website data to train large language models without permission from site owners….

    Presumably the crawlers will now have to up both their scraping stealth and their ability to filter out AI-generated content like this. Which means the honeypots will have to get better at detecting scrapers and more stealthy in their fake content. This arms race is likely to go back and forth, wasting a lot of energy in the process.

    Planet DebianDirk Eddelbuettel: RQuantLib 0.4.25 on CRAN: Fix Bashism in Configure

    A new minor release 0.4.25 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too.

    QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN.

    This release of RQuantLib was tickled by a request to remove ‘bashisms’ in shell scripts or, as in my case here, in configure.ac where I used the non-portable form of string comparison. That has of course been there for umpteen years and not bitten anyone, as the default shell for most is in fact bash, but the change has the right idea. And it is of course now mandatory, affecting quite a few packages as I tooted yesterday. The release also contains an improvement to the macOS 14 build kindly contributed by Jeroen.
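
    For readers wondering what such a ‘bashism’ looks like, a typical case is the bash-only == string comparison in a configure-time shell test; the portable POSIX form uses a single =. A small illustrative sketch (the variable name is made up, this is not the actual configure.ac fragment):

    # bash-only comparison, flagged by checkbashisms and now by CRAN
    if [ "$have_quantlib" == "yes" ]; then echo found; fi

    # portable POSIX shell equivalent
    if [ "$have_quantlib" = "yes" ]; then echo found; fi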

    Changes in RQuantLib version 0.4.25 (2025-03-24)

    • Support macOS 14 with a new compiler flag (Jeroen in #190)

    • Correct two bashisms in configure.ac

    One more note, though: this may be the last release I make with Windows support. CRAN now also checks for ‘forbidden’ symbols (such as assert or (s)printf or …) in static libraries, and this release tickled one such warning from the Windows side (which only uses static libraries). I have no desire to get involved in also maintaining QuantLib (no R here) for Windows and may simply turn the package back to OS_type: unix to avoid the hassle. To avoid that, it would be fabulous if someone relying on RQuantLib on Windows could step up and lend a hand looking after that library build.

    Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

    If you like this or other open-source work I do, you can now sponsor me at GitHub.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    365 TomorrowsRear Window

    Author: Majoki Juan Dalderis was the creator of LinkJuice, the uber energy drink of the Internet, the black gold, the Texas Tea of web traffic. He could make or break any web platform or presence. He had the power of a techno god, but his mortal self fell seriously ill. A listeria-tainted cantaloupe left him […]

    The post Rear Window appeared first on 365tomorrows.

    Worse Than FailureRepresentative Line: Tern Down a Date

    Today's anonymous submitter has managed to find a way to do date formatting wrong that I don't think I've seen yet. That's always remarkable. Like most such bad code, it checks string lengths and then adds a leading zero, if needed. It's not surprising, but again, it's all in the details:

    // convert date string to yyyy/MM/DD
    return dtmValue.Year + "-" + ((dtmValue.Month.ToString().Length == 1)?  ("0" + dtmValue.Month.ToString()): dtmValue.Month.ToString()) + "-" + ((dtmValue.Day.ToString().Length == 1)? ("0" + dtmValue.Day.ToString()): dtmValue.Day.ToString());
    

    This is only one line, but it has it all, doesn't it? First, we've got good ol' Hungarian notation, which conveys no useful information here. We've got a comment which tells us the date uses / separators, but the code actually outputs -. We've got ternaries that are definitely not helping readability here, plus repeated calls to ToString() instead of maybe just storing the result in a variable.

    And, for the record, dtmValue.ToString("yyyy-MM-dd") would have done the correct thing.


    Planet DebianOtto Kekäläinen: Debian Salsa CI in Google Summer of Code 2025


    Are you a student aspiring to participate in the Google Summer of Code 2025? Would you like to improve the continuous integration pipeline used at salsa.debian.org, the Debian GitLab instance, to help improve the quality of tens of thousands of software packages in Debian?

    This summer 2025, Emmanuel Arias and I will be participating as mentors in the GSoC program. We are available to mentor students who propose and develop improvements to the Salsa CI pipeline, as we are members of the Debian team that maintains it.

    A post by Santiago Ruano Rincón in the GitLab blog explains what Salsa CI is and its short history since inception in 2018. At the time of the article in fall 2023 there were 9000+ source packages in Debian using Salsa CI. Now in 2025 there are over 27,000 source packages in Debian using it, and since summer 2024 some Ubuntu developers have started using it for enhanced quality assurance of packaging changes before uploading new package revisions to Ubuntu. Personally, I have been using Salsa CI since its inception, and contributing as a team member since 2019. See my blog post about GitLab CI for MariaDB in Debian for a description of an advanced and extensive use case.

    Helping Salsa CI is a great way to make a global impact, as it will help avoid regressions and improve the quality of Debian packages. The benefits reach far beyond just Debian, as it will also help hundreds of Debian derivatives, such as Ubuntu, Linux Mint, Tails, Purism PureOS, Pop!_OS, Zorin OS, Raspberry Pi OS, a large portion of Docker containers, and even the Windows Subsystem for Linux.

    Improving Salsa CI: more features, robustness, speed

    While Salsa CI with contributions from 71 people is already quite mature and capable, there are many ideas floating around about how it could be further extended. For example, Salsa CI issue #147 describes various static analyzers and linters that may be generally useful. Issue #411 proposes using libfaketime to run autopkgtest on arbitrary future dates to test for failures caused by date assumptions, such as the Y2038 issue.

    There are also ideas about making Salsa CI more robust and code easier to reuse by refactoring some of the yaml scripts into independent scripts in #230, which could make it easier to run Salsa CI locally as suggested in #169. There are also ideas about improving the Salsa CI’s own CI to avoid regressions from pipeline changes in #318.

    The CI system is also better when it’s faster, and some speed improvement ideas have been noted in #412.

    Improvements don’t have to be limited to changes in the pipeline itself. A useful project would also be to update more Debian packages to use Salsa CI, and ensure they adopt it in an optimal way as noted in #416. It would also be nice to have a dashboard with statistics about all public Salsa CI pipeline runs as suggested in #413.

    These and more ideas can be found in the issue list by filtering for tags Newcomer, Nice-To-Have or Accepting MRs. A Google Summer of Code proposal does not have to be limited to these existing ideas. Participants are also welcome to propose completely novel ideas!

    Good time to also learn Debian packaging

    Anyone working with a Debian team should also take the opportunity to learn Debian packaging, and contribute to the packaging or maintenance of 1-2 packages in parallel to improving the Salsa CI. All Salsa CI team members are also Debian Developers who can mentor and sponsor uploads to Debian.

    Maintaining a few packages is a great way to eat your own cooking and experience Salsa CI from the user perspective, and likely to make you better at Salsa CI development.

    Apply now!

    The contributor applications opened yesterday on March 24, so to participate act now! If you are an eligible student and want to attend, head over to summerofcode.withgoogle.com to learn more.

    There are over a thousand participating organizations, with Debian, GitLab and MariaDB being some examples. Within these organizations there may be multiple subteams and projects to choose from. The full list of participating Debian projects can be found in the Debian wiki.

    If you are interested in GSoC for Salsa CI specifically, feel free to

    1. Reach out to me and Emmanuel by email at otto@ and eamanu@ (debian.org).
    2. Sign up at salsa.debian.org for an account (note it takes a few days due to manual vetting and approval process)
    3. Read the project README, STRUCTURE and CONTRIBUTING to get a developer’s overview
    4. Participate in issue discussions at https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/

    Note that you don’t have to wait for GSoC to officially start to contribute. In fact, it may be useful to start immediately by submitting a Merge Request to do some small contribution, just to learn the process and to get more familiar with how everything works, and the team maintaining Salsa CI. Looking forward to seeing new contributors!

    ,

    Cryptogram More Countries are Demanding Backdoors to Encrypted Apps

    Last month, I wrote about the UK forcing Apple to break its Advanced Data Protection encryption in iCloud. More recently, both Sweden and France are contemplating mandating backdoors. Both initiatives are attempting to scare people into supporting backdoors, which are—of course—a terrible idea.

    Also: “A Feminist Argument Against Weakening Encryption.”

    Cryptogram Report on Paragon Spyware

    Citizen Lab has a new report on Paragon’s spyware:

    Key Findings:

    • Introducing Paragon Solutions. Paragon Solutions was founded in Israel in 2019 and sells spyware called Graphite. The company differentiates itself by claiming it has safeguards to prevent the kinds of spyware abuses that NSO Group and other vendors are notorious for.
    • Infrastructure Analysis of Paragon Spyware. Based on a tip from a collaborator, we mapped out server infrastructure that we attribute to Paragon’s Graphite spyware tool. We identified a subset of suspected Paragon deployments, including in Australia, Canada, Cyprus, Denmark, Israel, and Singapore.
    • Identifying a Possible Canadian Paragon Customer. Our investigation surfaced potential links between Paragon Solutions and the Canadian Ontario Provincial Police, and found evidence of a growing ecosystem of spyware capability among Ontario-based police services.
    • Helping WhatsApp Catch a Zero-Click. We shared our analysis of Paragon’s infrastructure with Meta, who told us that the details were pivotal to their ongoing investigation into Paragon. WhatsApp discovered and mitigated an active Paragon zero-click exploit, and later notified over 90 individuals who it believed were targeted, including civil society members in Italy.
    • Android Forensic Analysis: Italian Cluster. We forensically analyzed multiple Android phones belonging to Paragon targets in Italy (an acknowledged Paragon user) who were notified by WhatsApp. We found clear indications that spyware had been loaded into WhatsApp, as well as other apps on their devices.
    • A Related Case of iPhone Spyware in Italy. We analyzed the iPhone of an individual who worked closely with confirmed Android Paragon targets. This person received an Apple threat notification in November 2024, but no WhatsApp notification. Our analysis showed an attempt to infect the device with novel spyware in June 2024. We shared details with Apple, who confirmed they had patched the attack in iOS 18.
    • Other Surveillance Tech Deployed Against The Same Italian Cluster. We also note 2024 warnings sent by Meta to several individuals in the same organizational cluster, including a Paragon victim, suggesting the need for further scrutiny into other surveillance technology deployed against these individuals.

    Planet DebianJonathan McDowell: Who pays the cost of progress in software?

    I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% Project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt.

    My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it’s on you to do the migration, or at the very least provide significant assistance to those who own the code. You don’t just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else’s problem).

    There are pluses and minuses to both approaches. Users having to drive the changes to things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high priority work for folk in order to adapt.

    I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we’re closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It’s not quite as extreme as rolling out a new service, because it’s unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors.

    A good example of this is toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about a heads up before the default changes, but it’s still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and its maintainer hasn’t managed to catch everyone who might be affected, so by the time it’s discovered it’s release critical, because at least one package no longer builds in unstable.

    Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it’s easier to track the interconnections between different components. Debian doesn’t have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I’m going to have to figure out what changed elsewhere to cause it. However he’s just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem.

    I don’t know if I have a point to this post. I think it’s probably that I wish folk in Free Software would try and be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don’t then allow a tardy developer to block progress.

    Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2025)

    The following contributors got their Debian Developer accounts in the last two months:

    • Bo Yu (vimer)
    • Maytham Alsudany (maytham)
    • Rebecca Natalie Palmer (mpalmer)

    The following contributors were added as Debian Maintainers in the last two months:

    • NoisyCoil
    • Arif Ali
    • Julien Plissonneau Duquène
    • Maarten Van Geijn
    • Ben Collins

    Congratulations!

    Planet DebianSimon Josefsson: Reproducible Software Releases

    Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:

    • Release artifacts should be built in a way that can be reproduced by others
    • It should be possible to build a project from a source tarball that doesn’t contain any generated or vendor files (e.g., in the style of git-archive).

    While implementing these ideas for a small project was accomplished within weeks – see my announcement of Libntlm version 1.8 – addressing this in complex projects uncovered concerns with tools that had to be addressed, and things stalled for many months pending that work.

    I had the notion that these two goals were easy and shouldn’t be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally.

    I’m now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too.

    What have the obstacles so far been to make this happen? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed.

    First let’s look at the problems we need to solve to make “git-archive” style tarballs usable:

    Version Handling

    To build usable binaries from a minimal tarball, the build needs to know which version number it is. Traditionally this information was stored inside configure.ac in git. However I use gnulib’s git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive tarball. My solution to this was to make use of the export-subst feature of the .gitattributes file. I store the file .tarball-version-git in git containing the magic cookie like this:

    $Format:%(describe)$

    With this, git-archive will replace the cookie with a useful version identifier on export, see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen script was enhanced to read this information, see the gnulib patch. This is invoked by ./configure to figure out which version number the package is for.
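
    For completeness, the export-subst behaviour is enabled through a matching entry in .gitattributes; a minimal sketch of the two pieces, using the file name from above:

    # mark the file so that git archive expands the $Format:...$ cookie on export
    echo '.tarball-version-git export-subst' >> .gitattributes
    git add .gitattributes .tarball-version-git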

    Translations

    We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation project when running ./bootstrap, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data, the tools just download and place whatever wget downloaded into your source tree (printf-style injection attack anyone?). We could improve this (e.g., publish GnuPG signed translations messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example. The translation project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed. Instead I reverted back to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into ./bootstrap and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach to store all directly downloaded po/*.po files directly as po/*.po.in and make the ./bootstrap tool move them in place, see the libidn2 commit followed by the actual ‘make update-po’ commit with all the translations where one essential step is:

    # Prime po/*.po from fall-back copy stored in git.
    for poin in po/*.po.in; do
        po=$(echo $poin | sed 's/.in//')
        test -f $po || cp -v $poin $po
    done
    ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

    Fetching vendor files like gnulib

    Most build dependencies are in the shape of “You need a C compiler”. However some come in the shape of “source-code files intended to be vendored”, and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4 macros from the GNU Autoconf Archive. However I’m not confident that the solution for all vendor files must be the same. For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap took forever since there were five gnulib instances in use, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on ./bootstrap to fetch a gnulib git clone when building. I like this model, however it doesn’t work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I’ve made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don’t think that is sufficient as a generic solution though, it is mostly applicable to building old releases that use old gnulib files. It won’t work when building from CI/CD pipelines, for example, where I have settled to use a crude way of fetching and unpacking a particular gnulib snapshot, see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap. Essentially this is doing:

    GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
    wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
    gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
    rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
    export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
    ./bootstrap --no-git
    ./configure
    make

    Test the git-archive tarball

    This goes without saying, but if you don’t test that building from a git-archive style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive tarball leads to a usable build.
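
    A rough sketch of such a check, independent of any particular CI system (the project name, the tag, and the bootstrap/configure steps are placeholders in the style of the GNU projects discussed here):

    # produce a minimal git-archive style tarball for a tag and try to build from it
    git archive --prefix=project-v1.2.3/ -o project-v1.2.3-src.tar.gz v1.2.3
    tar xfz project-v1.2.3-src.tar.gz && cd project-v1.2.3
    # gnulib and other vendored inputs must be made available as discussed above
    ./bootstrap --no-git && ./configure && make && make check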

    Mission Accomplished

    So that wasn’t hard, was it? You should now be able to publish a minimal git-archive tarball and users should be able to build your project from it.

    I recommend naming these archives as PROJECT-vX.Y.Z-src.tar.gz, replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory named PROJECT-vX.Y.Z/ containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with v) and contains a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc all seem to use their own slightly incompatible variants.

    Let’s go on to see what is needed to achieve reproducible “make dist” source tarballs. These are the release artifacts that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?

    Build dependencies causing different generated content

    The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different outputs. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes. My approach meanwhile is to build things using similar environments, and compare the outputs for differences. I’ve found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the differences only gives me the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.

    Timestamps

    Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, and I set it to the modification timestamp of the NEWS file in the repository (which I in turn set to the date of the latest git commit, see below) like this:

    dist-hook: po-CreationDate-to-mtime-NEWS
    .PHONY: po-CreationDate-to-mtime-NEWS
    po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
      $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
        if test -f "$$p"; then \
          $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
          if cmp $$p $$p.tmp > /dev/null; then \
            rm -f $$p.tmp; \
          else \
            mv $$p.tmp $$p; \
          fi \
        fi \
      done

    Similarly, I set a predictable modification time of the texinfo source file like this:

    dist-hook: mtime-NEWS-to-git-HEAD
    .PHONY: mtime-NEWS-to-git-HEAD
    mtime-NEWS-to-git-HEAD:
      $(AM_V_GEN)if test -e $(srcdir)/.git \
                    && command -v git > /dev/null; then \
        touch -m -t "$$(git log -1 --format=%cd \
          --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
      fi

    However I’ve realized that this needs to happen earlier and probably has to be run during ./configure time, because the doc/version.texi file is generated on first build before running ‘make dist‘ and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking.

    The method to address these differences isn’t really important, and they change over time depending on preferences. What is important is that the differences are eliminated.

    ChangeLog

    Traditionally ChangeLog files were manually prepared, and still are for some projects. I maintain git2cl but recently I’ve settled with gnulib’s gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file depending on how deep it was cloned. For Libntlm I simply disabled use of generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full “make dist” source archives from a minimal “git-archive” source archive. However for other projects I’ve settled with a middle ground. I realized that for ‘git describe’ to produce reproducible outputs, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled with the following recipe to produce ChangeLogs covering all changes since the last release.

    dist-hook: gen-ChangeLog
    .PHONY: gen-ChangeLog
    gen-ChangeLog:
      $(AM_V_GEN)if test -e $(srcdir)/.git; then			\
        LC_ALL=en_US.UTF-8 TZ=UTC0					\
        $(top_srcdir)/build-aux/gitlog-to-changelog			\
           --srcdir=$(srcdir) --					\
           v$(PREV_VERSION)~.. > $(distdir)/cl-t &&			\
           { printf '\n\nSee the source repo for older entries\n'	\
             >> $(distdir)/cl-t &&					\
             rm -f $(distdir)/ChangeLog &&				\
             mv $(distdir)/cl-t $(distdir)/ChangeLog; }		\
      fi

    I’m undecided about the usefulness of generated ChangeLog files within ‘make dist’ archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility of this in case we lose all copies of the upstream git repositories. I can also sympathize with the view that the concept of ChangeLog files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.

    Long-term reproducible trusted build environment

    Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (guix time-machine) that has bootstrappable properties for additional confidence. However I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD Pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it, by a C# program, with the source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else’s C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and likely fails to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations) or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix that resolved this.
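
    As an illustration of that property, a pinned Guix environment can be re-created later from a channels file; a minimal sketch (channels.scm, the package list and the build commands are placeholders, not the actual Libidn recipe):

    # re-enter the environment pinned by channels.scm and build in a container there
    guix time-machine -C channels.scm -- \
        shell --container gcc-toolchain autoconf automake libtool gettext -- \
        sh -c './bootstrap && ./configure && make dist'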

    Embedded images in Texinfo manual

    For Libidn one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point, it was possible to post-process the PDF outputs with grep to remove some timestamps, however with compression this is no longer possible and actually the grep command I used resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and tool called dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with automake’s texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that could be condensed into:

    info_TEXINFOS = libidn.texi
    libidn_TEXINFOS += libidn-components.png
    imagesdir = $(infodir)
    images_DATA = libidn-components.png
    EXTRA_DIST += components.dot
    DISTCLEANFILES = \
      libidn-components.eps libidn-components.png libidn-components.pdf
    libidn-components.eps: $(srcdir)/components.dot
      $(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
      $(AM_V_at)! grep %%CreationDate $@.tmp
      $(AM_V_at)mv $@.tmp $@
    libidn-components.pdf: $(srcdir)/components.dot
      $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
    # A simple sed on CreationDate is no longer possible due to compression.
    # 'exiftool -CreateDate' is alternative to 'gs', but adds ~4kb to file.
    # Ghostscript add <1kb.  Why can't 'dot' avoid setting CreationDate?
      $(AM_V_at)printf '[ /ModDate ()\n  /CreationDate ()\n  /DOCINFO pdfmark\n' > pdfmarks
      $(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
      $(AM_V_at)rm -f $@.tmp pdfmarks
      $(AM_V_at)mv $@.tmp2 $@
    libidn-components.png: $(srcdir)/components.dot
      $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
      $(AM_V_at)mv $@.tmp $@
    pdf-recursive: libidn-components.pdf
    dvi-recursive: libidn-components.eps
    ps-recursive: libidn-components.eps
    info-recursive: $(top_srcdir)/.version libidn-components.png

    Surely this can be improved, but I’m not yet certain in what way is the best one forward. I like having a text representation as the source of the image. I’m sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using exiftool -CreateDate as an alternative to GhostScript, but using it to remove the timestamp added ~4kb to the file size and naturally I was appalled by this ignorance of impending doom.

    Test reproducibility of tarball

    Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I’ve settled with a small GitLab CI/CD pipeline job that performs bit-by-bit comparison of generated ‘make dist’ archives. It also performs bit-by-bit comparison of generated ‘git-archive’ artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job which essentially is:

    0-compare:
      image: alpine:latest
      stage: repro
      needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
      script:
      - cd out
      - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
      - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
      - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
    # Confirm modern git-archive tarball reproducibility
      - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
      - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
    # Confirm old git-archive (export-subst but long git describe) tarball reproducibility
      - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
    # Confirm really old git-archive (no export-subst) tarball reproducibility
      - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
    # Confirm 'make dist' generated tarball reproducibility
      - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
      - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
      - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
      - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
      - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
      - cmp b-guix/*.tar.gz r-guix/*.tar.gz
    # Confirm 'make dist' from git-archive tarball reproducibility
      - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz

    Notice that I discovered that ‘git archive’ outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in the way that all SHA256 checksums of generated tarballs are included, for example the libidn2 v2.3.8 job log:

    $ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
    368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-trisquel11/libidn2-2.3.8.tar.gz
    368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-ubuntu2204/libidn2-2.3.8.tar.gz
    59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-debian11/libidn2-2.3.8.tar.gz
    59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-pureos10/libidn2-2.3.8.tar.gz
    5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-trisquel10/libidn2-2.3.8.tar.gz
    5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-ubuntu2004/libidn2-2.3.8.tar.gz
    7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-almalinux8/libidn2-2.3.8.tar.gz
    7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-rockylinux8/libidn2-2.3.8.tar.gz
    8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-clang/libidn2-2.3.8.tar.gz
    8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-debian12/libidn2-2.3.8.tar.gz
    8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-devuan5/libidn2-2.3.8.tar.gz
    8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-gcc/libidn2-2.3.8.tar.gz
    8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  r-debian12/libidn2-2.3.8.tar.gz
    acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b  r-ubuntu2404/libidn2-2.3.8.tar.gz
    cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-almalinux9/libidn2-2.3.8.tar.gz
    cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-rockylinux9/libidn2-2.3.8.tar.gz
    f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  b-guix/libidn2-2.3.8.tar.gz
    f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  r-guix/libidn2-2.3.8.tar.gz

    I’m sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0 helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy Happy Hacking in the course of practicing this!

    Worse Than FailureRepresentative Line: The Rounding Error

    At one point, someone noticed that some financial transactions weren't summing up correctly in the C# application Nancy supported. It didn't require Superman or a Peter Gibbons to figure out why: someone was using floating points for handling dollar amounts.

    That kicked off a big refactoring project to replace the usage of double types with decimal types. Everything seemed to go well, at least until there was a network hiccup and the application couldn't connect to the database. Let's see if you can figure out what happened:

    MessageBox.Show("Please decimal check the connection details. Also check firewall settings (port 1433) and network connectivity.");
    

    What a clbuttic mistake.


    365 TomorrowsPragmatic

    Author: Julian Miles, Staff Writer “Walk with me.” The tall being turns away from Nohane, sweeping it’s cloak out of the way with a graceful, flowing move. Nohane sighs. These trivial, effortless competences are what betray the elder of elders no matter how they try to disguise themselves. It is as if of all the […]

    The post Pragmatic appeared first on 365tomorrows.

    xkcdSawStart

    Planet DebianArnaud Rebillout: Buid container images with buildah/podman in GitLab CI

    Oh no, it broke again!

    Today, this .gitlab-ci.yml file no longer works in GitLab CI:

    build-container-image:
      stage: build
      image: debian:testing
      before_script:
        - apt-get update
        - apt-get install -y buildah ca-certificates
      script:
        - buildah build -t $CI_REGISTRY_IMAGE .
    

    The command buildah build ... fails with this error message:

    STEP 2/3: RUN  apt-get update
    internal:0:0-0: Error: Could not process rule: No such file or directory
    internal:0:0-0: Error: Could not process rule: No such file or directory
    error running container: did not get container start message from parent: EOF
    Error: building at STEP "RUN apt-get update": setup network: netavark: nftables error: nft did not return successfully while applying ruleset
    

    After some investigation, it's caused by the recent upload of netavark 1.14.0-2. In this version, netavark switched from iptables to nftables as the default firewall driver. That doesn't really fly on GitLab SaaS shared runners.

    For the complete background, refer to https://discussion.fedoraproject.org/t/125528. Note that the issue with GitLab was reported back in November, but at this point the conversation had died out.

    Fortunately, it's easy to work around: we can tell netavark to keep using iptables via the environment variable NETAVARK_FW. The .gitlab-ci.yml file above becomes:

    build-container-image:
      stage: build
      image: debian:testing
      variables:
        # Cf. https://discussion.fedoraproject.org/t/125528/7
        NETAVARK_FW: iptables
      before_script:
        - apt-get update
        - apt-get install -y buildah ca-certificates
      script:
        - buildah build -t $CI_REGISTRY_IMAGE .
    

    And everything works again!

    If you're interested in this issue, feel free to fork https://gitlab.com/arnaudr/gitlab-build-container-image and try it by yourself.

    ,

    Planet DebianPeter Pentchev: Ringlet software updates (2025-03-23)

    Ringlet software updates (2025-03-23)

    Recent initial releases of Ringlet software (a fancy name for my pet projects):

    • docker-scry version 0.1.0 - examine Docker containers using host tools (the general idea is sketched after this list). Maybe the start of a set of tools that will allow system administrators to see what goes on in minimal containers that may not even have tools like ps or lsof installed.
    • pshlex version 0.1.0 - join various stringifiable objects and quote them for the shell. A trivial Python function that I've embedded in many of my projects over the years and I finally decided to release: a version of shlex.join() that also accepts pathlib.Path objects.
    • uvoxen version 0.1.1 - generate test configuration files and run tests. A testing tool for Python projects that can either generate a Tox configuration file or run the com...
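
    The general trick behind that kind of container-inspection tool, sketched below with plain nsenter rather than docker-scry's actual interface (the container name and the inspected command are placeholders), is to run host binaries inside a selected namespace of the target container:

    # find the container's init PID, then examine its network namespace with the host's ss
    PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
    sudo nsenter --target "$PID" --net ss -tlnp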

    Cory DoctorowThere were always enshittifiers

    This week on my podcast, I read my latest Locus Magazine column, “There Were Always Enshittifiers,” about the historical context for my latest novel, Picks and Shovels:

    It used to be a much fairer fight. It used to be that if a com­pany figured out how to block copying its floppies, another company – or even just an individual tinkerer – could figure out how to break that “copy protection.” There were plenty of legitimate reasons to want to do this: Maybe you owned more than one computer, or maybe you were just worried that your floppy disk would degrade to the point of unread­ability. That’s a very reasonable fear: Floppies were notoriously unreliable, and every smart computer user learned to make frequent backups against the day that your computer presented you with the dread DISK ERROR message.

    In those early days, it was an arms race between companies that wanted to control how their customers used their own computers, and the technological guerrillas who produced the countermeasures that restored command over your computer to you, its owner. It’s true that the companies making the “copy protection” (in scare quotes because the way you protect your data is by making copies of it) typically had far more resources than the toolsmiths who were defending technology users.


    MP3

    365 TomorrowsStrong Coffee

    Author: Daniel Rogers “Victor, make coffee and display the weather.” I sank into my kitchen chair, scratching my messed-up mop of hair, wishing I’d gone to bed earlier. “You failed to obtain the recommended eight hours of sleep. It would be beneficial to have a cup of strong coffee.” “No, please. You know I don’t […]

    The post Strong Coffee appeared first on 365tomorrows.

    ,

    Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.7 on CRAN: New Generators, Many Updates

    [Figure: ziggurats]

    A new release 0.1.7 of RcppZiggurat is now on the CRAN network for R. This marks the first release in four and a half years.

    The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

    This release brings a number of changes. Notably, based on the work we did with the new package zigg (more on that in a second), we now also expose the Exponential generator, and the underlying Uniform generator. Otherwise many aspects of the package have been refreshed: updated builds, updated links, updated CI processes, more use of DOIs and more. The other big news is zigg which should now be the preference for deployment of Ziggurat due to its much lighter-weight and zero-dependency setup.

    The NEWS file entry below lists all changes.

    Changes in version 0.1.7 (2025-03-22)

    • The CI setup was updated to use run.sh from r-ci (Dirk).

    • The windows build was updated to GSL 2.7, and UCRT support was added (Jeroen in #16).

    • Manual pages now use JSS DOIs for references per CRAN request

    • README.md links and badges have been updated

    • Continuous integration actions have been updated several times

    • The DESCRIPTION file now uses Authors@R as mandated

    • Use of multiple cores is eased via a new helper function reflecting option mc.core or architecture defaults, used in tests

    • An inline function has been added to avoid a compiler nag

    • Support for exponential RNG draws zrexp has been added, the internal uniform generator is now also exposed via zruni

    • The vignette bibliography has been updated, and switched to DOIs

    • New package zigg is now mentioned in DESCRIPTION and vignette

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the Rcppziggurat page or the GitHub repository.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Planet DebianLuke Faraone: I'm running for the OSI board... maybe

    The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

    In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

    Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

    I was dismayed when I received the following mail from Nick Vidal:

    Dear Luke,

    Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

    We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

    Best regards,
    OSI Election Teams

    Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

    The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

    I was not able to participate in the "potential board director" info sessions accordingly, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate TZ's of everyone. This seems in sharp contrast with the above policy. 

    I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

    Upd, N.B.: to people writing about this, I use they/them pronouns

    Planet DebianAntoine Beaupré: Losing the war for the free internet

    Warning: this is a long ramble I wrote after an outage of my home internet. You'll get your regular scheduled programming shortly.

    I didn't realize this until relatively recently, but we're at war.

    Fascists and capitalists are trying to take over the world, and it's bringing utter chaos.

    We're more numerous than them, of course: this is only a handful of people screwing everyone else over, but they've accumulated so much wealth and media control that it's getting really, really hard to move around.

    Everything is surveilled: people are carrying tracking and recording devices in their pockets at all time, or they drive around in surveillance machines. Payments are all turning digital. There's cameras everywhere, including in cars. Personal data leaks are so common people kind of assume their personal address, email address, and other personal information has already been leaked.

    The internet itself is collapsing: most people are using the network only as a channel to reach a "small" set of "hyperscalers": mind-bogglingly large datacenters that don't really operate like the old internet. Once you reach the local endpoint, you're not on the internet anymore. Netflix, Google, Facebook (Instagram, Whatsapp, Messenger), Apple, Amazon, Microsoft (Outlook, Hotmail, etc), all those things are not really the internet anymore.

    Those companies operate over the "internet" (as in the TCP/IP network), but they are not an "interconnected network" as much as their own, gigantic silos so much bigger than everything else that they essentially dictate how the network operates, regardless of standards. You access it over "the web" (as in "HTTP") but the fabric is not made of interconnected links that cross sites: all those sites are trying really hard to keep you captive on their platforms.

    Besides, you think you're writing an email to the state department, for example, but you're really writing to Microsoft Outlook. That app your university or border agency tells you to install, the backend is not hosted by those institutions, it's on Amazon. Heck, even Netflix is on Amazon.

    Meanwhile I've been operating my own mail server first under my bed (yes, really) and then in a cupboard or the basement for almost three decades now. And what for?

    So I can tell people I can? Maybe!

    I guess the reason I'm doing this is the same reason people are suddenly asking me about the (dead) mesh again. People are worried and scared that the world has been taken over, and they're right: we have gotten seriously screwed.

    It's the same reason I keep doing radio, minimally know how to grow food, ride a bike, build a shed, paddle a canoe, archive and document things, talk with people, host an assembly. Because, when push comes to shove, there's no one else who's going to do it for you, at least not the way that benefits the people.

    The Internet is one of humanity's greatest accomplishments. Obviously, oligarchs and fascists are trying to destroy it. I just didn't expect the tech bros to be flipping to that side so easily. I thought we were friends, but I guess we are, after all, enemies.

    That said, that old internet is still around. It's getting harder to host your own stuff at home, but it's not impossible. Mail is tricky because of reputation, but it's also tricky in the cloud (don't get fooled!), so it's not that much easier (or cheaper) there.

    So there's things you can do, if you're into tech.

    Share your wifi with your neighbours.

    Build a LAN. Throw a wire over to your neighbour too, it works better than wireless.

    Use Tor. Run a relay, a snowflake, a webtunnel.

    Host a web server. Build a site with a static site generator and throw it in the wind.

    Download and share torrents, and why not a tracker.

    Run an IRC server (or Matrix, if you want to federate and lose high availability).

    At least use Signal, not Whatsapp or Messenger.

    And yes, why not, run a mail server, join a mesh.

    Don't write new software, there's plenty of that around already.

    (Just kidding, you can write code, cypherpunk.)

    You can do many of those things just by setting up a FreedomBox.

    That is, after all, the internet: people doing their own thing for their own people.

    Otherwise, it's just like sitting in front of the television and watching the ads. Opium of the people, like the good old time.

    Let a billion droplets build the biggest multitude of clouds that will storm over this world and rip apart this fascist conspiracy.

    Disobey. Revolt. Build.

    We are more than them.

    Planet DebianAntoine Beaupré: Minor outage at Teksavvy business

This morning, internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), for which I pay $100 per month for a 250/50 Mbps business package with a static IP address, on which I run, well, everything: email services, this website, etc.

    Mitigation

    Email

    The main problem when the service goes down like this for prolonged outages is email. Mail is pretty resilient to failures like this but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.
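
For what it's worth, the retry window is mostly a sender-side setting; stock Postfix, for example, keeps retrying deferred mail for five days before bouncing, though other MTAs and big providers vary. A minimal sketch of the relevant knobs (the values shown are the upstream defaults, not a recommendation):

# /etc/postfix/main.cf (sketch; upstream defaults)
# keep retrying deferred mail for up to five days before bouncing
maximal_queue_lifetime = 5d
bounce_queue_lifetime = 5d
# rescan the deferred queue every five minutes
queue_run_delay = 300s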

Last time, I set up VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done.

I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the services that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

    Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet.

It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to do during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script.
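
For reference, a server-less run is roughly a one-liner; a minimal sketch (module layout and paths here are illustrative, not my actual tree):

# apply the local manifests without talking to a Puppet server
sudo puppet apply --modulepath ./modules --hiera_config ./hiera.yaml manifests/site.pp
# add --noop first to preview what would change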

    Heck, it's not even deployed yet. But the hard part / grunt work is done.

    Other

    The outage was "short" enough (5 hours) that I didn't take time to deploy the other mitigations I had deployed in the previous incident.

    But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I endure such problems more gracefully.
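
A minimal sketch of that idea with nginx on an always-up VM (hostnames and paths are illustrative): cache what the home server serves, and keep answering from stale copies when it is unreachable.

# cache storage for the proxied site
proxy_cache_path /var/cache/nginx keys_zone=home:10m max_size=1g inactive=7d;

server {
    listen 80;
    server_name www.example.org;               # illustrative
    location / {
        proxy_pass https://home.example.net;   # the origin at home, illustrative
        proxy_cache home;
        proxy_cache_valid 200 10m;
        # serve stale content when the origin is down or timing out
        proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
    }
}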

Side note on proper services

    Typically, I tend to think of a properly functioning service as having four things:

    1. backups
    2. documentation
    3. monitoring
    4. automation
    5. high availability

    Yes, I miscounted. This is why you have high availability.

    Backups

    Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious joe can't get to.

    This is harder than you think.

    Documentation

    I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:

    • disaster recovery (includes backups, probably)
    • playbook
    • install/upgrade procedures (see automation)

    You probably know this is hard, and this is why you're not doing it. Do it anyways, you'll think it sucks, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

    Monitoring

If you don't have monitoring, you'll know it fails too late, and you won't know it recovers. Consider high availability, work hard to reduce noise, and don't have machines wake people up: that's literally torture and is against the Geneva Convention.

Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up".
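
Prometheus can express exactly that kind of rule with predict_linear(); a minimal sketch (the metric name assumes node_exporter, and the thresholds are arbitrary):

groups:
  - name: capacity
    rules:
      - alert: DiskFullIn2Weeks
        # linear extrapolation of the last 6h of data, 14 days into the future
        expr: predict_linear(node_filesystem_avail_bytes{fstype!="tmpfs"}[6h], 14 * 24 * 3600) < 0
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} {{ $labels.mountpoint }} predicted to fill within two weeks"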

    This is harder than you think.

    Automation

    Make it easy to redeploy the service elsewhere.

    Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways.

    This also means you can do unit tests on your configuration, otherwise you're building legacy.

    This is probably as hard as you think.

    High availability

    Make it not fail when one part goes down.

Eliminate single points of failure.

    This is easier than you think, except for storage and DNS (which, I guess, means it's harder than you think too).

    Assessment

    In the above 5 items, I check two:

    1. backups
    2. documentation

And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15-year backlog to catch up on).

    I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

    Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector with kind of catastrophic lateral movement if the Puppet server gets compromised.

    And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well that escalated quickly).

    Resolution

    In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

    Timeline

    Times are in UTC-4.

    • 6:52: IRC bouncer goes offline
    • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
    • 9:54: outage apparently detected by TSI
    • 11:00: no response, tried calling back support again
    • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
    • 12:08: TPA monitoring notices service restored
    • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

    365 TomorrowsHoneycomb Dreams

    Author: Julie Zack “Starlight, Starbright, First star I see tonight, Wish I may, Wish I might, Have this wish, I wish tonight.” Enid loved when her older sister, Tracy, spoke the words at bedtime. “Do you remember the stars?” Enid asked. “I do,” Tracy said, looking somehow both happy and sad. Enid couldn’t understand the […]

    The post Honeycomb Dreams appeared first on 365tomorrows.

    ,

    Krebs on SecurityArrests in Tap-to-Pay Scheme Powered by Phishing

    Authorities in at least two U.S. states last week independently announced arrests of Chinese nationals accused of perpetrating a novel form of tap-to-pay fraud using mobile devices. Details released by authorities so far indicate the mobile wallets being used by the scammers were created through online phishing scams, and that the accused were relying on a custom Android app to relay tap-to-pay transactions from mobile devices located in China.

    Image: WLVT-8.

    Authorities in Knoxville, Tennessee last week said they arrested 11 Chinese nationals accused of buying tens of thousands of dollars worth of gift cards at local retailers with mobile wallets created through online phishing scams. The Knox County Sheriff’s office said the arrests are considered the first in the nation for a new type of tap-to-pay fraud.

    Responding to questions about what makes this scheme so remarkable, Knox County said that while it appears the fraudsters are simply buying gift cards, in fact they are using multiple transactions to purchase various gift cards and are plying their scam from state to state.

    “These offenders have been traveling nationwide, using stolen credit card information to purchase gift cards and launder funds,” Knox County Chief Deputy Bernie Lyon wrote. “During Monday’s operation, we recovered gift cards valued at over $23,000, all bought with unsuspecting victims’ information.”

    Asked for specifics about the mobile devices seized from the suspects, Lyon said “tap-to-pay fraud involves a group utilizing Android phones to conduct Apple Pay transactions utilizing stolen or compromised credit/debit card information,” [emphasis added].

    Lyon declined to offer additional specifics about the mechanics of the scam, citing an ongoing investigation.

    Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said there aren’t many valid use cases for Android phones to transmit Apple Pay transactions. That is, he said, unless they are running a custom Android app that KrebsOnSecurity wrote about last month as part of a deep dive into the operations of China-based phishing cartels that are breathing new life into the payment card fraud industry (a.k.a. “carding”).

    How are these China-based phishing groups obtaining stolen payment card data and then loading it onto Google and Apple phones? It all starts with phishing.

    If you own a mobile phone, the chances are excellent that at some point in the past two years it has received at least one phishing message that spoofs the U.S. Postal Service to supposedly collect some outstanding delivery fee, or an SMS that pretends to be a local toll road operator warning of a delinquent toll fee.

    These messages are being sent through sophisticated phishing kits sold by several cybercriminals based in mainland China. And they are not traditional SMS phishing or “smishing” messages, as they bypass the mobile networks entirely. Rather, the missives are sent through the Apple iMessage service and through RCS, the functionally equivalent technology on Google phones.

    People who enter their payment card data at one of these sites will be told their financial institution needs to verify the small transaction by sending a one-time passcode to the customer’s mobile device. In reality, that code will be sent by the victim’s financial institution in response to a request by the fraudsters to link the phished card data to a mobile wallet.

    If the victim then provides that one-time code, the phishers will link the card data to a new mobile wallet from Apple or Google, loading the wallet onto a mobile phone that the scammers control. These phones are then loaded with multiple stolen wallets (often between 5-10 per device) and sold in bulk to scammers on Telegram.

    An image from the Telegram channel for a popular Chinese smishing kit vendor shows 10 mobile phones for sale, each loaded with 5-7 digital wallets from different financial institutions.

    Merrill found that at least one of the Chinese phishing groups sells an Android app called “Z-NFC” that can relay a valid NFC transaction to anywhere in the world. The user simply waves their phone at a local payment terminal that accepts Apple or Google pay, and the app relays an NFC transaction over the Internet from a phone in China.

    “I would be shocked if this wasn’t the NFC relay app,” Merrill said, concerning the arrested suspects in Tennessee.

    Merrill said the Z-NFC software can work from anywhere in the world, and that one phishing gang offers the software for $500 a month.

    “It can relay both NFC enabled tap-to-pay as well as any digital wallet,” Merrill said. “They even have 24-hour support.”

    On March 16, the ABC affiliate in Sacramento (ABC10), Calif. aired a segment about two Chinese nationals who were arrested after using an app to run stolen credit cards at a local Target store. The news story quoted investigators saying the men were trying to buy gift cards using a mobile app that cycled through more than 80 stolen payment cards.

    ABC10 reported that while most of those transactions were declined, the suspects still made off with $1,400 worth of gift cards. After their arrests, both men reportedly admitted that they were being paid $250 a day to conduct the fraudulent transactions.

    Merrill said it’s not unusual for fraud groups to advertise this kind of work on social media networks, including TikTok.

    A CBS News story on the Sacramento arrests said one of the suspects tried to use 42 separate bank cards, but that 32 were declined. Even so, the man still was reportedly able to spend $855 in the transactions.

    Likewise, the suspect’s alleged accomplice tried 48 transactions on separate cards, finding success 11 times and spending $633, CBS reported.

    “It’s interesting that so many of the cards were declined,” Merrill said. “One reason this might be is that banks are getting better at detecting this type of fraud. The other could be that the cards were already used and so they were already flagged for fraud even before these guys had a chance to use them. So there could be some element of just sending these guys out to stores to see if it works, and if not they’re on their own.”

    Merrill’s investigation into the Telegram sales channels for these China-based phishing gangs shows their phishing sites are actively manned by fraudsters who sit in front of giant racks of Apple and Google phones that are used to send the spam and respond to replies in real time.

    In other words, the phishing websites are powered by real human operators as long as new messages are being sent. Merrill said the criminals appear to send only a few dozen messages at a time, likely because completing the scam takes manual work by the human operators in China. After all, most one-time codes used for mobile wallet provisioning are generally only good for a few minutes before they expire.

    For more on how these China-based mobile phishing groups operate, check out How Phished Data Turns Into Apple and Google Wallets.

    The ashtray says: You’ve been phishing all night.

    Cryptogram Friday Squid Blogging: A New Explanation of Squid Camouflage

    New research:

    An associate professor of chemistry and chemical biology at Northeastern University, Deravi’s recently published paper in the Journal of Materials Chemistry C sheds new light on how squid use organs that essentially function as organic solar cells to help power their camouflage abilities.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Cryptogram My Writings Are in the LibGen AI Training Corpus

    The Atlantic has a search tool that allows you to search for specific works in the “LibGen” database of copyrighted works that Meta used to train its AI models. (The rest of the article is behind a paywall, but not the search tool.)

    It’s impossible to know exactly which parts of LibGen Meta used to train its AI, and which parts it might have decided to exclude; this snapshot was taken in January 2025, after Meta is known to have accessed the database, so some titles here would not have been available to download.

    Still…interesting.

    Searching my name yields 199 results: all of my books in different versions, plus a bunch of shorter items.

    Cryptogram NCSC Releases Post-Quantum Cryptography Timeline

The UK’s National Cyber Security Centre (part of GCHQ) released a timeline—also see their blog post—for migration to quantum-computer-resistant cryptography.

    It even made The Guardian.

    Planet DebianJamie McClelland: AI's Actual Impact

    Two years after OpenAI launched ChatGPT 3.5, humanity is not on the cusp of extinction and Elon Musk seems more responsible for job loss than any AI agent.

However, ask any web administrator and you will learn that large language models are having a significant impact on the world wide web (or, for a less technical account, see Forbes articles on bots). At May First, a membership organization that has been supporting thousands of web sites for over 20 years, we have never seen anything like this before.

    It started in 2023. Web sites that performed quite well with a steady viewership started having traffic spikes. These were relatively easy to diagnose, since most of the spikes came from visitors that properly identified themselves as bots, allowing us to see that the big players - OpenAI, Bing, Google, Facebook - were increasing their efforts to scrape as much content from web sites as possible.

Small brochure sites were mostly unaffected because they could be scraped in a matter of minutes. But large sites with an archive of high-quality, human-written content were getting hammered. Any web site with a search feature, a calendar, or any other interface that generated an explosion of followable links was particularly vulnerable.

    But hey, that’s what robots.txt is for, right? To tell robots to back off if you don’t want them scraping your site?
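
For reference, asking crawlers to back off looks roughly like this (the bot names and paths are only examples, and Crawl-delay is honoured by some crawlers but not all):

# robots.txt — a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: *
Crawl-delay: 10
Disallow: /search
Disallow: /calendar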

Eventually, the cracks began to show. Bots were ignoring robots.txt (did they ever pay that much attention to it in the first place?). Furthermore, rate limiting requests by user agent also began to fail. When you post a link on Facebook, a bot identifying itself as “facebookexternalhit” is invoked to preview the page so it can show a picture and other meta data. We don’t want to rate limit that bot, right? Except, Facebook is also using this bot to scrape your site, often bringing your site to its knees. And don’t get me started on TwitterBot.
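
Such user-agent rate limiting looks roughly like this nginx sketch (names and rates are illustrative), and it is exactly what breaks down once the same agent string is used for both legitimate previews and bulk scraping:

# classify requests by user agent, then throttle only the matching buckets per client IP
map $http_user_agent $bot_key {
    default                                    "";                    # empty key: not limited
    ~*(facebookexternalhit|Twitterbot|GPTBot)  $binary_remote_addr;
}

limit_req_zone $bot_key zone=bots:10m rate=1r/s;

server {
    location / {
        limit_req zone=bots burst=10 nodelay;
    }
}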

    Eventually, it became clear that the majority of the armies of bots scraping our sites have completely given up on identifying themselves as bots and are instead using user agents indistinguishable from regular browsers. By using thousands of different IP addresses, it has become really hard to separate the real humans from the bots.

    Now what?

    So, no, unfortunately, your web site is not suddenly getting really popular. And, you are blessed with a whole new set of strategic decisions.

    Fortunately, May First has undergone a major infrastructure transition, resulting in centralized logging of all web sites and a fleet of web proxy servers that intercept all web traffic. Centralized logging means we can analyze traffic and identify bots more easily, and a web proxy fleet allows us to more easily implement rules across all web sites.

    However, even with all of our latest changes and hours upon hours of work to keep out the bots, our members are facing some hard decisions about maintaining an open web.

One member of May First provides Google translations of their web site into every language available. But wow, that is now a disaster because instead of having every bot under the sun scraping all 843 (a made up number) pieces of unique content on their site, the same bots are scraping 843 * (number of available languages) pieces of content on their site. Should they stop providing this translation service in order to ensure people can access their site in the site’s primary language?

    Should web sites turn off their search features that include drop down options of categories to prevent bots from systematically refreshing the search page with every possible combination of search terms?

    Do we need to alter our calendar software to avoid providing endless links into the future (ok, that is an easy one)?

    What’s next?

    Something has to change.

    • Lock down web 2.0. Web 2.0 brought us wonderful dynamic web sites, which Drupal and WordPress and many other pieces of amazing software have supported for over a decade. This is the software that is getting bogged down by bots. Maybe we need to figure out a way to lock down the dynamic aspects of this software to logged in users and provide static content for everyone else?

• Paywalls and accounts everywhere. There’s always been an amazing non-financial reward to providing a web site with high-quality, movement-oriented content for free. It populates the search engines, provides links to inspiring and useful content in moments of crises, and can galvanize movements. But these moments of triumph happen between long periods of hard labor that now seem to mostly feed capitalist AI scumbags. If we add a new set of expenses and labor to keep the sites running for this purpose, how sustainable is that? Will our treasure of free movement content have to move behind paywalls or logins? If we provide logins, will that keep the bots out or just create a small hurdle for them to automate the account creation process? What happens when we can’t search for this kind of content via search engines?

    • Cutting deals. What if our movement content providers are forced to cut deals with the AI entrepreneurs to allow the paying scumbags to fund the content creation. Eww. Enough said.

    • Bot detection. Maybe we just need to get better at bot detection? This will surely be an arms race, but would have some good benefits. Bots have also been filling out our forms and populating our databases with spam, testing credit cards against our donation pages, conducting denial of service attacks and all kinds of other irritating acts of vandalism. If we were better at stopping bots automatically it would have a lot of benefits. But what impact would it have on our web sites and the experience of using them? What about “good” bots (RSS feed readers, payment processors, web hooks, uptime detectors)? Will we cut the legs off any developer trying to automate something?

    I’m not really sure where this is going, but it seems that the world wide web is about to head in a new direction.

    Worse Than FailureError'd: NaN is the Loneliest Number

    Today we have a whole batch of category errors, picked out from the rash of submissions and a few that have been festering on the shelf. Just for fun, I threw in an ironic off-by-some meta-error. See if you can spot it.

Adam R. "I'm looking for hotel rooms for the 2026 Winter Olympics in Milan-Cortina. Most hotels haven't opened up reservations yet, except for ridiculously overpriced hospitality packages. This search query found NaN facilities available, which equates to one very expensive apartment. I guess one is not a number now?"

[image 0]

    Intrepid traveler BJH had a tough time at the Intercontinental. I almost feel sympathy. Almost. "I stare at nulls at home all the time so it made me feel comfortable to see them at the hotel when traveling. And what is that 'INTERCONTINENTAL W...' at the top? I may never know!"

[image 1]

    Hoping to find out, BJ clicked through the mystery menu and discovered... this. But even worse, "There was no exit: Clicking Exit did nothing and neither did any of the buttons on the remote. Since I'd received more entertainment than usual from a hotel screen I just turned everything off."

[image 6]

    Striking out for some streaming entertainment Dmitry NoLastName was silently stymied by this double-decker from Frontier.com.

[image 3]

    No streaming needed for Barry M. who can get a full dose of fun from those legacy broadcast channels! Whatever did they do before null null undefined null? "Hey, want to watch TV tonight? NaN."

[image 2]

    Hah! "That's MISTER Null, to you," declared an anonymous contributor.

[image 4]

    And finally, another entirely different anonymous contributor clarified that there are apparently NaN cellphone towers in Switzerland. Personally, I'm intrigued by the existence of that one little crumb of English on an otherwise entirely German page.

[image 5]


    365 TomorrowsThe Burgeoning Silence

    Author: Colin Jeffrey Sara was sure she had looked away for only a moment. That was all it took. Sam had vanished from the playground. Clouds gathered heavily in the sky as panic gripped her throat. She yelled his name, over and again, her cries buffered by the indifferent wind. Soon other parents helped search, […]

    The post The Burgeoning Silence appeared first on 365tomorrows.

    Planet DebianReproducible Builds (diffoscope): diffoscope 291 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 291. This version includes the following changes:

    [ Chris Lamb ]
    * Make two required adjustments for the new version of the src:file package:
      - file(1) version 5.46 now emits "XHTML document" for .xhtml files, such as
        those found nested within our .epub tests. Therefore, match this string
        when detecting XML files. This was causing an FTBFS due to inconsistent
        indentation in diffoscope's output.
      - Require the new, upcoming, version of file(1) for a quine-related
        testcase after adjusting the expected output. Previous versions of
        file(1) had a duplicated "last modified, last modified" string for some
        Zip archives that has now been removed.
    * Add a missing subprocess import.
    * Bump Standards-Version to 4.7.2.
    

You can find out more by visiting the project homepage.

    Planet DebianReproducible Builds (diffoscope): diffoscope 290 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 290. This version includes the following changes:

    [ Chris Lamb ]
    * Also consider .aar files as APK files for the sake of not falling back to a
      binary diff. (Closes: #1099632)
    * Ensure all calls to out_check_output in the ELF comparator have the
      potential CalledProcessError exception caught. (Re: #398)
    * Ensure a potential CalledProcessError is caught in the OpenSSL comparator
      as well.
    * Update copyright years.
    
    [ Eli Schwartz ]
    * Drop deprecated and no longer functional "setup.py test" command.
    

You can find out more by visiting the project homepage.

    ,

    Planet DebianC.J. Collier: Installing a desktop environment on the HP Omen

`dmidecode | grep -A8 '^System Information'`

    tells me that the Manufacturer is HP and Product Name is OMEN Transcend Gaming Laptop 14-fb0xxx

I’m provisioning a new piece of hardware for my eng consultant and it’s proving more difficult than I expected. I must admit guilt for some of this difficulty. Instead of installing using the Debian installer on my keychain, I dd’d the pv block device of the 16 inch 2023 version onto the partition set aside for it. I then rebooted into rescue mode and cleaned up the grub config, corrected the EFI boot partition’s path in /etc/fstab, ran the grub installer from the rescue menu, and rebooted.

    On the initial boot of the system, X or Wayland or whatever is supposed to be talking to this vast array of GPU hardware in this device, it’s unable to do more than create a black screen on vt1. It’s easy enough to switch to vt2 and get a shell on the installed system. So I’m doing that and investigating what’s changed in Trixie. It seems like it’s pretty significant. Did they just throw out Keith Packard’s and Behdad Esfahbod’s work on font rendering? I don’t understand what’s happening in this effort to abstract to a simpler interface. I’ll probably end up reading more about it.

In an effort to have Debian re-configure the system for desktop use, I have uninstalled as many packages as I could find that were in the display and human interface category, or were firmware/drivers for devices not present in this laptop’s SoC. Some commands I used to clear these packages and re-install cinnamon follow:

    ```
# which packages own the X11 config and firmware trees?
dpkg -S /etc/X11
dpkg -S /usr/lib/firmware
# purge anything that looks display-, desktop- or GPU-firmware-related
apt-get purge $(dpkg -l | grep -i \
  -e gnome -e gtk -e x11-common -e xfonts- -e libvdpau -e dbus-user-session -e gpg-agent \
  -e bluez -e colord -e cups -e fonts -e drm -e xf86 -e mesa -e nouveau -e cinnamon \
  -e avahi -e gdk -e pixel -e desktop -e libreoffice -e x11 -e wayland -e xorg \
  -e firmware-nvidia-graphics -e firmware-amd-graphics -e firmware-mediatek -e firmware-realtek \
  | awk '{print $2}')
apt-get autoremove
# purge packages left in the "removed, config files remain" (rc) state
apt-get purge $(dpkg -l | grep '^r' | awk '{print $2}')
# reinstall the Cinnamon desktop task
tasksel install cinnamon-desktop
    ```

    And then I rebooted. When it came back up, I was greeted with a login prompt, and Trixie looks to be fully functional on this device, including the attached wifi radio, tethering to my android, and the thunderbolt-attached Marvell SFP+ enclosure.

I’m also installing libvirt and fetched the DVD ISO images for Debian, Ubuntu and Rocky in case we need to build VMs during the development process. These are the platforms that I target at work with GCP Dataproc, so I’m pretty good at performing maintenance operations on them at this point.
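
If we do end up needing scratch VMs, something along these lines should do; a sketch where the name, sizes, ISO path and os-variant are placeholders:

```
# create a throwaway VM from one of the downloaded DVD images
virt-install \
  --name scratch-debian \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/debian-12-amd64-DVD-1.iso \
  --os-variant debian12
```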

    Cryptogram Critical GitHub Attack

    This is serious:

    A sophisticated cascading supply chain attack has compromised multiple GitHub Actions, exposing critical CI/CD secrets across tens of thousands of repositories. The attack, which originally targeted the widely used “tj-actions/changed-files” utility, is now believed to have originated from an earlier breach of the “reviewdog/action-setup@v1” GitHub Action, according to a report.

    […]

    CISA confirmed the vulnerability has been patched in version 46.0.1.

    Given that the utility is used by more than 23,000 GitHub repositories, the scale of potential impact has raised significant alarm throughout the developer community.

    Planet DebianSven Hoexter: Purpose A Wellbeing Economies Film

The film is centered around the idea of establishing an alternative to GDP as the metric to measure the success of a country/society. The film mostly follows Katherine Trebeck on her journey of convincing countries to look beyond GDP. I very much enjoyed watching this documentary to get a first impression of the idea itself and the effort involved. I had the chance to watch the German version of it online. But there is now another virtual screening offered by the Permaculture Film Club on the 29th and 30th of March 2025. This screening is on a pay-as-you-like-and-can basis and includes a Q&A session with Katherine Trebeck.

    Trailer 1 and Trailer 2 are available on Youtube if you like to get a first impression.

    Planet DebianSven Hoexter: k8s deployment build-in preStop sleep

Seems in the k8s world there are enough race conditions between shutting down pods and removing them from endpoint slices in time. Thus people started doing all kinds of workarounds, like adding a statically linked sleep binary to otherwise "distroless" and rather empty OCI images just to run a sleep command on shutdown before really shutting down. Or even base64 encoding the sleep binary and shipping it via a configMap. Or whatever else. Eventually the situation was so severe that upstream decided to implement a sleep action for the preStop container lifecycle hook directly.

    In short it looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
        - name: foo
          lifecycle:
            preStop:
              sleep:
                seconds: 10
    

    Maybe highlighting that "feature" helps some more people to get rid of their own preStop sleep commands and make some deployments a tiny bit simpler.
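
For comparison, the hand-rolled variant this replaces typically looked something like the following fragment under the container spec, and it only works if the image actually ships a sleep binary:

containers:
  - name: foo
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "10"]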

    Worse Than FailureOver Extended Methods

    Jenny had been perfectly happy working on a series of projects for her company, before someone said, "Hey, we need you to build a desktop GUI for an existing API."

    The request wasn't the problem, per se. The API, on the other hand, absolutely was.

The application Jenny was working on represented a billing contract for materials consumed at a factory. Essentially, the factory built a bunch of individual parts, and then assembled them into a finished product. They only counted the finished product, but needed to itemize the billing down to not only the raw materials that went into the finished product and the intermediate parts, but also the toilet paper put in the bathrooms. All the costs of operating the factory were derived from the units shipped out.

    This meant that the contract itself was a fairly complicated tree structure. Jenny's application was meant to both visualize and allow users to edit that tree structure to update the billing contract in sane, and predictable ways, so that it could be reviewed and approved and when the costs of toilet paper went up, those costs could be accurately passed on to the customer.

Now, all the contract management was already implemented and lived in a library that itself called back into a database. Jenny just needed to wire it up to a desktop UI. Part of the requirements were that line items in the tree needed to have a special icon displayed next to them under two conditions: if one of their ancestors in the tree had been changed since the last released contract, or if the item itself was marked as "inherit from parent".

    The wrapper library wasn't documented, so Jenny asked the obvious question: "What's the interface for this?"

    The library team replied with this:

public interface IModelInheritFromParent : INotifyPropertyChanged
{
        bool InheritFromParent { get; set; }
}
    

    "That covers the inheritance field," Jenny said, "but that doesn't tell me if the ancestor has been modified."

    "Oh, don't worry," the devs replied, "there's an extension method for that."

public static bool GetChangedIndicator(this IModelTypeA model);
    

    Extension methods in C# are just a way to use syntactic sugar to "add" methods to a class: IModelTypeA does not have a GetChangedIndicator method, but because of the this keyword, it's an extension method and we can now invoke aInstance.GetChangedIndicator(). It's how many built-in .Net APIs work, but like most forms of syntactic sugar, while it can be good, it usually makes code harder to understand, harder to test, and harder to debug.
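
To make the sugar concrete, here is a minimal sketch of what such an extension method looks like on the library side (the wrapper class name and body are illustrative, not the actual implementation):

// extension methods must live in a static class; the "this" modifier on the first
// parameter is what lets callers write aInstance.GetChangedIndicator()
public static class ChangeIndicatorExtensions
{
    public static bool GetChangedIndicator(this IModelTypeA model)
    {
        // illustrative body: the real library presumably walks ancestors in the database
        return false;
    }
}

// the call site reads like an instance method, but the compiler rewrites it to
// ChangeIndicatorExtensions.GetChangedIndicator(aInstance)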

    But Jenny's main complaint was this: "You can't raise an event or something? I'm going to need to poll?"

    "Yes, you're going to need to poll."

    Jenny didn't like the idea of polling the (slow) database, so at first, she tried to run the polling in a background thread so it wouldn't block the UI. Unfortunately for her, the library was very much not threadsafe, so that blew up. She ended up needing to poll on the main UI thread, which meant the application would frequently stall while users were working. She did her best to minimize it, but it was impossible to eliminate.

    But worse than that, each contract item may implement one of four interfaces, which meant there were four versions of the extension method:

public static bool GetChangedIndicator(this IModelTypeA model);
public static bool GetChangedIndicator(this IModelTypeB model);
public static bool GetChangedIndicator(this IModelTypeC model);
public static bool GetChangedIndicator(this IModelTypeD model);
    

    To "properly" perform the check, Jenny would have to check which casts were valid for a given item, cast it, and then invoke GetChangedIndicator. It's worth noting that had they just used regular inheritance instead of extension methods, this wouldn't have been necessary at all. Using the "fun" syntactic sugar made the code more complicated for no benefit.

    This left Jenny with another question: "What if an item implements more than one of these interfaces? What if the extension methods disagree on if the item is changed?"

    "Good question," the team responsible for the library replied. "That should almost never happen."

    Jenny quit not long after this.


    365 TomorrowsThe Race

    Author: Jo Gatenby Lara hauled on her dust demon’s reins, desperate to keep the stupid creature on the coaster track and in the race. Desari’s wyrm, Dynamo, surged past them, scalding her with desert sand that slipped under her face mask, choking her. With kicks and shouts, she urged Sandfire forward, but it was too […]

    The post The Race appeared first on 365tomorrows.

    David BrinZelensky, here's your ceasefire judo move

I keep offering 'judo moves' for those we depend upon to save civilization.

    One such move might have let Joe Biden demolish the entire Kremlin kompromat ring in DC. (Three Republican Congressional reps have attested that their GOP colleagues hold "orgies" and one of them made the blackmail charge explicit.) Alas, that idea - along with every other agile tactic in Polemical Judo - was never taken up.

    Okay, let's try another one. An impudent proposal - or else a potential magic move for the Faramir of the West, desperately holding the line against the modern Mordor. 

I refer to Ukraine's President Volodymyr Zelensky, who faces an excruciating dilemma because of the U.S. election.

Whether or not you go along with my longtime assertion of Kremlin-strings puppetting Donald Trump - (and recent polls show that a majority of Americans now see those strings) - the goal of Trumpian 'diplomacy' has clearly been to save Vlad Putin's hash, as his refineries burn, as his bedraggled soldiery mutinously grumbles and as Europe burgeons its formerly-flaccid military might to formidability, in response to Muscovite aggression. 

    Almost every aspect of the current, proposed 'ceasefire' would benefit Putin and disadvantage Ukraine. But Zelensky cannot be seen 'obstructing' a truce, or Ukraine will suffer in the arena of world opinion - and some of his own suffering populace. Hence my modest proposal.

    To be clear... if Zelensky were to present this concept to the world, there is no way on Earth that Putin or Trump would accept it! 

    And yet, it will appear so clearly and inherently reasonable that those two would have a hard time justifying their rejection.


    == A Modest Proposal ==

    President Zelensky, just say this:

    "Our land, our cities and towns, our forests and fields have been blasted, poisoned, and strewn with millions of landmines, while our brave men and women suffer every day resisting the aggressors, both on the front lines and on the home front. Meanwhile, every day and everywhere, our skills and our power-to-resist grow. 

    "The aggressor has no long term plan. Even should he occupy all of our country, the heat of our rage and resistance would make Kabul in 1980 pale, by comparison. Occupied, but never-subjugated, Ukrainians will turn the next hundred years into hell for the invaders. 

    "Then why does this outrageous crime continue? Because the despotic Kremlin regime controls all media and news sources available to 135 million Russians, who have no way to know the following:

    -- That there was no "Nazi" movement in Ukraine. Except for a few dozen gangster idiots sporting swastikas for shock value. (There are far more such fools in both the USA and in the reborn USSR!) Otherwise, there was never any evidence for anything like it.

-- There was no 'invader army building in Ukraine' before either the full scale Russian invasion of 2022 or the 'green man' putsches of 2014. That was pure fantasy and we can prove it.

-- There was no irresistible momentum for Ukrainian NATO membership before 2022. Up to then, NATO itself had been atrophying for years and Ukraine might have been kept out of the alliance through diplomacy. Now? NATO is growing and gets stronger in potency every day. And Ukraine's alliance with the rest of Europe is now absolutely guaranteed, thanks to Mr. Putin.

    -- We were always willing to negotiate assurance and provisions for the comfort and security and prosperity of Russian speaking citizens of Ukraine, especially in the Donbas. That offer still stands. And hence we ask them, are you truly better off now, dear friends?

    I could go on and on. But doing so just leaves this as another case of "he-said, she said." And we are done exchanging heated assertions.

    It is time to check out which side is lying!

    OUR PROPOSAL IS SIMPLE:

    Before any ceasefire takes effect, we call for a Grand Commission to investigate the truth. 

    Instead of world elites or politicians, we propose that this commission consist mostly of:

    -- 100 Russian citizens...

    -- 100 Ukrainian citizens living in unoccupied territories...

    -- 100 citizens from a random pool of other nations.

    Members of this Grand Commission will be free to go anywhere they wish, in both Ukraine and Russia, wielding cameras and asking questions and demanding answers. Let them video the trashed and ruined towns and farms and forests everywhere that Russian armed forces claim to have 'liberated.'

    And yes, the commissioners will be welcome to sift for evidence of pervasive "naziism" in our country... so long as they are also free to document Vlad Putin's oligarchy and their relentlessly violent drive to rebuild the Orwellian Soviet empire.


    == How would it work? A FAQ ==


    Why such a large group? 

    A large and diverse commission helps to ensure maximum coverage and to minimize coercion by their home governments. Among so many commissioners, some will certainly ask pointed questions! And they will return home in numbers that cannot be kept squelched or repressed.

    How to ensure the members aren't just factotums of either regime?  

    They will be selected randomly from the paper pages of old telephone directories! Sure, that might be clumsy. But those paper volumes cannot be meddled with, especially old phone books currently archived in - say - Switzerland. For all its faults, this method ensures that the selection process will pick many citizens who are not beholden to the masters in the capital.

    Won't the governments of Russia or Ukraine be tempted to coerce commissioners anyway?  

    Yes, and that  is why they will be invited to bring along their immediate families! Arrangements will be made for spouses and children to stay at nice resorts along the Black Sea, while the commissioners do their work. And incidentally, those families will talk to each other, too. We welcome that. Do you?

    Won't some of the commissioners defect - during their tours of Ukraine or Russia, refusing to go home?  

    Of course some will! We are not afraid of that. Are YOU afraid of that, Mr. Putin?

    Won't such a huge endeavor be expensive?  

    Sure it will be. So? Russia and America brag about how rich they are. And so do the host nations of recent 'peace conferences,' who could easily pony up the expenses, for the sake of ending a dangerous and destabilizing war.

Why not ask the world's billionaire caste to foot the bill? For the sake of actual communications and genuine peace and prosperity? 

    There are many of the uber-rich who talk a good game. This would be their chance to prove that they believe in the future, after all.  

    Won't there be dangers? 

Sure there will be, especially wherever resumed fighting breaks out near the inspectors, during or after a ceasefire. The commissioners should possess some grit and courage and world civic-mindedness. Why? Is that a problem? Compared to the possible good outcomes from such heroism? From the most-genuine kind of patriotism?

    Do you honestly expect Vladimir Putin to agree to this? 

    Why wouldn't he? If this will let him convince skeptics around the world and in Ukraine that all of his justifications for this slaughter and ruination of a beautiful country are valid and true? 

    Of course that was a bitter jest. Because there is no way that Mr. Putin or his supporters would agree to such a Grand Commission, digging deeply into things that are called Facts and Truth. 

    Then why did you make the proposal, if you don't think it will be accepted? 

    Because nothing better demonstrates the most basic difference between the two sides of this conflict. 

    One side slavishly follows a murderous liar, because of the hypnotic power of his lies. Lies that would be so-easily disproved, if the tyrant agreed to allow light to flow to his people.

    The other side is a nation of people who love Russian poetry... but we are not and never have been Russian. People who know intimately well Russia's cruelly-depressing history, and who want no further part of it.

    We all had friends across the former USSR... but Ukraine is not and never has been Russia. 

    And we can prove it, as we daily prove our utter determination never again to suffer under Moscow's boot heel.

    Are we outnumbered? Certainly. But we have special regiments on our side. 

    Honor. Decency. Resolve. Democracy. The friendship of all free peoples around the World. And science, too.

    But above all -- the Truth.


    Krebs on SecurityDOGE to Fired CISA Staff: Email Us Your Personal Data

    A message posted on Monday to the homepage of the U.S. Cybersecurity & Infrastructure Security Agency (CISA) is the latest exhibit in the Trump administration’s continued disregard for basic cybersecurity protections. The message instructed recently-fired CISA employees to get in touch so they can be rehired and then immediately placed on leave, asking employees to send their Social Security number or date of birth in a password-protected email attachment — presumably with the password needed to view the file included in the body of the email.

    The homepage of cisa.gov as it appeared on Monday and Tuesday afternoon.

    On March 13, a Maryland district court judge ordered the Trump administration to reinstate more than 130 probationary CISA employees who were fired last month. On Monday, the administration announced that those dismissed employees would be reinstated but placed on paid administrative leave. They are among nearly 25,000 fired federal workers who are in the process of being rehired.

    A notice covering the CISA homepage said the administration is making every effort to contact those who were unlawfully fired in mid-February.

    “Please provide a password protected attachment that provides your full name, your dates of employment (including date of termination), and one other identifying factor such as date of birth or social security number,” the message reads. “Please, to the extent that it is available, attach any termination notice.”

    The message didn’t specify how affected CISA employees should share the password for any attached files, so the implicit expectation is that employees should just include the plaintext password in their message.

    Email is about as secure as a postcard sent through the mail, because anyone who manages to intercept the missive anywhere along its path of delivery can likely read it. In security terms, that’s the equivalent of encrypting sensitive data while also attaching the secret key needed to view the information.

    What’s more, a great many antivirus and security scanners have trouble inspecting password-protected files, meaning the administration’s instructions are likely to increase the risk that malware submitted by cybercriminals could be accepted and opened by U.S. government employees.

    The message in the screenshot above was removed from the CISA homepage Tuesday evening and replaced with a much shorter notice directing former CISA employees to contact a specific email address. But a slightly different version of the same message originally posted to CISA’s website still exists at the website for the U.S. Citizenship and Immigration Services, which likewise instructs those fired employees who wish to be rehired and put on leave to send a password-protected email attachment with sensitive personal data.

    A message from the White House to fired federal employees at the U.S. Citizenship and Immigration Services instructs recipients to email personal information in a password-protected attachment.

    This is hardly the first example of the administration discarding Security 101 practices in the name of expediency. Last month, the Central Intelligence Agency (CIA) sent an unencrypted email to the White House with the first names and first letter of the last names of recently hired CIA officers who might be easy to fire.

    As cybersecurity journalist Shane Harris noted in The Atlantic, even those fragments of information could be useful to foreign spies.

    “Over the weekend, a former senior CIA official showed me the steps by which a foreign adversary who knew only his first name and last initial could have managed to identify him from the single line of the congressional record where his full name was published more than 20 years ago, when he became a member of the Foreign Service,” Harris wrote. “The former official was undercover at the time as a State Department employee. If a foreign government had known even part of his name from a list of confirmed CIA officers, his cover would have been blown.”

    The White House has also fired at least 100 intelligence staffers from the National Security Agency (NSA), reportedly for using an internal NSA chat tool to discuss their personal lives and politics. Testifying before the House Select Committee on the Communist Party earlier this month, the NSA’s former top cybersecurity official said the Trump administration’s attempts to mass fire probationary federal employees will be “devastating” to U.S. cybersecurity operations.

    Rob Joyce, who spent 34 years at the NSA, told Congress how important those employees are in sustaining an aggressive stance against China in cyberspace.

    “At my former agency, remarkable technical talent was recruited into developmental programs that provided intensive unique training and hands-on experience to cultivate vital skills,” Joyce told the panel. “Eliminating probationary employees will destroy a pipeline of top talent responsible for hunting and eradicating [Chinese] threats.”

    Both the message to fired CISA workers and DOGE’s ongoing efforts to bypass vetted government networks for a faster Wi-Fi signal are emblematic of this administration’s overall approach to even basic security measures: To go around them, or just pretend they don’t exist for a good reason.

    On Monday, The New York Times reported that U.S. Secret Service agents at the White House were briefly on alert last month when a trusted captain of Elon Musk’s “Department of Government Efficiency” (DOGE) visited the roof of the Eisenhower building inside the White House compound — to see about setting up a dish to receive satellite Internet access directly from Musk’s Starlink service.

    The White House press secretary told The Times that Starlink had “donated” the service and that the gift had been vetted by the lawyer overseeing ethics issues in the White House Counsel’s Office. The White House claims the service is necessary because its wireless network is too slow.

    Jake Williams, vice president for research and development at the cybersecurity consulting firm Hunter Strategy, told The Times “it’s super rare” to install Starlink or another internet provider as a replacement for existing government infrastructure that has been vetted and secured.

    “I can’t think of a time that I have heard of that,” Williams said. “It introduces another attack point,” Williams said. “But why introduce that risk?”

    Meanwhile, NBC News reported on March 7 that Starlink is expanding its footprint across the federal government.

    “Multiple federal agencies are exploring the idea of adopting SpaceX’s Starlink for internet access — and at least one agency, the General Services Administration (GSA), has done so at the request of Musk’s staff, according to someone who worked at the GSA last month and is familiar with its network operations — despite a vow by Musk and Trump to slash the overall federal budget,” NBC wrote.

    The longtime Musk employee who encountered the Secret Service on the roof in the White House complex was Christopher Stanley, the 33-year-old senior director for security engineering at X and principal security engineer at SpaceX.

    On Monday, Bloomberg broke the news that Stanley had been tapped for a seat on the board of directors at the mortgage giant Fannie Mae. Stanley was added to the board alongside newly confirmed Federal Housing Finance Agency director Bill Pulte, the grandson of the late housing businessman and founder of PulteGroup — William J. Pulte.

    In a nod to his new board role atop an agency that helps drive the nation’s $12 trillion mortgage market, Stanley retweeted a Bloomberg story about the hire with a smiley emoji and the comment “Tech Support.”

    But earlier today, Bloomberg reported that Stanley had abruptly resigned from the Fannie board, and that details about the reason for his quick departure weren’t immediately clear. As first reported here last month, Stanley had a brush with celebrity on Twitter in 2015 when he leaked the user database for the DDoS-for-hire service LizardStresser, and soon faced threats of physical violence against his family.

    My 2015 story on that leak did not name Stanley, but he exposed himself as the source by posting a video about it on his Youtube channel. A review of domain names registered by Stanley shows he went by the nickname “enKrypt,” and was the former owner of a pirated software and hacking forum called error33[.]net, as well as theC0re, a video game cheating community.

    Stanley is one of more than 50 DOGE workers, mostly young men and women who have worked with one or more of Musk’s companies. The Trump administration remains dogged by questions about how many — if any — of the DOGE workers were put through the gauntlet of a thorough security background investigation before being given access to such sensitive government databases.

    That’s largely because in one of his first executive actions after being sworn in for a second term on Jan. 20, President Trump declared that the security clearance process was simply too onerous and time-consuming, and that anyone so designated by the White House counsel would have full top secret/sensitive compartmented information (TS/SCI) clearances for up to six months. Translation: We accepted the risk, so TAH-DAH! No risk!

    Presumably, this is the same counsel who saw no ethical concerns with Musk “donating” Starlink to the White House, or with President Trump summoning the media to film him hawking Cybertrucks and Teslas (a.k.a. “Teslers”) on the White House lawn last week.

    Mr. Musk’s unelected role as head of an ad hoc executive entity that is gleefully firing federal workers and feeding federal agencies into “the wood chipper” has seen his Tesla stock price plunge in recent weeks, while firebombings and other vandalism attacks on property carrying the Tesla logo are cropping up across the U.S. and overseas and driving down Tesla sales.

    President Trump and his attorney general Pam Bondi have dubiously asserted that those responsible for attacks on Tesla dealerships are committing “domestic terrorism,” and that vandals will be prosecuted accordingly. But it’s not clear this administration would recognize a real domestic security threat if it was ensconced squarely behind the Resolute Desk.

    Or at the pinnacle of the Federal Bureau of Investigation (FBI). The Washington Post reported last month that Trump’s new FBI director Kash Patel was paid $25,000 last year by a film company owned by a dual U.S. Russian citizen that has made programs promoting “deep state” conspiracy theories pushed by the Kremlin.

    “The resulting six-part documentary appeared on Tucker Carlson’s online network, itself a reliable conduit for Kremlin propaganda,” The Post reported. “In the film, Patel made his now infamous pledge to shut down the FBI’s headquarters in Washington and ‘open it up as a museum to the deep state.'”

    When the head of the FBI is promising to turn his own agency headquarters into a mocking public exhibit on the U.S. National Mall, it may seem silly to fuss over the White House’s clumsy and insulting instructions to former employees they unlawfully fired.

Indeed, one consistent piece of feedback I’ve heard from a subset of readers here is something to this effect: “I used to like reading your stuff more when you weren’t writing about politics all the time.”

    My response to that is: “Yeah, me too.” It’s not that I’m suddenly interested in writing about political matters; it’s that various actions by this administration keep intruding on my areas of coverage.

    A less charitable interpretation of that reader comment is that anyone still giving such feedback is either dangerously uninformed, being disingenuous, or just doesn’t want to keep being reminded that they’re on the side of the villains, despite all the evidence showing it.

    Article II of the U.S. Constitution unambiguously states that the president shall take care that the laws be faithfully executed. But almost from Day One of his second term, Mr. Trump has been acting in violation of his sworn duty as president by choosing not to enforce laws passed by Congress (TikTok ban, anyone?), by freezing funds already allocated by Congress, and most recently by flouting a federal court order while simultaneously calling for the impeachment of the judge who issued it. Sworn to uphold, protect and defend The Constitution, President Trump appears to be creating new constitutional challenges with almost each passing day.

When Mr. Trump was voted out of office in November 2020, he turned to baseless claims of widespread “election fraud” to explain his loss — with deadly and long-lasting consequences. This time around, the rallying cry of DOGE and the White House is “government fraud,” which gives the administration a certain amount of cover for its actions among a base of voters that has long sought to shrink the size and cost of government.

    In reality, “government fraud” has become a term of derision and public scorn applied to anything or anyone the current administration doesn’t like. If DOGE and the White House were truly interested in trimming government waste, fraud and abuse, they could scarcely do better than consult the inspectors general fighting it at various federal agencies.

    After all, the inspectors general likely know exactly where a great deal of the federal government’s fiscal skeletons are buried. Instead, Mr. Trump fired at least 17 inspectors general, leaving the government without critical oversight of agency activities. That action is unlikely to stem government fraud; if anything, it will only encourage such activity.

    As Techdirt founder Mike Masnick noted in a recent column “Why Techdirt is Now a Democracy Blog (Whether We Like it or Not),” when the very institutions that made American innovation possible are being systematically dismantled, it’s not a “political” story anymore: It’s a story about whether the environment that enabled all the other stories we cover will continue to exist.

    “This is why tech journalism’s perspective is so crucial right now,” Masnick wrote. “We’ve spent decades documenting how technology and entrepreneurship can either strengthen or undermine democratic institutions. We understand the dangers of concentrated power in the digital age. And we’ve watched in real-time as tech leaders who once championed innovation and openness now actively work to consolidate control and dismantle the very systems that enabled their success.”

    “But right now, the story that matters most is how the dismantling of American institutions threatens everything else we cover,” Masnick continued. “When the fundamental structures that enable innovation, protect civil liberties, and foster open dialogue are under attack, every other tech policy story becomes secondary.”

    ,

    Worse Than FailureCodeSOD: Reliability Test

    Once upon a time, Ryan's company didn't use a modern logging framework to alert admins when services failed. No, they used everyone's favorite communications format, circa 2005: email. Can't reach the database? Send an email. Unhandled exception? Send an email. Handled exception? Better send an email, just in case. Sometimes they go to admins, sometimes they just go to an inbox used for logging.

    Let's look at how that worked.

    public void SendEMail(String receivers, String subject, String body)
    {
        try
        {
            System.Net.Mail.SmtpClient clnt = new System.Net.Mail.SmtpClient(ConfigurationManager.AppSettings["SmtpServer"]);
            clnt.Send(new System.Net.Mail.MailMessage(
                ConfigurationManager.AppSettings["Sender"], 
                ConfigurationManager.AppSettings["Receivers"], 
                subject, 
                body));
        }
        catch (Exception ex)
        {
            SendEMail(
                ConfigurationManager.AppSettings["ErrorLogAddress"],
                "An error has occurred while sending an email",
                ex.Message + "\n" + ex.StackTrace);
        }
    }
    

    They use the Dot Net SmtpClient class to connect to an SMTP server and send emails based on the configuration. So far so good, but what happens when we can't send an email because the email server is down? We'll get an exception, and what do we do with it?

    The same thing we do with every other exception: send an email.
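For contrast, here is a minimal sketch of the more conventional shape (my own illustration, not anything from Ryan's codebase): if SMTP is unreachable, the failure gets written to a local sink rather than fed back into another send attempt.

public void SendEMail(String receivers, String subject, String body)
{
    try
    {
        using (var clnt = new System.Net.Mail.SmtpClient(ConfigurationManager.AppSettings["SmtpServer"]))
        {
            clnt.Send(new System.Net.Mail.MailMessage(
                ConfigurationManager.AppSettings["Sender"],
                receivers,
                subject,
                body));
        }
    }
    catch (Exception ex)
    {
        // Emailing the error would just fail again (or recurse) when the mail
        // server is the problem, so record it locally instead.
        System.Diagnostics.Trace.TraceError(
            "Failed to send mail '{0}': {1}", subject, ex);
    }
}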

    Ryan writes:

    Strangely enough, I've never heard of the service crashing or hanging. We must have a very good mail server!


    365 TomorrowsSanta Brought a Kitty

Author: Melissa Kobrin “Annie, it looks like Santa brought you one more present!” Annie looked up eagerly from her nest of torn wrapping paper and new toys. The Christmas tree twinkled behind her, and outside the window the sun was barely beginning to peek over the horizon. She gasped when Daddy walked into the living […]

    The post Santa Brought a Kitty appeared first on 365tomorrows.

    Planet DebianMark Brown: Seoul Trail revamp

I regularly visit Seoul, and for the last couple of years I've been doing segments from the Seoul Trail, a series of walks that add up to a 150km circuit around the outskirts of Seoul. If you like hiking I recommend it, it's mostly through the hills and wooded areas surrounding the city or parks within the city and the bits I've done thus far have mostly been very enjoyable. Everything is generally well signposted and easy to follow, with varying degrees of difficulty from completely flat paved roads to very hilly trails.

The trail had been divided into eight segments but just after I last visited the trail was reorganised into 21 smaller ones. This was very sensible, the original segments mostly being about 10-20km and taking 3-6 hours (with the notable exception of section 8, which was 36km) which can be a bit much (especially that section 8, or section 1 which had about 1km of ascent in it overall). It does complicate matters if you're trying to keep track of what you've done already though so I've put together a quick table:

Original   Revised
1          1-3
2          4-5
3          6-8
4          9-10
5          11-12
6          13-14
7          15-16
8          17-21

This is all straightforward, the original segments had all been arranged to start and stop at metro stations (which I think explains the length of 8, the metro network is thin around Bukhansan what with it being an actual mountain) and the new segments are all straight subdivisions, but it's handy to have it written down and I figured other people might find it useful.

    ,

    Planet DebianMatthew Garrett: Failing upwards: the Twitter encrypted DM failure

    Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.
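For readers who want to picture that construction, here is a minimal sketch of the general pattern in C#. This is not Twitter's code; every name below is invented, and a real implementation would also transmit the ephemeral public key, nonces and tags alongside the ciphertext. One random message key encrypts the body, and that key is then wrapped separately for each recipient device via an ephemeral ECDH exchange against the device's P-256 public key.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

static (byte[] nonce, byte[] ciphertext, byte[] tag, List<byte[]> wrappedKeys)
    EncryptForDevices(byte[] plaintext, IEnumerable<ECDiffieHellmanPublicKey> deviceKeys)
{
    // One random AES key per message.
    byte[] messageKey = RandomNumberGenerator.GetBytes(32);

    // Encrypt the message body once with AES-GCM.
    byte[] nonce = RandomNumberGenerator.GetBytes(12);
    byte[] ciphertext = new byte[plaintext.Length];
    byte[] tag = new byte[16];
    using (var aes = new AesGcm(messageKey))
        aes.Encrypt(nonce, plaintext, ciphertext, tag);

    // Wrap the message key once per recipient device.
    var wrappedKeys = new List<byte[]>();
    foreach (var deviceKey in deviceKeys)
    {
        // Ephemeral P-256 key pair for this recipient; derive a shared wrapping key.
        using var ephemeral = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);
        byte[] wrapKey = ephemeral.DeriveKeyFromHash(deviceKey, HashAlgorithmName.SHA256);

        byte[] keyNonce = RandomNumberGenerator.GetBytes(12);
        byte[] wrapped = new byte[messageKey.Length];
        byte[] keyTag = new byte[16];
        using (var wrapper = new AesGcm(wrapKey))
            wrapper.Encrypt(keyNonce, messageKey, wrapped, keyTag);
        wrappedKeys.Add(wrapped);
    }
    return (nonce, ciphertext, tag, wrappedKeys);
}

Everything in that sketch is only as trustworthy as the deviceKeys it is handed, which is exactly the key distribution problem discussed next.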

    But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.

This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access to not only all new messages created with that key, but also all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.

    To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, in which he mentioned further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.

    Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.

    Anyway. Use Signal.


    Planet DebianChristian Kastner: 15th Anniversary of My First Debian Upload

Time flies! 15 years ago, on 2010-03-18, my first upload to the Debian archive was accepted. Debian had replaced Windows as my primary OS in 2005, but it was only when I saw that package zd1211-firmware had been orphaned that I thought of becoming a contributor. I owned a Zyxel G-202 USB WiFi fob that needed said firmware, and as is so often the case with open-source software, I was going to scratch my own itch. Bart Martens thankfully helped me adopt the package, and sponsored my upload.

    I then joined Javier Fernández-Sanguino Peña as a cron maintainer and upstream, and also worked within the Debian Python Applications, Debian Python Modules, and Debian Science Teams, where Jakub Wilk and Yaroslav Halchenko were kind enough to mentor me and eventually support my application to become a Debian Maintainer.

    Life intervened, and I was mostly inactive in Debian for the next two years. Upon my return in 2014, I had Vincent Cheng to thank for sponsoring most of my newer work, and for eventually supporting my application to become a Debian Developer. It was around that time that I also attended my first DebConf, in Portland, which remains one of my fondest memories. I had never been to an open-source software conference before, and DebConf14 really knocked it out of the park in so many ways.

    After another break, I returned in 2019 to work mostly on Python and machine learning libraries. In 2020, I finally completed a process that I had first started in 2012 but had never managed to finish before: converting cron from source format 1.0 (one big diff) to source format 3.0 (quilt) (a series of patches). This was a process where I converted 25 years worth of organic growth into a minimal series of logically grouped changes (more here). This was my white whale.

In early 2023, shortly after the launch of ChatGPT, which triggered an unprecedented AI boom, I started contributing to the Debian ROCm Team, where over the following year, I bootstrapped our CI at ci.rocm.debian.net. Debian's current tooling lacks a way to express dependencies on specific hardware other than CPU ISA, nor does it have the means to run autopkgtests using such hardware. To get autopkgtests to make use of AMD GPUs in QEMU VMs and in containers, I had to fork autopkgtest, debci, and a few other components, as well as create a fair share of new tooling for ourselves. This worked out pretty well, and the CI has grown to support 17 different AMD GPU architectures. I will share more on this in upcoming posts.

    I have mentioned a few contributors by name, but I have countless others to thank for collaborations over the years. It has been a wonderful experience, and I look forward to many years more.

    Planet DebianDirk Eddelbuettel: RcppArmadillo 14.4.0-1 on CRAN: New Upstream

    armadillo image

    Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1234 other packages on CRAN, downloaded 38.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 617 times according to Google Scholar.

Conrad released a new minor version 14.4.0 last month. That was preceded by several extensive rounds of reverse-dependency checks covering the 1200+ packages at CRAN. We eventually narrowed the impact down to just eight packages, and I opened issue #462 to manage the transition along with ‘GitHub-only’ release 14.4.0-0 of RcppArmadillo. Several maintainers responded very promptly and updated within days – this is truly appreciated. Yesterday the last package updated at CRAN, coinciding nicely with our planned / intended upload to CRAN one month after the release. So this new release, at version -1, is now on CRAN. It brings the usual number of small improvements to Armadillo itself as well as updates to packaging.

    The changes since the last CRAN release are summarised below.

    Changes in RcppArmadillo version 14.4.0-1 (2025-03-17)

    • CRAN release having given a few packages time to catch-up to small upstream change as discussed and managed in #462

    • Updated bibliography, and small edits to sparse matrix vignette

    • Switched continuous integration action to r-ci with implicit bootstrap

    Changes in RcppArmadillo version 14.4.0-0 (2025-02-17) (GitHub Only)

    • Upgraded to Armadillo release 14.4.0 (Filtered Espresso)

      • Faster handling of pow() and square() within accu() and sum() expressions

      • Faster sort() and sort_index() for complex matrices

      • Expanded the field class with .reshape() and .resize() member functions

      • More efficient handling of compound expressions by sum(), reshape(), trans()

      • Better detection of vector expressions by pow(), imag(), conj()

    • The package generator helper function now supports additional DESCRIPTIONs

• This release revealed a need for very minor changes in a handful of reverse-dependency packages, which will be organized via GitHub issue tracking

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Planet DebianSergio Talens-Oliag: Using actions to build this site

    As promised on my previous post, on this entry I’ll explain how I’ve set up forgejo actions on the source repository of this site to build it using a runner instead of doing it on the public server using a webhook to trigger the operation.

    Setting up the system

    The first thing I’ve done is to disable the forgejo webhook call that was used to publish the site, as I don’t want to run it anymore.

    After that I added a new workflow to the repository that does the following things:

    • build the site using my hugo-adoc image.
    • push the result to a branch that contains the generated site (we do this because the server is already configured to work with the git repository and we can use force pushes to keep only the last version of the site, removing the need of extra code to manage package uploads and removals).
    • uses curl to send a notification to an instance of the webhook server installed on the remote server that triggers a script that updates the site using the git branch.

    Setting up the webhook service

    On the server machine we have installed and configured the webhook service to run a script that updates the site.

    To install the application and setup the configuration we have used the following script:

    #!/bin/sh
    
    set -e
    
    # ---------
    # VARIABLES
    # ---------
    ARCH="$(dpkg --print-architecture)"
    WEBHOOK_VERSION="2.8.2"
    DOWNLOAD_URL="https://github.com/adnanh/webhook/releases/download"
    WEBHOOK_TGZ_URL="$DOWNLOAD_URL/$WEBHOOK_VERSION/webhook-linux-$ARCH.tar.gz"
    WEBHOOK_SERVICE_NAME="webhook"
    # Files
    WEBHOOK_SERVICE_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.service"
    WEBHOOK_SOCKET_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.socket"
    WEBHOOK_TML_TEMPLATE="/srv/blogops/action/webhook.yml.envsubst"
    WEBHOOK_YML="/etc/webhook.yml"
    
    # Config file values
    WEBHOOK_USER="$(id -u)"
    WEBHOOK_GROUP="$(id -g)"
    WEBHOOK_LISTEN_STREAM="172.31.31.1:4444"
    
    # ----
    # MAIN
    # ----
    
    # Install binary from releases (on Debian only version 2.8.0 is available, but
    # I need the 2.8.2 version to support the systemd activation mode).
    
    curl -fsSL -o "/tmp/webhook.tgz" "$WEBHOOK_TGZ_URL"
    tar -C /tmp -xzf /tmp/webhook.tgz
    sudo install -m 755 "/tmp/webhook-linux-$ARCH/webhook" /usr/local/bin/webhook
    rm -rf "/tmp/webhook-linux-$ARCH" /tmp/webhook.tgz
    
    # Service file
    sudo sh -c "cat >'$WEBHOOK_SERVICE_FILE'" <<EOF
    [Unit]
    Description=Webhook server
    [Service]
    Type=exec
    ExecStart=webhook -nopanic -hooks $WEBHOOK_YML
    User=$WEBHOOK_USER
    Group=$WEBHOOK_GROUP
    EOF
    
    # Socket config
    sudo sh -c "cat >'$WEBHOOK_SOCKET_FILE'" <<EOF
    [Unit]
    Description=Webhook server socket
    [Socket]
    # Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
    FreeBind=true
    # Set ListenStream to the IP and port you want to listen on
    ListenStream=$WEBHOOK_LISTEN_STREAM
    [Install]
    WantedBy=multi-user.target
    EOF
    
    # Config file
    BLOGOPS_TOKEN="$(uuid)" \
      envsubst <"$WEBHOOK_TML_TEMPLATE" | sudo sh -c "cat >$WEBHOOK_YML"
sudo chmod 0640 "$WEBHOOK_YML"
sudo chown "$WEBHOOK_USER:$WEBHOOK_GROUP" "$WEBHOOK_YML"
    
    # Restart and enable service
    sudo systemctl daemon-reload
    sudo systemctl stop "$WEBHOOK_SERVICE_NAME.socket"
    sudo systemctl start "$WEBHOOK_SERVICE_NAME.socket"
    sudo systemctl enable "$WEBHOOK_SERVICE_NAME.socket"
    
    # ----
    # vim: ts=2:sw=2:et:ai:sts=2

    As seen on the code, we’ve installed the application using a binary from the project repository instead of a package because we needed the latest version of the application to use systemd with socket activation.

    The configuration file template is the following one:

    - id: "update-blogops"
      execute-command: "/srv/blogops/action/bin/update-blogops.sh"
      command-working-directory: "/srv/blogops"
      trigger-rule:
        match:
          type: "value"
          value: "$BLOGOPS_TOKEN"
          parameter:
            source: "header"
            name: "X-Blogops-Token"

The version on /etc/webhook.yml has the BLOGOPS_TOKEN adjusted to a random value that has to be exported as a secret on the forgejo project (see later).

Once the service is started, each time the action is executed the webhook daemon will get a notification and will run the following update-blogops.sh script to publish the updated version of the site:

    #!/bin/sh
    
    set -e
    
    # ---------
    # VARIABLES
    # ---------
    
    # Values
    REPO_URL="ssh://git@forgejo.mixinet.net/mixinet/blogops.git"
    REPO_BRANCH="html"
    REPO_DIR="public"
    
    MAIL_PREFIX="[BLOGOPS-UPDATE-ACTION] "
    # Address that gets all messages, leave it empty if not wanted
    MAIL_TO_ADDR="blogops@mixinet.net"
    
    # Directories
    BASE_DIR="/srv/blogops"
    
    PUBLIC_DIR="$BASE_DIR/$REPO_DIR"
    NGINX_BASE_DIR="$BASE_DIR/nginx"
    PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
    
    ACTION_BASE_DIR="$BASE_DIR/action"
    ACTION_LOG_DIR="$ACTION_BASE_DIR/log"
    
    # Files
    OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
    ACTION_LOGFILE_PATH="$ACTION_LOG_DIR/$OUTPUT_BASENAME.log"
    
    # ---------
    # Functions
    # ---------
    
    action_log() {
      echo "$(date -R) $*" >>"$ACTION_LOGFILE_PATH"
    }
    
    action_check_directories() {
      for _d in "$ACTION_BASE_DIR" "$ACTION_LOG_DIR"; do
        [ -d "$_d" ] || mkdir "$_d"
      done
    }
    
    action_clean_directories() {
      # Try to remove empty dirs
      for _d in "$ACTION_LOG_DIR" "$ACTION_BASE_DIR"; do
        if [ -d "$_d" ]; then
          rmdir "$_d" 2>/dev/null || true
        fi
      done
    }
    
    mail_success() {
      to_addr="$MAIL_TO_ADDR"
      if [ "$to_addr" ]; then
        subject="OK - updated blogops site"
        mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
      fi
    }
    
    mail_failure() {
      to_addr="$MAIL_TO_ADDR"
      if [ "$to_addr" ]; then
        subject="KO - failed to update blogops site"
        mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
      fi
      exit 1
    }
    
    # ----
    # MAIN
    # ----
    
    ret="0"
    
    # Check directories
    action_check_directories
    
    # Go to the base directory
    cd "$BASE_DIR"
    
    # Remove the old build dir if present
    if [ -d "$PUBLIC_DIR" ]; then
      rm -rf "$PUBLIC_DIR"
    fi
    
    # Update the repository checkout
    action_log "Updating the repository checkout"
    git fetch --all >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
    if [ "$ret" -ne "0" ]; then
      action_log "Failed to update the repository checkout"
      mail_failure
    fi
    
    # Get it from the repo branch & extract it
    action_log "Downloading and extracting last site version using 'git archive'"
    git archive --remote="$REPO_URL" "$REPO_BRANCH" "$REPO_DIR" \
      | tar xf - >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
    
    # Fail if public dir was missing
    if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
      action_log "Failed to download or extract site"
      mail_failure
    fi
    
    # Remove old public_html copies
    action_log 'Removing old site versions, if present'
    find $NGINX_BASE_DIR -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
      -exec rm -rf {} \; >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
    if [ "$ret" -ne "0" ]; then
      action_log "Removal of old site versions failed"
      mail_failure
    fi
    # Switch site directory
    TS="$(date +%Y%m%d-%H%M%S)"
    if [ -d "$PUBLIC_HTML_DIR" ]; then
      action_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
      mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$ACTION_LOGFILE_PATH" 2>&1 ||
        ret="$?"
    fi
    if [ "$ret" -eq "0" ]; then
      action_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
      mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$ACTION_LOGFILE_PATH" 2>&1 ||
        ret="$?"
    fi
    if [ "$ret" -ne "0" ]; then
      action_log "Site switch failed"
      mail_failure
    else
      action_log "Site updated successfully"
      mail_success
    fi
    
    # ----
    # vim: ts=2:sw=2:et:ai:sts=2

    The hugo-adoc workflow

    The workflow is defined in the .forgejo/workflows/hugo-adoc.yml file and looks like this:

    name: hugo-adoc
    
    # Run this job on push events to the main branch
    on:
      push:
        branches:
          - 'main'
    
    jobs:
      build-and-push:
        if: ${{ vars.BLOGOPS_WEBHOOK_URL != '' && secrets.BLOGOPS_TOKEN != '' }}
        runs-on: docker
        container:
          image: forgejo.mixinet.net/oci/hugo-adoc:latest
        # Allow the job to write to the repository (not really needed on forgejo)
        permissions:
          contents: write
        steps:
          - name: Checkout the repo
            uses: actions/checkout@v4
            with:
              submodules: 'true'
          - name: Build the site
            shell: sh
            run: |
              rm -rf public
              hugo
          - name: Push compiled site to html branch
            shell: sh
            run: |
              # Set the git user
              git config --global user.email "blogops@mixinet.net"
              git config --global user.name "BlogOps"
              # Create a new orphan branch called html (it was not pulled by the
              # checkout step)
              git switch --orphan html
              # Add the public directory to the branch
              git add public
              # Commit the changes
              git commit --quiet -m "Updated site @ $(date -R)" public
              # Push the changes to the html branch
              git push origin html --force
              # Switch back to the main branch
              git switch main
          - name: Call the blogops update webhook endpoint
            shell: sh
            run: |
              HEADER="X-Blogops-Token: ${{ secrets.BLOGOPS_TOKEN }}"
              curl --fail -k -H "$HEADER" ${{ vars.BLOGOPS_WEBHOOK_URL }}

    The only relevant thing is that we have to add the BLOGOPS_TOKEN variable to the project secrets (its value is the one included on the /etc/webhook.yml file created when installing the webhook service) and the BLOGOPS_WEBHOOK_URL project variable (its value is the URL of the webhook server, in my case http://172.31.31.1:4444/hooks/update-blogops); note that the job includes the -k flag on the curl command just in case I end up using TLS on the webhook server in the future, as discussed previously.

    Conclusion

    Now that I have forgejo actions on my server I no longer need to build the site on the public server as I did initially, a good thing when the server is a small OVH VPS that only runs a couple of containers and a web server directly on the host.

    I’m still using a notification system to make the server run a script to update the site because that way the forgejo server does not need access to the remote machine shell, only the webhook server which, IMHO, is a more secure setup.

    Planet DebianPetter Reinholdtsen: New theora release 1.2.0beta1 after almost 15 years

When I discovered a few days ago that a security problem reported against the theora library last year was still not fixed, and because I was already up to speed on Xiph development, I decided it was time to wrap up a new theora release. This new release was tagged in the Xiph gitlab theora instance Saturday. You can fetch the new release from the Theora home page.

The list of changes since the 1.2.0alpha1 release, from the CHANGES file in the tarball, looks like this:

libtheora 1.2.0beta1 (2025 March 15)

    • Bumped minor SONAME versions as methods changed constness of arguments.
    • Updated libogg dependency to version 1.3.4 for ogg_uint64_t.
    • Updated doxygen setup.
    • Updated autotools setup and support scripts (#1467 #1800 #1987 #2318 #2320).
    • Added support for RISC OS.
    • Fixed mingw build (#2141).
    • Improved ARM support.
    • Converted SCons setup to work with Python 3.
    • Introduced new configure options --enable-mem-constraint and --enable-gcc-sanitizers.
    • Fixed all known compiler warnings and errors from gcc and clang.
    • Improved examples for stability and correctness.
• Various speed, bug fix and code quality improvements.
    • Fixed build problem with Visual Studio (#2317).
    • Avoids undefined bit shift of signed numbers (#2321, #2322).
    • Avoids example encoder crash on bogus audio input (#2305).
    • Fixed musl linking issue with asm enabled (#2287).
    • Fixed some broken clamping in rate control (#2229).
    • Added NULL check _tc and _setup even for data packets (#2279).
    • Fixed mismatched oc_mb_fill_cmapping11 signature (#2068).
    • Updated the documentation for theora_encode_comment() (#726).
• Adjusted build to only link libcompat with dump_video (#1587).
    • Corrected an operator precedence error in the visualization code (#1751).
    • Fixed two spelling errors in the comments (#1804).
    • Avoid negative bit shift operation in huffdec.c (CVE-2024-56431).
    • Improved library documentation and specification text.
• Adjusted library dependencies so libtheoraenc does not depend on libtheoradec.
    • Handle fallout from CVE-2017-14633 in libvorbis, check return value in encoder_example and transcoder_example.

    There are a few bugs still being investigated, and my plan is to wrap up a final 1.2.0 release two weekends from now.

    As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

    Worse Than FailureCodeSOD: Spaced Out Prefix

    Alex had the misfortune to work on the kind of application which has forms with gigantic piles of fields, stuffed haphazardly into objects. A single form could easily have fifty or sixty fields for the user to interact with.

    That leads to C# code like this:

     private static String getPrefix(AV_Suchfilter filter)
    {
    	String pr = String.Empty;
    	try
    	{
    		int maxLength = 0;
    		if (filter.Angebots_id != null) { maxLength = getmaxLength(maxLength, AV_MessagesTexte.Reportliste_sf_angebotsID.Length); }
    		if (filter.InternesKennzeichen != null) { if (filter.InternesKennzeichen.Trim() != String.Empty) { maxLength = getmaxLength(maxLength, AV_MessagesTexte.Reportliste_sf_internesKennzeichen.Length); } }
    		if (filter.Angebotsverantwortlicher_guid != null) { maxLength = getmaxLength(maxLength, AV_MessagesTexte.Reportliste_sf_angebotsverantwortlicher.Length); }
    
    		// Do this another 50 times....
    		// and then ....
    
    		int counter = 0;
    		while (counter < maxLength)
    		{
    			pr += " ";
    			counter++;
    		}
    	}
    	catch (Exception error)
    	{
    		ErrorForm frm = new ErrorForm(error);
    		frm.ShowDialog();
    	}
    	return pr;
    }
    

    The "Do this another 50 times" is doing a lot of heavy lifting in here. What really infuriates me about it, though, which we can see here, is that not all of the fields we're looking at are parameters to this function. And because the function here is static, they're not instance members either. I assume AV_MessagesTexte is basically a global of text labels, which isn't a bad way to manage such a thing, but functions should still take those globals as parameters so you can test them.

    I'm kidding, of course. This function has never been tested.

    Aside from a gigantic pile of string length comparisons, what does this function actually do? Well, it returns a new string which is a number of spaces exactly equal to the length of the longest string. And the way we build that output string is not only through string concatenation, but the use of a while loop where a for loop makes more sense.
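For what it's worth, the fifty comparisons and the concatenation loop collapse into a couple of lines. This is only a sketch reusing the field and label names from the snippet above; the LINQ approach is my illustration, not the original code:

// requires: using System.Linq;
int maxLength = new[]
{
    filter.Angebots_id != null ? AV_MessagesTexte.Reportliste_sf_angebotsID.Length : 0,
    !string.IsNullOrWhiteSpace(filter.InternesKennzeichen)
        ? AV_MessagesTexte.Reportliste_sf_internesKennzeichen.Length : 0,
    filter.Angebotsverantwortlicher_guid != null
        ? AV_MessagesTexte.Reportliste_sf_angebotsverantwortlicher.Length : 0,
    // ...one entry per field, instead of fifty if-statements...
}.Max();

string pr = new string(' ', maxLength);  // no while loop, no repeated concatenation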

    Also, just… why? Why do we need a spaces-only-string the length of another string? Even if we're trying to do some sort of text layout, that seems like a bad way to do whatever it is we're doing, and also if that's the case, why is it called getPrefix? WHY IS OUR PREFIX A STRING OF SPACES THE LENGTH OF OUR FIELD? HOW IS THAT A PREFIX?

    I feel like I'm going mad.

    But the real star of this horrible mess, in my opinion, is the exception handling. Get an exception? Show the user a form! There's no attempt to decide if or how we could recover from this error, we just annoy the user with it.

    Which isn't just unique to this function. Notice the getmaxLength function? It's really a max and it looks like this:

    private static int getmaxLength(int old, int current)
    {
    	int result = old;
    	try
    	{
    		if (current > old)
    		{
    			result = current;
    		}
    	}
    	catch (Exception error)
    	{
    		ErrorForm frm = new ErrorForm(error);
    		frm.ShowDialog();
    	}
    	return result;
    }
    

    What's especially delightful here is that this function couldn't possibly throw an exception. And you know what that tells me? This try/catch/form pattern is just their default error handling. They spam this everywhere, in every function, and the tech lead or architect pats themselves on the back for ensuring that the application "never crashes!" all the while annoying the users with messages they can't do anything about.


    365 TomorrowsEveline

    Author: Majoki She crouched in the foliage at the river’s edge and watched the young man. He was not aware of her presence and she found that comforting. It was unusual for her to feel comforted or otherwise. She had only recently become sentient, and it had been an alarming experience. To simply be one […]

    The post Eveline appeared first on 365tomorrows.

    ,

    Cryptogram Is Security Human Factors Research Skewed Towards Western Ideas and Habits?

Really interesting research: “How WEIRD is Usable Privacy and Security Research?” by Ayako A. Hasegawa, Daisuke Inoue, and Mitsuaki Akiyama:

    Abstract: In human factor fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries. This WEIRD skew may hinder understanding of diverse populations and their cultural differences. The usable privacy and security (UPS) field has inherited many research methodologies from research on human factor fields. We conducted a literature review to understand the extent to which participant samples in UPS papers were from WEIRD countries and the characteristics of the methodologies and research topics in each user study recruiting Western or non-Western participants. We found that the skew toward WEIRD countries in UPS is greater than that in HCI. Geographic and linguistic barriers in the study methods and recruitment methods may cause researchers to conduct user studies locally. In addition, many papers did not report participant demographics, which could hinder the replication of the reported studies, leading to low reproducibility. To improve geographic diversity, we provide the suggestions including facilitate replication studies, address geographic and linguistic issues of study/recruitment methods, and facilitate research on the topics for non-WEIRD populations.

    The moral may be that human factors and usability needs to be localized.

    Cryptogram Improvements in Brute Force Attacks

    New paper: “GPU Assisted Brute Force Cryptanalysis of GPRS, GSM, RFID, and TETRA: Brute Force Cryptanalysis of KASUMI, SPECK, and TEA3.”

    Abstract: Key lengths in symmetric cryptography are determined with respect to the brute force attacks with current technology. While nowadays at least 128-bit keys are recommended, there are many standards and real-world applications that use shorter keys. In order to estimate the actual threat imposed by using those short keys, precise estimates for attacks are crucial.

    In this work we provide optimized implementations of several widely used algorithms on GPUs, leading to interesting insights on the cost of brute force attacks on several real-word applications.

In particular, we optimize KASUMI (used in GPRS/GSM), SPECK (used in RFID communication), and TEA3 (used in TETRA). Our best optimizations allow us to try 2^35.72, 2^36.72, and 2^34.71 keys per second on a single RTX 4090 GPU. Those results improve upon previous results significantly, e.g. our KASUMI implementation is more than 15 times faster than the optimizations given in the CRYPTO’24 paper [ACC+24] improving the main results of that paper by the same factor.

With these optimizations, in order to break GPRS/GSM, RFID, and TETRA communications in a year, one needs around 11.22 billion, and 1.36 million RTX 4090 GPUs, respectively.

For KASUMI, the time-memory trade-off attacks of [ACC+24] can be performed with 142 RTX 4090 GPUs instead of 2400 RTX 3090 GPUs or, when the same amount of GPUs are used, their table creation time can be reduced to 20.6 days from 348 days, crucial improvements for real world cryptanalytic tasks.
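As a quick sanity check on those figures (my arithmetic, not the paper's), the GPU-count reduction and the table-creation speedup work out to roughly the same factor, which lines up with the "more than 15 times faster" claim above:

\[ \frac{2400}{142} \approx 16.9, \qquad \frac{348~\text{days}}{20.6~\text{days}} \approx 16.9 \]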

    Attacks always get better; they never get worse. None of these is practical yet, and they might never be. But there are certainly more optimizations to come.

    Worse Than FailureToo Many Red Flags

    Fresh out of university, Remco accepted a job that allowed him to relocate to a different country. While entering the workforce for the first time, he was also adjusting to a new home and culture, which is probably why the red flags didn't look quite so red.

    The trouble had actually begun during his interview. While being questioned about his own abilities, Remco learned about Conglomcorp's healthy financial position, backed by a large list of clients. Everything seemed perfect, but Remco had a bad gut feeling he could neither explain nor shake off. Being young and desperate for a job, he ignored his misgivings and accepted the position. He hadn't yet learned how scarily accurate intuition often proves to be.

[Image: red flags, Tiananmen Square]

    The second red flag was run up the mast at orientation. While teaching him about the company's history, one of the senior managers proudly mentioned that Conglomcorp had recently fired 50% of their workforce, and were still doing great. This left Remco feeling more concerned than impressed, but he couldn't reverse course now.

    Flag number three waved during onboarding, as Remco began to learn about the Java application he would be helping to develop. He'd been sitting at the cubicle of Lars, a senior developer, watching over his shoulder as Lars familiarized him with the application's UI.

    "Garbage Collection." Using his mouse, Lars circled a button in the interface labeled just that. "We added this to solve a bug some users were experiencing. Now we just tell everyone that if they notice any weird behavior in the application, they should click this button."

    Remco frowned. "What happens in the code when you click that?"

    "It calls System.gc()."

    But that wasn't even guaranteed to run! The Java virtual machine handled its own garbage collection. And in no universe did you want to put a worse-than-useless button in your UI and manipulate clients into thinking it did something. But Remco didn't feel confident enough to speak his mind. He kept silent and soldiered on.

    When Remco was granted access to the codebase, it got worse. The whole thing was a pile of spaghetti full of similar design brillance that mostly worked well enough to satisfy clients, although there was a host of bugs in the bug tracker, some of which had been rotting there for over 7 years. Remco had been given the unenviable task of fixing the oldest ones.

    Remco slogged through another few months. Eventually, he was tasked with implementing a new feature that was supposed to be similar to existing features already in the application. He checked these other features to see how they were coded, intending to follow the same pattern. As it turned out, they had all been implemented in a different, weird way. The wheel had been reinvented over and over, each time by someone who'd never even heard of a circle. None of the implementations looked like anything he ought to be imitating.

    Flummoxed, Remco approached Lars' cubicle and explained his findings. "How should I proceed?" he finally asked.

    Lars shrugged, and looked up from a running instance of the application. "I don't know." Lars turned back to his screen and pushed "Garbage Collect".

    Fairly soon after that enlightening experience, Remco moved on. Conglomcorp is still going, though whether they've retained their garbage collection button is anyone's guess.


    365 TomorrowsThe Blade Always Prepares

    Author: Julian Miles, Staff Writer There’s a smoking hole where my Rembrandt used to be. Not sure if it was blown in or out – I was too busy flying through the air to notice the finer points of the opening part of this assault. Dustin glances toward where I’m looking. “Sorry about the art. […]

    The post The Blade Always Prepares appeared first on 365tomorrows.

    ,

    365 TomorrowsPrometheus 900

    Author: Gary Duehr 02.17.2055/13:46: Ahead I can see a strip of poplars like a zipper between two fields of corn stubble, the frozen stalks shorn off; I sense the need to descend and I do, I dip my nose downward: the wind shears under my wing-flaps, the missile strapped to my frame drags me downward; […]

    The post Prometheus 900 appeared first on 365tomorrows.

    David BrinCompare DOGE to Gore's efficiency drive? Or Hamilton vs. Robspierre?

I've been offline due to a family property calamity (all are healthy). But this here set of facts (that you'll see nowhere else) needs urgently to be said about the DOGE 'efficiency' campaign: Nathan Gardels, at Noema Magazine*, offers excellent points about Evolutionary Stability vs. Revolutionary shake-ups, like Elon Musk's massive, Robespierre-style purge of civil servants. It's a distinction that both right and left ought to learn, especially as:

    1. No one - and I mean no one at all - appears to be mentioning the most-successful campaign ever to improve government efficiency. One that was 'evolutionary' and - at the time - recognized as a huge success, even if it did not use chainsaws.
    Al Gore's "Reinventing Government" endeavor used systematic methods to reduce duplication, redundancy and unnecessary procedures across all bureaucracies. The program won plaudits across the spectrum, including JD Powers awards. Solid metrics showed increased efficiency and service across all agencies. In particular Gore's RG program reversed the long plummet in veterans' opinions on the VA, transforming it into among the most loved and trusted of all U.S. institutions.

    Why does that earlier endeavor to increase government efficiency go entirely unmentioned today, amid mass, unexplained and destabilizing 'chainsaw' firings?
    Of course, time and the steady lobotomization of American discourse help to explain it. As does the heady rush of sanctimony rage, today's most-damaging addiction.

    As well, the utter difference in personalities between Al Gore and Elon Musk make them seem different species. Any comparison would thus stretch imaginations too thin, among modern journalists. Still, the contrast would seem to be worth offering. By someone. Anyone. Anywhere. Yes, I know. I ask too much.

2. But more is at fault than the Right's mania or microcephalic journalism. The Left's fervid, revolutionary-transcendentalist impatience - which blatantly cost Kamala Harris the election - fulminates contempt toward boring, undramatic efforts at incremental reform. This, despite the historical fact that 'incremental reforms' - often frustratingly slow - are exactly why the American Experiment has worked, while other revolutions soon devolved into chaos, often worse than the ancien régimes they replaced.

    3. Alas and worst of all, there does not appear to be much - or even any - effort at tracking "who will gain most" from these Trumpian chainsaw slashes. Are there particular interest groups who will benefit?

    I cannot prove, yet... but I assert... that these "DOGE" slashes at Education and Health and CDC and other agencies have one core aim: to rile a vast range of opponents and thus distract from the administration's two top goals:

    FIRST: evisceration of the FBI, CIA and counter-intelligence services. (Now who benefits from that?)

    SECOND: to crush the IRS.

    Nothing has terrified the cheater wing of American oligarchy more than the 2021 Pelosi bill that ended 40 years of starvation at the Internal Revenue Service. Forty years preventing computer upgrades, software updates or the hiring of sufficient staff to audit upper-crust tax-dodgers and outright criminal gangs.

Desperation to re-impose IRS starvation is (I maintain) the core goal for which that wing of oligarchy flooded the Trump Campaign with funds and cryptic aid. Now, the cheaters and their inheritance brat New Lords are getting their wish. And it's working. While cuts at CDC and Health and Education raise howls, you'll notice almost nary a peep from liberals and moderates about the poor, friendless IRS. And the cheater lords smile.

    4. Final point. Might anyone apply actual Outcome metrics comparing Al Gore's Reinventing Government campaign to Elon Musk's DOGE?

    What're 'metrics'? If both the left and right share one trait... it is utter contempt for nerdy stuff like facts.

5. Compare outcomes from historical revolutions. It reduces to Hamilton, Adams & Jefferson vs. Robespierre, Lenin and Hitler. Look them up.


    PS... nothing better disproves the old saw that "Both parties are the same and both are corrupt."
That is disproved absolutely and decisively by the IRS matter. Dem pols voted for the IRS to audit bigshots... including some of their own. Republicans live in daily terror of that possibility. Proved. QED. Step up with wager stakes.



    ,

    365 TomorrowsCatching

    Author: R. J. Erbacher I was going catching with my Grampie. He weren’t really my Grampie but that’s how I’d always referred to him. He was old, had a bushy white moustache, a scratchy beard and a big belly. And he was good to me, not like my Pa which tanned me all the time, […]

    The post Catching appeared first on 365tomorrows.

    Cryptogram Friday Squid Blogging: SQUID Band

    A bagpipe and drum band:

    SQUID transforms traditional Bagpipe and Drum Band entertainment into a multi-sensory rush of excitement, featuring high energy bagpipes, pop music influences and visually stunning percussion!

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    ,

    Krebs on SecurityClickFix: How to Infect Your PC in Three Easy Steps

    A clever malware deployment scheme first spotted in targeted attacks last year has now gone mainstream. In this scam, dubbed “ClickFix,” the visitor to a hacked or malicious website is asked to distinguish themselves from bots by pressing a combination of keyboard keys that causes Microsoft Windows to download password-stealing malware.

    ClickFix attacks mimic the “Verify You are a Human” tests that many websites use to separate real visitors from content-scraping bots. This particular scam usually starts with a website popup that looks something like this:

    This malware attack pretends to be a CAPTCHA intended to separate humans from bots.

    Clicking the “I’m not a robot” button generates a pop-up message asking the user to take three sequential steps to prove their humanity.

    Executing this series of keypresses prompts Windows to download password-stealing malware.

    Step 1 involves simultaneously pressing the keyboard key with the Windows icon and the letter “R,” which opens a Windows “Run” prompt that will execute any specified program that is already installed on the system.

    Step 2 asks the user to press the “CTRL” key and the letter “V” at the same time, which pastes malicious code from the site’s virtual clipboard.

    Step 3 — pressing the “Enter” key — causes Windows to download and launch malicious code through “mshta.exe,” a Windows program designed to run Microsoft HTML application files.

    “This campaign delivers multiple families of commodity malware, including XWorm, Lumma stealer, VenomRAT, AsyncRAT, Danabot, and NetSupport RAT,” Microsoft wrote in a blog post on Thursday. “Depending on the specific payload, the specific code launched through mshta.exe varies. Some samples have downloaded PowerShell, JavaScript, and portable executable (PE) content.”

    According to Microsoft, hospitality workers are being tricked into downloading credential-stealing malware by cybercriminals impersonating Booking.com. The company said attackers have been sending malicious emails impersonating Booking.com, often referencing negative guest reviews, requests from prospective guests, or online promotion opportunities — all in a bid to convince people to step through one of these ClickFix attacks.

    In November 2024, KrebsOnSecurity reported that hundreds of hotels that use booking.com had been subject to targeted phishing attacks. Some of those lures worked, and allowed thieves to gain control over booking.com accounts. From there, they sent out phishing messages asking for financial information from people who’d just booked travel through the company’s app.

    Earlier this month, the security firm Arctic Wolf warned about ClickFix attacks targeting people working in the healthcare sector. The company said those attacks leveraged malicious code stitched into the widely used physical therapy video site HEP2go that redirected visitors to a ClickFix prompt.

    An alert (PDF) released in October 2024 by the U.S. Department of Health and Human Services warned that the ClickFix attack can take many forms, including fake Google Chrome error pages and popups that spoof Facebook.

    ClickFix tactic used by malicious websites impersonating Google Chrome, Facebook, PDFSimpli, and reCAPTCHA. Source: Sekoia.

    The ClickFix attack — and its reliance on mshta.exe — is reminiscent of phishing techniques employed for years that hid exploits inside Microsoft Office macros. Malicious macros became such a common malware threat that Microsoft was forced to start blocking macros by default in Office documents that try to download content from the web.

Alas, the email security vendor Proofpoint has documented plenty of ClickFix attacks via phishing emails that include HTML attachments spoofing Microsoft Office files. When opened, the attachment displays an image of a Microsoft Word document with a pop-up error message directing users to click the “Solution” or “How to Fix” button.

    HTML files containing ClickFix instructions. Examples for attachments named “Report_” (on the left) and “scan_doc_” (on the right). Image: Proofpoint.

    Organizations that wish to do so can take advantage of Microsoft Group Policy restrictions to prevent Windows from executing the “run” command when users hit the Windows key and the “R” key simultaneously.
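One common way to implement that restriction is the per-user Explorer policy that removes the Run command, which also blocks the Win+R shortcut. The snippet below is only a sketch of setting the equivalent registry value directly; it assumes the long-standing "NoRun" policy value, and in a managed environment you would push the corresponding Group Policy setting instead of writing the registry by hand.

// Sketch: apply the "NoRun" Explorer policy for the current user.
// Requires: using Microsoft.Win32;
using (var key = Registry.CurrentUser.CreateSubKey(
    @"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"))
{
    key.SetValue("NoRun", 1, RegistryValueKind.DWord);
}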

    Cryptogram Upcoming Speaking Engagements

    This is a current list of where and when I am scheduled to speak:

    The list is maintained on this page.

    Worse Than FailureError'd: No Time Like the Present

    I'm not entirely sure I understand the first item today, but maybe you can help. I pulled a couple of older items from the backlog to round out this timely theme.

    Rudi A. reported this Errord, chortling "Time flies when you're having fun, but it goes back when you're walking along the IJ river!" Is the point here that the walking time is quoted as 77 minutes total, but the overall travel time is less than that? I must say I don't recommend swimming the Ij in March, Rudi.


    I had to go back quite a while for this submission from faithful reader Adam R., who chimed "I found a new type of datetime handling failure in this timestamp of 12:8 PM when checking my past payments at my medical provider." I hope he's still with us.


    Literary critic Jay commented "Going back in time to be able to update your work after it gets published but before everyone else in your same space time fabric gets to see your mistakes, that's privilege." This kind of error is usually an artifact of Daylight Saving Time, but it's a day too late.


    Lucky Luke H. can take his time with this deal. "The board is proud to approve a 20% discount for the next 8 millenia," he crowed.


    At nearly the other end of the entire modern era, Carlos found himself with a nostalgic device. "Excel crashed. When it came back, it did so showing this update banner." Some programmer confused "restore state" with the English Restoration. Not that state, bub.



    365 TomorrowsFireworks

    Author: Jo Peace We always learn things too late. I remember the pine smell, the urgent fear as I hurried to assemble the close-in defense unit before the drones reached our position. A young voice snaps me back to the present. “Dad, why do you live alone in the mountains? Is it because people tease […]

    The post Fireworks appeared first on 365tomorrows.

    Cryptogram TP-Link Router Botnet

    There is a new botnet that is infecting TP-Link routers:

The botnet can lead to command injection which then makes remote code execution (RCE) possible so that the malware can spread itself across the internet automatically. This high severity security flaw (tracked as CVE-2023-1389) has also been used to spread other malware families as far back as April 2023 when it was used in the Mirai botnet malware attacks. The flaw is also linked to the Condi and AndroxGh0st malware attacks.

    […]

    Of the thousands of infected devices, the majority of them are concentrated in Brazil, Poland, the United Kingdom, Bulgaria and Turkey; with the botnet targeting manufacturing, medical/healthcare, services and technology organizations in the United States, Australia, China and Mexico.

    Details.

    ,

    Cryptogram RIP Mark Klein

    2006 AT&T whistleblower Mark Klein has died.

    Worse Than FailureCodeSOD: Don't Date Me

    I remember in some intro-level compsci class learning that credit card numbers were checksummed, and writing basic functions to validate those checksums as an exercise. I was young and was still using my "starter" credit card with a whopping limit of $500, so that was all news to me.

    Alex's company had a problem processing credit cards: they rejected a lot of credit cards as being invalid. The checksum code seemed to be working fine, so what could the problem be? Well, the problem became more obvious when someone's card worked one day and stopped working the very next day, and those two days just so happened to be the last day of one month and the first day of the next.

        protected function validateExpirationCcDate($i_year, $i_month) {
            return (((int)strftime('%y') <= $i_year) && ((int)strftime ('%m') <= $i_month))? true : false;
        }
    

    This function is horrible; because it uses strftime (instead of taking the comparison date and time as a parameter) it's not unit-testable. We're (ab)using casts to convert strings into integers so we can do our comparison. We're using a ternary to return a boolean value instead of just returning the result of the boolean expression.

    But of course, that's all the amuse bouche: the main course is the complete misunderstanding of basic logic. According to this code, a credit card is valid if the current year is less than or equal to the expiration year and the current month is less than or equal to the expiration month. As this article goes live in March, 2025, this code would allow credit cards from April, 2026, as it should. But it would reject any cards with an expiration of February, 2028.

    Per Alex, "This is a credit card date validation that has been in use for ages."
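
    For contrast, here is a minimal corrected sketch. The function name, the use of DateTimeImmutable, and the four-digit years are my own illustrative choices rather than anything from Alex's codebase; the point is to compare the expiration as a single year-month value, and to take "now" as a parameter so the check is actually unit-testable.

        // Hypothetical replacement: compare whole year-month values, since a card
        // is valid through the last day of its expiration month.
        function isCcDateValid(DateTimeImmutable $now, int $expYear, int $expMonth): bool
        {
            $currentYm = (int)$now->format('Y') * 12 + (int)$now->format('n');
            $expiryYm  = $expYear * 12 + $expMonth;
            return $expiryYm >= $currentYm;
        }

    With whole year-month comparison, a card expiring in February 2028 is accepted in March 2025, and one that expired in February 2025 is rejected.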

    [Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

    365 TomorrowsThe Meaning of Memories

    Author: Soramimi Hanarejima On my way home, I stop by the drugstore for a quick errand. But in the nootropics aisle, I’m thwarted by vacant shelf space. When I ask a clerk what happened to all the memorysyn, he tells me there’s been a recall. Some production issue has made recent lots more potent than […]

    The post The Meaning of Memories appeared first on 365tomorrows.

    ,

    Worse Than FailureCodeSOD: Expressing a Leak

    We previously discussed some whitespacing choices in a C++ codebase. Tim promised that there were more WTFs lurking in there, and has delivered one.

    Let's start with this class constructor:

    QBatch_arithExpr::QBatch_arithExpr(QBatch_unOp, const QBatch_snippet &, const QBatch_snippet &);
    

    You'll notice that this takes a parameter of type QBatch_unOp. What is that type? Well, it's an enumerated type describing the kind of operation this arithExpr represents. That is to say, they're not using real inheritance, but instead switching on the QBatch_unOp value to decide which code branch to execute: hand-made, home-grown artisanal inheritance. And while there are legitimate reasons to avoid inheritance, this is a clear case of "is-a" relationships, and real inheritance would allow compile-time checking of how you combine your types.

    Tim also points out the use of the "repugnant west const", which is maybe a strong way to word it, but using only "east const" definitely makes it easier to see what the const qualifier applies to. It's worth noting that in this example, the second and third parameters are references to const values (what people casually call const references).

    Now, they are using inheritance, just not in that specific case:

    class QBatch_paramExpr : public QBatch_snippet {...};
    

    There's nothing particularly wrong with this, but we're going to use this parameter expression in a moment.

    QBatch_arithExpr* Foo(QBatch_snippet *expr) {
      // snip
      QBatch_arithExpr *derefExpr = new QBatch_arithExpr(enum_tag1, *(new QBatch_paramExpr(paramId)));
      assert(derefExpr);
      return new QBatch_arithExpr(enum_tag2, *expr, *derefExpr);
    }
    

    Honestly, in C++ code, seeing a pile of "*" operators and raw pointers is a sign that something's gone wrong, and this is no exception.

    Let's start with calling the QBatch_arithExpr constructor: we pass it *(new QBatch_paramExpr(paramId)), which is a multilayered "oof". First, the new operator will heap allocate and construct an object, and return a pointer to that object. We then dereference that pointer, and pass the value as a reference to the constructor. This is an automatic memory leak; because we never capture the pointer, we never have the opportunity to release that memory. Remember kids, in C/C++ you need clear ownership semantics, and someone needs to be responsible for deallocating all of the allocated memory: every new needs a delete, in this case.

    Now, new QBatch_arithExpr(...) will also return a pointer, which we put in derefExpr. We then assert on that pointer, confirming that it isn't null. Which… it can't be. A constructor may fail and throw an exception, but you'll never get a null (now, I'm sure a sufficiently motivated programmer can mix nothrow and -fno-exceptions to get constructors to return null, but that's not happening here, and shouldn't happen anywhere).

    Then we dereference that pointer and pass it to QBatch_arithExpr- creating another memory leak. Two memory leaks in three lines of code, where one line is an assert, is fairly impressive.

    Elsewhere in the code, shared_ptr objects are used, with their names aliased to readable types such as QBatch_arithExpr::Ptr, and if that pattern were followed here, the memory leaks would go away.

    As Tim puts it: "Some folks never quite escaped their Java background," and in this case, I think it shows. Objects are allocated with new, but never deleted, as if there's some magical garbage collector which is going to find the unused objects and free them.

    [Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

    365 TomorrowsGone In A Flash

    Author: Lewis Richards Two Shuttles slashed through the sheeting rain, trailed by twin comet tails of super heated plasma vaporising any raindrops unfortunate enough to meet them on their spiralling descent toward the fluctuating lights of the colony they raced toward. It had been three days since the Ark-ship above lost contact with the colonists […]

    The post Gone In A Flash appeared first on 365tomorrows.

    Cryptogram China, Russia, Iran, and North Korea Intelligence Sharing

    Former CISA Director Jen Easterly writes about a new international intelligence sharing co-op:

    Historically, China, Russia, Iran & North Korea have cooperated to some extent on military and intelligence matters, but differences in language, culture, politics & technological sophistication have hindered deeper collaboration, including in cyber. Shifting geopolitical dynamics, however, could drive these states toward a more formalized intell-sharing partnership. Such a “Four Eyes” alliance would be motivated by common adversaries and strategic interests, including an enhanced capacity to resist economic sanctions and support proxy conflicts.

    ,

    Krebs on SecurityMicrosoft: 6 Zero-Days in March 2025 Patch Tuesday

    Microsoft today issued more than 50 security updates for its various Windows operating systems, including fixes for a whopping six zero-day vulnerabilities that are already seeing active exploitation.

    Two of the zero-day flaws include CVE-2025-24991 and CVE-2025-24993, both vulnerabilities in NTFS, the default file system for Windows and Windows Server. Both require the attacker to trick a target into mounting a malicious virtual hard disk. CVE-2025-24993 would lead to the possibility of local code execution, while CVE-2025-24991 could cause NTFS to disclose portions of memory.

    Microsoft credits researchers at ESET with reporting the zero-day bug labeled CVE-2025-24983, an elevation of privilege vulnerability in older versions of Windows. ESET said the exploit was deployed via the PipeMagic backdoor, capable of exfiltrating data and enabling remote access to the machine.

    ESET’s Filip Jurčacko said the exploit in the wild targets only older versions of Windows OS: Windows 8.1 and Server 2012 R2. Although these products are still used by millions, their security support ended more than a year ago, and mainstream support ended years ago. However, ESET notes the vulnerability itself also is present in newer Windows OS versions, including Windows 10 build 1809 and the still-supported Windows Server 2016.

    Rapid7’s lead software engineer Adam Barnett said Windows 11 and Server 2019 onwards are not listed as receiving patches, so are presumably not vulnerable.

    “It’s not clear why newer Windows products dodged this particular bullet,” Barnett wrote. “The Windows 32 subsystem is still presumably alive and well, since there is no apparent mention of its demise on the Windows client OS deprecated features list.”

    The zero-day flaw CVE-2025-24984 is another NTFS weakness that can be exploited by inserting a malicious USB drive into a Windows computer. Barnett said Microsoft’s advisory for this bug doesn’t quite join the dots, but successful exploitation appears to mean that portions of heap memory could be improperly dumped into a log file, which could then be combed through by an attacker hungry for privileged information.

    “A relatively low CVSSv3 base score of 4.6 reflects the practical difficulties of real-world exploitation, but a motivated attacker can sometimes achieve extraordinary results starting from the smallest of toeholds, and Microsoft does rate this vulnerability as important on its own proprietary severity ranking scale,” Barnett said.

    Another zero-day fixed this month — CVE-2025-24985 — could allow attackers to install malicious code. As with the NTFS bugs, this one requires that the user mount a malicious virtual hard drive.

    The final zero-day this month is CVE-2025-26633, a weakness in the Microsoft Management Console, a component of Windows that gives system administrators a way to configure and monitor the system. Exploiting this flaw requires the target to open a malicious file.

    This month’s bundle of patch love from Redmond also addresses six other vulnerabilities Microsoft has rated “critical,” meaning that malware or malcontents could exploit them to seize control over vulnerable PCs with no help from users.

    Barnett observed that this is now the sixth consecutive month where Microsoft has published zero-day vulnerabilities on Patch Tuesday without evaluating any of them as critical severity at time of publication.

    The SANS Internet Storm Center has a useful list of all the Microsoft patches released today, indexed by severity. Windows enterprise administrators would do well to keep an eye on askwoody.com, which often has the scoop on any patches causing problems. Please consider backing up your data before updating, and leave a comment below if you experience any issues applying this month’s updates.

    Cryptogram Silk Typhoon Hackers Indicted

    Lots of interesting details in the story:

    The US Department of Justice on Wednesday announced the indictment of 12 Chinese individuals accused of more than a decade of hacker intrusions around the world, including eight staffers for the contractor i-Soon, two officials at China’s Ministry of Public Security who allegedly worked with them, and two other alleged hackers who are said to be part of the Chinese hacker group APT27, or Silk Typhoon, which prosecutors say was involved in the US Treasury breach late last year.

    […]

    According to prosecutors, the group as a whole has targeted US state and federal agencies, foreign ministries of countries across Asia, Chinese dissidents, US-based media outlets that have criticized the Chinese government, and most recently the US Treasury, which was breached between September and December of last year. An internal Treasury report obtained by Bloomberg News found that hackers had penetrated at least 400 of the agency’s PCs and stole more than 3,000 files in that intrusion.

    The indictments highlight how, in some cases, the hackers operated with a surprising degree of autonomy, even choosing targets on their own before selling stolen information to Chinese government clients. The indictment against Yin Kecheng, who was previously sanctioned by the Treasury Department in January for his involvement in the Treasury breach, quotes from his communications with a colleague in which he notes his personal preference for hacking American targets and how he’s seeking to ‘break into a big target,’ which he hoped would allow him to make enough money to buy a car.

    Krebs on SecurityAlleged Co-Founder of Garantex Arrested in India

    Authorities in India today arrested the alleged co-founder of Garantex, a cryptocurrency exchange sanctioned by the U.S. government in 2022 for facilitating tens of billions of dollars in money laundering by transnational criminal and cybercriminal organizations. Sources close to the investigation told KrebsOnSecurity the Lithuanian national Aleksej Besciokov, 46, was apprehended while vacationing on the coast of India with his family.

    Aleksej Bešciokov, “proforg,” “iram”. Image: U.S. Secret Service.

    On March 7, the U.S. Department of Justice (DOJ) unsealed an indictment against Besciokov and the other alleged co-founder of Garantex, Aleksandr Mira Serda, 40, a Russian national living in the United Arab Emirates.

    Launched in 2019, Garantex was first sanctioned by the U.S. Treasury Office of Foreign Assets Control in April 2022 for receiving hundreds of millions in criminal proceeds, including funds used to facilitate hacking, ransomware, terrorism and drug trafficking. Since those penalties were levied, Garantex has processed more than $60 billion, according to the blockchain analysis company Elliptic.

    “Garantex has been used in sanctions evasion by Russian elites, as well as to launder proceeds of crime including ransomware, darknet market trade and thefts attributed to North Korea’s Lazarus Group,” Elliptic wrote in a blog post. “Garantex has also been implicated in enabling Russian oligarchs to move their wealth out of the country, following the invasion of Ukraine.”

    The DOJ alleges Besciokov was Garantex’s primary technical administrator and responsible for obtaining and maintaining critical Garantex infrastructure, as well as reviewing and approving transactions. Mira Serda is allegedly Garantex’s co-founder and chief commercial officer.

    Image: elliptic.co

    In conjunction with the release of the indictments, German and Finnish law enforcement seized servers hosting Garantex’s operations. A “most wanted” notice published by the U.S. Secret Service states that U.S. authorities separately obtained earlier copies of Garantex’s servers, including customer and accounting databases. Federal investigators say they also froze over $26 million in funds used to facilitate Garantex’s money laundering activities.

    Besciokov was arrested within the past 24 hours while vacationing with his family in Varkala, a major coastal city in the southwest Indian state of Kerala. An officer with the local police department in Varkala confirmed Besciokov’s arrest, and said the suspect will appear in a Delhi court on March 14 to face charges.

    Varkala Beach in Kerala, India. Image: Shutterstock, Dmitry Rukhlenko.

    The DOJ’s indictment says Besciokov went by the hacker handle “proforg.” This nickname corresponds to the administrator of a 20-year-old Russian language forum dedicated to nudity and crudity called “udaff.”

    Besciokov and Mira Serda are each charged with one count of conspiracy to commit money laundering, which carries a maximum sentence of 20 years in prison. Besciokov is also charged with one count of conspiracy to violate the International Economic Emergency Powers Act—which also carries a maximum sentence of 20 years in prison—and with conspiracy to operate an unlicensed money transmitting business, which carries a maximum sentence of five years in prison.

    Worse Than FailureRepresentative Line: Broken Up With

    Marco found this wreck, left behind by a former co-worker:

    $("#image_sample").html('<i><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />No image selected, select an image to see how it looks in the banner!</i>');
    
    
    

    This code uses the JQuery library to find an element in the web page with the ID "image_sample", and then replaces its contents with this hard-coded blob of HTML.

    I really appreciate the use of self-closing, XHTML style BR tags, which was a fad between 2000 and 2002, but never truly caught on, and was basically forgotten by the time HTML5 dropped. But this developer insisted that self-closing tags were the "correct" way to write HTML.

    Pity they didn't put any thought into the "correct" way to add blank space to a page beyond line breaks. Or the correct way to populate the DOM that isn't accessing the inner HTML of an element.

    At least this was a former co-worker.

    [Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

    365 TomorrowsOrigin Story

    Author: Majoki Some seven thousand years ago a micrometeorite winged a pine cone, clipped the ear of a very surprised marmot, skewered a large oyster mushroom, and buried itself in the thick duff of a mountainous forest in the north Cascades. Stan Clutterdam knew none of that when he unceremoniously peed on the ancient impact […]

    The post Origin Story appeared first on 365tomorrows.

    ,

    Worse Than FailureCodeSOD: Where is the Validation At?

    As oft stated, the "right" way to validate emails is to do a bare minimum sanity check on format, and then send a verification message to the email address the user supplied; it's the only way to ensure that what they gave you isn't just syntactically valid, but is actually usable.

    But even that simple approach leaves places to go wrong. Take a look at this code, from Lana.

    public function getEmailValidationErrors($data): array
    {
         $errors = [];
         if (isset($data["email"]) && !empty($data["email"])) {
             if (!str_contains($data["email"], "@")) {
                 $error["email"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
             }
             if (!str_contains($data["email"], ".")) {
                 $error["email"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
             }
             if (strrpos($data["email"], "@") > strrpos($data["email"], ".")) {
                 $error["email"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
             }
         }
         if (isset($data["email1"]) && !empty($data["email1"])) {
            if (!str_contains($data["email1"], "@")) {
                $error["email1"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
            }
            if (!str_contains($data["email1"], ".")) {
                $error["email1"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
            }
            if (strrpos($data["email1"], "@") > strrpos($data["email1"], ".")) {
                $error["email1"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
            }
        }
        if (isset($data["email2"]) && !empty($data["email2"])) {
            if (!str_contains($data["email2"], "@")) {
                $error["email2"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
            }
            if (!str_contains($data["email2"], ".")) {
                $error["email2"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
            }
            if (strrpos($data["email2"], "@") > strrpos($data["email2"], ".")) {
                $error["email2"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
            }
        }
        if (isset($data["email3"]) && !empty($data["email3"])) {
            if (!str_contains($data["email3"], "@")) {
                $error["email3"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
            }
            if (!str_contains($data["email3"], ".")) {
                $error["email3"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
            }
            if (strrpos($data["email3"], "@") > strrpos($data["email3"], ".")) {
                $error["email3"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
            }
        }
         return $errors;
    }
    

    Let's start with the obvious problem: repetition. This function doesn't validate simply one email, but four, by copy/pasting the same logic multiple times. Lana didn't supply the repeated blocks, just noted that they existed, so let's not pick on the bad names; "email1" and so on are just my placeholders. I assume it's different contact types for a customer, or similar.

    Now, the other problems range from trivial to comical. First, the PHP function empty returns true if the variable has a zero/falsy value or is not set, which means it implies an isset, making the separate isset check redundant. That's trivial.

    The way the checks get logged into the $error array, they can overwrite each other, meaning if you forget the "@" and the ".", it'll only complain about the ".", but if you forget the ".", it'll complain about not having a valid TLD (the "NO_DOT" error will never be output). That's silly.

    Finally, the $errors array is the return value, but the $error array is where we store our errors, meaning this function never returns any of the errors it records. And that means that it's an email validation function which doesn't do anything at all, which, honestly, is probably for the best.
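
    For what it's worth, a minimal sketch of the same checks without the copy/paste could look like the following (the field names are placeholders, as above, and the error keys simply mirror the originals); errors accumulate per field instead of overwriting each other, and the array that gets built is the array that gets returned.

        public function getEmailValidationErrors($data): array
        {
            $errors = [];
            // Hypothetical field list; the article only tells us there are four of them.
            foreach (['email', 'email1', 'email2', 'email3'] as $field) {
                $value = (string)($data[$field] ?? '');
                if ($value === '') {
                    continue; // empty() already implies isset(), so one check is enough
                }
                $atPos  = strrpos($value, '@');
                $dotPos = strrpos($value, '.');
                if ($atPos === false) {
                    $errors[$field][] = 'FORM.CONTACT_DETAILS.ERRORS.NO_AT';
                }
                if ($dotPos === false) {
                    $errors[$field][] = 'FORM.CONTACT_DETAILS.ERRORS.NO_DOT';
                } elseif ($atPos !== false && $atPos > $dotPos) {
                    $errors[$field][] = 'FORM.CONTACT_DETAILS.ERRORS.NO_TLD';
                }
            }
            return $errors;
        }

    None of which changes the real advice from the top: a bare-bones format check plus a verification message beats clever string inspection.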

    [Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

    365 TomorrowsBystander

    Author: Julian Miles, Staff Writer They’re running about again, but at least they’re looking happy about it. When I – we – got here, there was running, but only grim faces. Has it only been six days? Can’t have been. Wait. Go through it. Day one would have been after I heard the crash during […]

    The post Bystander appeared first on 365tomorrows.

    Cryptogram Thousands of WordPress Websites Infected with Malware

    The malware includes four separate backdoors:

    Creating four backdoors facilitates the attackers having multiple points of re-entry should one be detected and removed. A unique case we haven’t seen before. Which introduces another type of attack made possible by abusing websites that don’t monitor 3rd party dependencies in the browser of their users.

    The four backdoors:

    The functions of the four backdoors are explained below:

    • Backdoor 1, which uploads and installs a fake plugin named “Ultra SEO Processor,” which is then used to execute attacker-issued commands
    • Backdoor 2, which injects malicious JavaScript into wp-config.php
    • Backdoor 3, which adds an attacker-controlled SSH key to the ~/.ssh/authorized_keys file so as to allow persistent remote access to the machine
    • Backdoor 4, which is designed to execute remote commands and fetches another payload from gsocket[.]io to likely open a reverse shell.

    ,

    365 TomorrowsThe Great Oak

    Author: James Jarvis The green leaves of The Great Oak glistened in the starlight. The air was still and calming. It was exactly what Liza expected. She wandered over to the base of the tree whilst deep in thought. The beauty of The Great Oak was amplified by its location. Situated within its own room […]

    The post The Great Oak appeared first on 365tomorrows.

    ,

    365 TomorrowsThe Comforts of Home

    Author: Soramimi Hanarejima When you open the door, it’s like I’m looking at an old photo, you and the hallway tinged a sentimental amber by the redshift of the decades between us. “Do you want to come in?” you ask, voice muffled by all those years. “I just got some lasagna out of the oven.” […]

    The post The Comforts of Home appeared first on 365tomorrows.

    Krebs on SecurityFeds Link $150M Cyberheist to 2022 LastPass Hacks

    In September 2023, KrebsOnSecurity published findings from security researchers who concluded that a series of six-figure cyberheists across dozens of victims resulted from thieves cracking master passwords stolen from the password manager service LastPass in 2022. In a court filing this week, U.S. federal agents investigating a spectacular $150 million cryptocurrency heist said they had reached the same conclusion.

    On March 6, federal prosecutors in northern California said they seized approximately $24 million worth of cryptocurrencies that were clawed back following a $150 million cyberheist on Jan. 30, 2024. The complaint refers to the person robbed only as “Victim-1,” but according to blockchain security researcher ZachXBT the theft was perpetrated against Chris Larsen, the co-founder of the cryptocurrency platform Ripple. ZachXBT was the first to report on the heist.

    This week’s action by the government merely allows investigators to officially seize the frozen funds. But there is an important conclusion in this seizure document: It basically says the U.S. Secret Service and the FBI agree with the findings of the LastPass breach story published here in September 2023.

    That piece quoted security researchers who said they were witnessing six-figure crypto heists several times each month that all appeared to be the result of crooks cracking master passwords for the password vaults stolen from LastPass in 2022.

    “The Federal Bureau of Investigation has been investigating these data breaches, and law enforcement agents investigating the instant case have spoken with FBI agents about their investigation,” reads the seizure complaint, which was written by a U.S. Secret Service agent. “From those conversations, law enforcement agents in this case learned that the stolen data and passwords that were stored in several victims’ online password manager accounts were used to illegally, and without authorization, access the victims’ electronic accounts and steal information, cryptocurrency, and other data.”

    The document continues:

    “Based on this investigation, law enforcement had probable cause to believe the same attackers behind the above-described commercial online password manager attack used a stolen password held in Victim 1’s online password manager account and, without authorization, accessed his cryptocurrency wallet/account.”

    Working with dozens of victims, security researchers Nick Bax and Taylor Monahan found that none of the six-figure cyberheist victims appeared to have suffered the sorts of attacks that typically preface a high-dollar crypto theft, such as the compromise of one’s email and/or mobile phone accounts, or SIM-swapping attacks.

    They discovered the victims all had something else in common: Each had at one point stored their cryptocurrency seed phrase — the secret code that lets anyone gain access to your cryptocurrency holdings — in the “Secure Notes” area of their LastPass account prior to the 2022 breaches at the company.

    Bax and Monahan found another common theme with these robberies: They all followed a similar pattern of cashing out, rapidly moving stolen funds to a dizzying number of drop accounts scattered across various cryptocurrency exchanges.

    According to the government, a similar level of complexity was present in the $150 million heist against the Ripple co-founder last year.

    “The scale of a theft and rapid dissipation of funds would have required the efforts of multiple malicious actors, and was consistent with the online password manager breaches and attack on other victims whose cryptocurrency was stolen,” the government wrote. “For these reasons, law enforcement agents believe the cryptocurrency stolen from Victim 1 was committed by the same attackers who conducted the attack on the online password manager, and cryptocurrency thefts from other similarly situated victims.”

    Reached for comment, LastPass said it has seen no definitive proof — from federal investigators or others — that the cyberheists in question were linked to the LastPass breaches.

    “Since we initially disclosed this incident back in 2022, LastPass has worked in close cooperation with multiple representatives from law enforcement,” LastPass said in a written statement. “To date, our law enforcement partners have not made us aware of any conclusive evidence that connects any crypto thefts to our incident. In the meantime, we have been investing heavily in enhancing our security measures and will continue to do so.”

    On August 25, 2022, LastPass CEO Karim Toubba told users the company had detected unusual activity in its software development environment, and that the intruders stole some source code and proprietary LastPass technical information. On Sept. 15, 2022, LastPass said an investigation into the August breach determined the attacker did not access any customer data or password vaults.

    But on Nov. 30, 2022, LastPass notified customers about another, far more serious security incident that the company said leveraged data stolen in the August breach. LastPass disclosed that criminal hackers had compromised encrypted copies of some password vaults, as well as other personal information.

    Experts say the breach would have given thieves “offline” access to encrypted password vaults, theoretically allowing them all the time in the world to try to crack some of the weaker master passwords using powerful systems that can attempt millions of password guesses per second.

    Researchers found that many of the cyberheist victims had chosen master passwords with relatively low complexity, and were among LastPass’s oldest customers. That’s because legacy LastPass users were more likely to have master passwords that were protected with far fewer “iterations,” which refers to the number of times your password is run through the company’s encryption routines. In general, the more iterations, the longer it takes an offline attacker to crack your master password.
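
    As a rough illustration of why iterations matter (the counts below are arbitrary examples, not LastPass’s actual settings), each additional round of PBKDF2 multiplies the work an offline attacker has to do per password guess:

        // Hypothetical timing sketch: one master-password guess at various
        // PBKDF2-SHA256 iteration counts.
        foreach ([500, 5000, 600000] as $iterations) {
            $start = microtime(true);
            hash_pbkdf2('sha256', 'candidate-master-password', 'example-salt', $iterations, 32, true);
            printf("%7d iterations: %.4f seconds per guess on this machine\n",
                   $iterations, microtime(true) - $start);
        }

    Going from a few hundred iterations to several hundred thousand makes each guess roughly a thousand times more expensive for the attacker.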

    Over the years, LastPass forced new users to pick longer and more complex master passwords, and they increased the number of iterations on multiple occasions by several orders of magnitude. But researchers found strong indications that LastPass never succeeded in upgrading many of its older customers to the newer password requirements and protections.

    Asked about LastPass’s continuing denials, Bax said that after the initial warning in our 2023 story, he naively hoped people would migrate their funds to new cryptocurrency wallets.

    “While some did, the continued thefts underscore how much more needs to be done,” Bax told KrebsOnSecurity. “It’s validating to see the Secret Service and FBI corroborate our findings, but I’d much rather see fewer of these hacks in the first place. ZachXBT and SEAL 911 reported yet another wave of thefts as recently as December, showing the threat is still very real.”

    Monahan said LastPass still hasn’t alerted their customers that their secrets—especially those stored in “Secure Notes”—may be at risk.

    “It’s been two and a half years since LastPass was first breached [and] hundreds of millions of dollars has been stolen from individuals and companies around the globe,” Monahan said. “They could have encouraged users to rotate their credentials. They could’ve prevented millions and millions of dollars from being stolen by these threat actors. But instead they chose to deny that their customers were at risk and blame the victims instead.”

    ,

    Cryptogram Rayhunter: Device to Detect Cellular Surveillance

    The EFF has created an open-source hardware tool to detect IMSI catchers: fake cell phone towers that are used for mass surveillance of an area.

    It runs on a $20 mobile hotspot.

    365 TomorrowsThrough His Window

    Author: Nageene Noor The world through Viktor Blackford’s window was quiet. Hannibal always started with the window, and it became a habit like an anchor, before he let himself sink into Viktor’s home. From where Hannibal observed, his whole life was mundane. Viktor was meticulously ordinary. Every evening, he cooked simple meals, worked at his […]

    The post Through His Window appeared first on 365tomorrows.

    Worse Than FailureError'd: Tomorrow

    It's only a day away!

    Punctual Robert F. never procrastinates. But I think now would be a good time for a change. He worries that "I better do something quick, before my 31,295 year deadline arrives."


    Stewart suffers so, saying "Whilst failing to check in for a flight home on the TUI app (one of the largest European travel companies), their Harry Potter invisibility cloak slipped. Perhaps I'll just have to stay on holiday?" You have my permission, just tell the boss I said so.


    Diligent Dan H. is in no danger of being replaced. Says Dan, "My coworker was having problems getting regular expressions to work in a PowerShell script. She asked Bing's Copilot for help - and was it ever helpful!"


    PSU alum (I'm guessing) Justin W. was overwhelmed in Happy Valley. "I was just trying to find out when the game started. This is too much date math for my brain to figure out."


    Finally, bug-loving Pieter caught this classic. "They really started with a blank slate for the newest update. I'm giving them a solid %f for the effort."


    [Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

    Krebs on SecurityWho is the DOGE and X Technician Branden Spikes?

    At 49, Branden Spikes isn’t just one of the oldest technologists who has been involved in Elon Musk’s Department of Government Efficiency (DOGE). As the current director of information technology at X/Twitter and an early hire at PayPal, Zip2, Tesla and SpaceX, Spikes is also among Musk’s most loyal employees. Here’s a closer look at this trusted Musk lieutenant, whose Russian ex-wife was once married to Elon’s cousin.

    The profile of Branden Spikes on X.

    When President Trump took office again in January, he put the world’s richest man — Elon Musk — in charge of the U.S. Digital Service, and renamed the organization as DOGE. The group is reportedly staffed by at least 50 technologists, many of whom have ties to Musk’s companies.

    DOGE has been enabling the president’s ongoing mass layoffs and firings of federal workers, largely by seizing control over computer systems and government data for a multitude of federal agencies, including the Social Security Administration, the Department of Homeland Security, the Office of Personnel Management, and the Treasury Department.

    It is difficult to find another person connected to DOGE who has stronger ties to Musk than Branden Spikes. A native of California, Spikes initially teamed up with Musk in 1997 as a lead systems engineer for the software company Zip2, the first major venture for Musk. In 1999, Spikes was hired as director of IT at PayPal, and in 2002 he became just the fourth person hired at SpaceX.

    In 2012, Spikes launched Spikes Security, a software product that sought to create a compartmentalized or “sandboxed” web browser that could insulate the user from malware attacks. A review of spikes.com in the Wayback Machine shows that as far back as 1998, Musk could be seen joining Spikes for team matches in the online games Quake and Quake II. In 2016, Spikes Security was merged with another security suite called Aurionpro, with the combined company renamed Cyberinc.

    A snapshot of spikes.com from 1998 shows Elon Musk’s profile in Spike’s clan for the games Quake and Quake II.

    Spikes’s LinkedIn profile says he was appointed head of IT at X in February 2025. And although his name shows up on none of the lists of DOGE employees circulated by various media outlets, multiple sources told KrebsOnSecurity that Spikes was working with DOGE and operates within Musk’s inner circle of trust.

    In a conversation with KrebsOnSecurity, Spikes said he is dedicated to his country and to saving it from what he sees as certain ruin.

    “Myself, I was raised by a southern conservative family in California and I strongly believe in America and her future,” Spikes said. “This is why I volunteered for two months in DC recently to help DOGE save us from certain bankruptcy.”

    Spikes told KrebsOnSecurity that he recently decided to head back home and focus on his job as director of IT at X.

    “I loved it, but ultimately I did not want to leave my hometown and family back in California,” Spikes said of his tenure at DOGE. “After a couple of months it became clear that to continue helping I would need to move to DC and commit a lot more time, so I politely bowed out.”

    Prior to founding Spikes Security, Branden Spikes was married to a native Russian woman named Natalia whom he’d met at a destination wedding in South America in 2003.

    Branden and Natalia’s names are both on the registration records for the domain name orangetearoom[.]com. This domain, which DomainTools.com says was originally registered by Branden in 2009, is the home of a tax-exempt charity in Los Angeles called the California Russian Association.

    Here is a photo from a 2011 event organized by the California Russian Association, showing Branden and Natalia at one of its “White Nights” charity fundraisers:

    Branden and Natalia Spikes, on left, in 2011. The man on the far right is Ivan Y. Podvalov, a board member of the Kremlin-aligned Congress of Russian Americans (CRA). The man in the center is Feodor Yakimoff, director of operations at the Transib Global Sourcing Group, and chairman of the Russian Imperial Charity Balls, which works in concert with the Russian Heritage Foundation.

    In 2011, the Spikes couple got divorced, and Natalia changed her last name to Haldeman. That is not her maiden name, which appears to be “Libina.” Rather, Natalia acquired the surname Haldeman in 1998, when she married Elon Musk’s cousin.

    Reeve Haldeman is the son of Scott Haldeman, who is the brother of Elon Musk’s mother, Maye Musk. Divorce records show Reeve and Natalia officially terminated their marriage in 2007. Reeve Haldeman did not respond to a request for comment.

    A review of other domain names connected to Natalia Haldeman’s email address show she has registered more than a dozen domains over the years that are tied to the California Russian Association, and an apparently related entity called the Russian Heritage Foundation, Inc.:

    russianamericans.org
    russianamericanstoday.com
    russianamericanstoday.org
    russiancalifornia.org
    russianheritagefoundation.com
    russianheritagefoundation.org
    russianwhitenights.com
    russianwhitenights.org
    theforafoundation.org
    thegoldentearoom.com
    therussianheritagefoundation.org
    tsarinahome.com

    Ms. Haldeman did not respond to requests for comment. Her name and contact information appear in the registration records for these domains dating back to 2010, and a document published by ProPublica shows that by 2016 Natalia Haldeman had been appointed CEO of the California Russian Foundation.

    The domain name that bears both Branden’s and Natalia’s names — orangetearoom.com — features photos of Ms. Haldeman at fundraising events for the Russian foundation through 2014. Additional photos of her and many of the same people can be seen through 2023 at another domain she registered in 2010 — russianheritagefoundation.com.

    A photo from Natalia Haldeman’s Facebook page shows her mother (left) pictured with Maye Musk, Elon Musk’s mother, in 2022.

    The photo of Branden and Natalia above is from one such event in 2011 (tied to russianwhitenights.org, another Haldeman domain). The person on the right in that image — Ivan Y. Podvalov — appears in many fundraising event photos published by the foundation over the past decade. Podvalov is a board member of the Congress of Russian Americans (CRA), a nonprofit group that is known for vehemently opposing U.S. financial and legal sanctions against Russia.

    Writing for The Insider in 2022, journalist Diana Fishman described how the CRA has engaged in outright political lobbying, noting that the organization in June 2014 sent a letter to President Obama and the secretary of the United Nations, calling for an end to the “large-scale US intervention in Ukraine and the campaign to isolate Russia.”

    “The US military contingents must be withdrawn immediately from the Eastern European region, and NATO’s enlargement efforts and provocative actions against Russia must cease,” the message read.

    The Insider said the CRA director sent another two letters, this time to President Donald Trump, in 2017 and 2018.

    “One was a request not to sign a law expanding sanctions against Russia,” Fishman wrote. “The other regretted the expulsion of 60 Russian diplomats from the United States and urged not to jump to conclusions on Moscow’s involvement in the poisoning of Sergei Skripal.”

    The nonprofit tracking website CauseIQ.com reports that The Russian Heritage Foundation, Inc. is now known as Constellation of Humanity.

    The Russian Heritage Foundation and the California Russian Association both promote the interests of the Russian Orthodox Church. This page indexed by Archive.org from russiancalifornia.org shows The California Russian Foundation organized a community effort to establish an Orthodox church in Orange County, Calif.

    A press release from the Russian Orthodox Church Outside of Russia (ROCOR) shows that in 2021 the Russian Heritage Foundation donated money to organize a conference for the Russian Orthodox Church in Serbia.

    A review of the “Partners” listed on the Spikes’ jointly registered domain — orangetearoom.com — shows the organization worked with a marketing company called Russian American Media. Reporting by KrebsOnSecurity last year showed that Russian American Media also partners with the problematic people-search service Radaris, which was formed by two native Russian brothers in Massachusetts who have built a fleet of consumer data brokers and Russian affiliate programs.

    When asked about his ex-wife’s history, Spikes said she has a good heart and bears no ill-will toward anyone.

    “I attended several of Natalia’s social events over the years we were together and can assure you that she’s got the best intentions with those,” Spikes told KrebsOnSecurity. “There’s no funny business going on. It is just a way for those friendly immigrants to find resources amongst each other to help get settled in and chase the American dream. I mean, they’re not unlike the immigrants from other countries who come to America and try to find each other and help each other find others who speak the language and share in the building of their businesses here in America.”

    Spikes said his own family roots go back deeply into American history, sharing that his 6th great grandfather was Alexander Hamilton on his mom’s side, and Jesse James on his dad’s side.

    “My family roots are about as American as you can get,” he said. “I’ve also been entrusted with building and safeguarding Elon’s companies since 1999 and have a keen eye (as you do) for bad actors, so have enough perspective to tell you that Natalia has no bad blood and that she loves America.”

    Of course, this perspective comes from someone who has the utmost regard for the interests of the “special government employee” Mr. Musk, who has been bragging about tossing entire federal agencies into the “wood chipper,” and who recently wielded an actual chainsaw on stage while referring to it as the “chainsaw for bureaucracy.”

    “Elon’s intentions are good and you can trust him,” Spikes assured.

    A special note of thanks for research assistance goes to Jacqueline Sweet, an independent investigative journalist whose work has been published in The Guardian, Rolling Stone, POLITICO and The Intercept.

    ,

    ME8k Video Cards

    I previously blogged about getting an 8K TV [1]. Now I’m working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which according to its specs can do 8K [2], using a mini-DisplayPort to HDMI cable rated for 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.

    The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3], both versions 1.4 and 1.4a top out at HBR3 speed, and the difference is which version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but can be bad for text. According to the same page, version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
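
    As a back-of-the-envelope check (my own rough numbers: an HBR3 payload of about 25.9 Gbit/s after encoding overhead, and ignoring blanking intervals), 8K at 30Hz sits right at the edge of what DisplayPort 1.4 can carry without DSC:

        // Quick and dirty pixel-rate calculation for 8K at 30Hz.
        $width = 7680; $height = 4320; $refresh = 30;
        foreach ([24 => '8 bits per channel', 30 => '10 bits per channel (HDR)'] as $bpp => $label) {
            $gbits = $width * $height * $refresh * $bpp / 1e9;
            printf("%-25s about %.1f Gbit/s of pixel data (HBR3 payload ~25.9 Gbit/s)\n", $label, $gbits);
        }

    So 8 bit colour at 30Hz only just fits, while 10 bit HDR at the same refresh rate is over the line and needs DSC or a lower refresh rate.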

    My theories as to why it doesn’t work are:

    • NVidia specs lie
    • My 8K cable isn’t really an 8K cable
    • Something weird happens converting DisplayPort to HDMI
    • The video card can only handle refresh rates for 8K that don’t match supported input for the TV

    To get some more input on this issue I posted on Lemmy, here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts. I haven’t tried any others and can’t review it, but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers” which is positive; I recommend that everyone who’s into FOSS create an account there or on some other Lemmy server.

    My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K; it also does DisplayPort 1.4 so it might have the same issues, and apparently FOSS drivers don’t support 8K over HDMI because the people who manage the HDMI specs are jerks. It’s a $200 card at MSY and a bit less on ebay, so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI, and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output, and it has the additional benefit of not needing DisplayPort to HDMI conversion.

    The best option apparently is the Intel cards, which do DisplayPort internally and convert to HDMI in hardware, avoiding the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6]: HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and it’s faster than the low end cards like the RX 6400. But the local computer store price is $470 and the ebay price is a bit over $400. If it turns out to not do what I need it will still be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.

    Any suggestions?

    LongNowA Logic for the Future

    💡
    JOIN US IN PERSON AND ONLINE for Stephen Heintz & Kim Stanley Robinson's Long Now Talk, A Logic For The Future: International Relations in the Age of Turbulence, on March 19, 02025 at 7 PM PT at the Cowell Theater in San Francisco.
    The Marshall Islands coping with the effects of climate change and rising sea levels. Credit: Asian Development Bank on Flickr

    Introduction


    Recent years have provided definitive evidence that we are living in an age of exceptional complexity and turbulence. The war of aggression raging in Ukraine has already taken as many as 500,000 lives, and prospects for a near-term resolution to the fighting are dim. The Middle East is once again convulsed by war. Another 182 significant violent conflicts are destroying lives and livelihoods across the globe — the highest number in more than three decades.1 Escalating great power competition threatens to trigger violent great power confrontation. Environmentally, heat waves, wildfires, and floods have also taken thousands of lives while causing enormous economic losses, disrupting food supplies across the world,2 and “turbocharging what is already the worst period of forced displacement and migration in history.”3 02023, the hottest year on record, was surpassed by 02024, the first year in which temperatures exceeded the Paris Agreement target of 1.5 degrees Celsius above preindustrial levels. The rapid loss of glacial and sea ice augurs a tipping point in sea-level rise. Political polarization is crippling many of the world’s advanced democracies, and authoritarianism is on the rise. In a time of growing demand for public sector services and investments, debt levels in both developing and developed economies have reached record highs. Environmental, economic, and political forecasts suggest that these challenges, as well as human and ecological suffering, will only become more difficult to surmount in the years ahead.

    United Nations Peace Bell Ceremony in observance of International Peace day. Photo by Rick Bajornas / UN Photo

    History is often told as a story of turbulence, and there have been periods, even in recent memory, of wider and more brutal warfare, genocide, violent revolution, and political repression. But what distinguishes this period in human history is the confluence of forces — political, geo-strategic, economic, social, technological, and environmental, as well as interactions among them — that fuel the turbulence that we see today. Many of the causes and consequences of present-day turmoil are transnational or even global in nature. These conflicts have no regard for borders and are not responsive to solutions devised and implemented by individual nation-states or the existing ecosystem of multilateral institutions. Furthermore, humankind is facing the possibility of three interrelated risks that may prove to be existential threats: (1) the accelerating climate crisis; (2) a new nuclear arms race between China, the United States, and Russia (along with the associated proliferation risks4); and (3) the advent of potentially hyper-disruptive technologies such as generative artificial intelligence (and the prospect of general artificial intelligence5), neuro-technology, and biomedical or biomanufacturing technologies “whose abuse and misuse could lead to catastrophe.”6

    What distinguishes this period in human history is the confluence of forces — political, geo-strategic, economic, social, technological, and environmental, as well as interactions among them — that fuel the turbulence that we see today.
    Vistula River, Poland. Photo by Shalith / iStock

    The institutions that have guided international relations and global problem solving since the mid-20th century are clearly no longer capable of addressing the challenges of the new millennium. They are inefficient, ineffective, anachronistic, and, in some cases, simply obsolete. As Roger Cohen of The New York Times noted, “With inequality worsening, food security worsening, energy security worsening, and climate change accelerating, more countries are asking what answers the post-01945 Western-dominated order can provide.”7

    Over millions of years, humankind has proven remarkably resilient, capable of innovating its way through periods of grave existential threat while simultaneously developing cultural, societal, and technological means of improving the human condition. Human advancements have given rise to nearly 31,000 languages, significantly prolonged life expectancy, lifted hundreds of millions out of abject poverty, and extended human rights to populations across the world. Human ingenuity landed a man on the moon and invented the internet. Through vision, creativity, and diligence, humankind can — and must — develop an international framework that can guide us toward a more peaceful, more humane, and more equitable global society, as well as a thriving planetary ecosystem, all by the end of this century.

    The challenge of designing a better international system is a difficult one, but choosing to ignore the necessity of reform is a far greater failure than striving and falling short.

    Readers of this paper may find some of the ideas presented to be idealistic or even utopian. But this essay is intended to address the question of what might be, not merely what can be. As proven throughout history, human consciousness endows us all with the ability to make changes that contribute to longer and better lives. The challenge of designing a better international system is a difficult one, but choosing to ignore the necessity of reform is a far greater failure than striving and falling short.

    History is replete with examples of hinge moments when change once thought improbable or even impossible occurs. Recent examples include Lyndon Johnson’s invocation of “We Shall Overcome” in his speech to Congress urging passage of the Civil Rights Bill, the transformation of South Korea into a vibrant democracy and competitive market economy, the fall of the Berlin Wall and collapse of Soviet communism, and Nelson Mandela’s “long walk to freedom.”

    Even in very dark moments, visionary leaders can pierce the darkness and imagine a brighter future. Franklin Roosevelt and Winston Churchill drafted the Atlantic Charter in August 01941 when most of the European continent had been conquered by Hitler, the United States was not yet at war, and the United Kingdom was fighting for its survival. The charter boldly articulated a vision for a postwar world in which all people could live in freedom from fear and want, and the nations of the world would eschew the use of force and work collectively to advance peace and economic prosperity. This vision, written on a destroyer off the coast of Newfoundland, served as a foundational step toward the creation of the United Nations in 01945.

    This essay is intended to address the question of what might be, not merely what can be.
    People's Climate March 02014. Photo by Robert van Waarden / Survival Media Agency

    Moments of profound challenge offer opportunity to convert today’s idealism into tomorrow’s realism. Writing in 01972, German historian and philosopher Hannah Arendt reminded us that we are not consigned to live with things as they are: “We are free to change the world and start something new in it.”8 This paper is offered in that spirit.

    Part I examines the origins and evolution of the logic that underlies today’s system of international relations and offers a revised logic for the future. Part II applies this new logic to the global political landscape and proposes alterations to the institutions and mechanisms of the current international system to better meet the global challenges of this century.

    Iran nuclear deal negotiators in Vienna, 02015. Photo by Dragan Tatic

    Part I. The Logic of International Relations

    In 01980, the management theorist and consultant Peter Drucker authored a book called Managing in Turbulent Times. Drucker’s central thesis was that the greatest danger in times of turbulence is not turbulence itself; rather, it is “acting with yesterday’s logic.” This fairly describes our current predicament. Though we are faced with multiple, diverse, complex, and possibly even existential challenges, we stubbornly continue to respond with yesterday’s logic and the institutional framework it inspired. The logic of the present remains rooted in the logic of the past, with many of its core elements originating from the first known international treaties in Mesopotamia or those between warring Greek city-states. Many others were first articulated and codified in 17–19th century Europe; for example, the 01648 Peace of Westphalia is widely regarded as the international legal framework that birthed the enduring concept of nation-state sovereignty which, three centuries later, was enshrined in the United Nations Charter. Over time, our legacy systems have grown from these and other roots to become the international institutions of the present day. 

    Female leadership at COP21 in Paris, 02015. Photo by Arnaud Bouissou / MEDDE

    As we work to devise a global framework fit to purpose for the extraordinary challenges of this century, it is essential to examine the most important elements of the logic of the past to determine which of these should be retained, which should be revised, which should be retired, and what new concepts will be required. Any future system will almost certainly be an amalgam of the ancient, modern, and new — the combination of these elements will be the foundation of its effectiveness and resilience: resonating with human experience while also inspiring the future. A deeper understanding of the roots and evolution of the existing international system will allow us to develop ideas for a new global framework that will enable us to manage this age of turbulence.

    Logic Inventory

    When seeking to understand a complex system, it can be useful to take an inventory of its most important elements. An examination of the roots and evolution of the existing “rules-based international order” reveals 12 concepts that together can be understood as the core elements of the “logic of the past.” These concepts continue to guide contemporary international relations and global problem solving. The following logic inventory itemizes these concepts and suggests revisions for a logic of the future that can help us better manage the challenges of this century.


    It is no surprise that humans have assumed a position of dominance in the hierarchy of life. We have yet to encounter another species with a comparable combination of physical and intellectual capacities. We have employed the advantages of humankind to birth spectacular discoveries and inventions, leading to the organization of society and the building of the modern world, all the while assuming that the rest of nature is ours to harness with the goal of sustaining and improving the human condition. However, human activity, most notably the burning of fossil fuels, threatens the very viability of life on our planet. We are approaching multiple climate-related tipping points, and Earth’s biosystem is experiencing a profound crisis, marked by biodiversity loss of a magnitude often referred to as the Sixth Mass Extinction. Global biodiversity is being lost more rapidly than at any other time in recorded human history.9

    The logic of the future must see human beings as a part of nature rather than apart from it. We must see our existence within the extraordinary web of the entire community of life10 on our planet, which includes some eight million other species. Our lives and livelihoods are dependent on this vibrant biodiversity, and we endanger the survival of our species when we despoil or deplete it. Biodiversity conservation is both a moral imperative and a material requirement to ensure a sustainable planetary ecosystem and a thriving human society.

    Ecotourism in Yasuní National Park, Ecuador. Photo by SL_Photography / iStock

    The concept of sovereignty has been central to international relations ever since the Peace of Westphalia sought to resolve the territorial and religious disputes of the Thirty Years’ War (the most savage war in European history at the time). Paired with the principle of non-interference in states’ internal affairs, the concept of sovereignty was refined and reinforced by the great 19th-century diplomats who, in the Congress of Vienna and the Concert of Europe (01814–01815), brought an end to the Napoleonic Wars and laid the foundation for a remarkably durable peace that heralded rapid technological and economic progress. As Henry Kissinger noted, “The period after 01815 was the first attempt in peacetime to organize the international order through a system of conferences, and the first explicit effort by the great powers to assert a right of control.”11 Thus was also born the modern practice of diplomacy and the organization of multilateral structures of sovereign states. Sovereignty, coupled with the right of self-determination, was central to the Treaty of Versailles at the end of World War I as well as Woodrow Wilson’s League of Nations, and was codified in the United Nations Charter of 01945.

    The principles of sovereignty have also been invoked to define the relationship between the state and private entities — in particular, corporations and businesses. The notion of corporate sovereignty is used to argue for limited government intervention in the market. Consequently, the concept of sovereignty is core to the logic of both international relations and political economy. 

    Critics of the primacy of national sovereignty, such as German feminist foreign policy advocate Kristina Lunz, argue that the concept of national sovereignty rests on the “notion of a homogeneous ethnic community (the ‘people’ or ‘nation’), which coincides with the territorial-legal government (the ‘state’). This leads to claims of absoluteness towards other states and intolerance of minorities.”12

    In the latter decades of the 20th century, important innovations in what can be termed “pooled sovereignty” or “collaborative sovereignty” were devised to overcome some of the inherent limitations of individual states, especially with regard to their ability to influence economic, geopolitical, and environmental affairs. These include the European Union, comprising 27 member states that collectively manage a vast agenda of economic, social, and foreign policy matters; NATO, a collective defense alliance currently composed of 31 countries; and other regional organizations like the African Union (AU), the Association of Southeast Asian Nations (ASEAN), the Organization of American States (OAS), and the Pacific Islands Forum (PIF).

    02016 Pulse of Europe Rally, Frankfurt. Photo by Raimond Spekking / CC BY-SA 4.0 (via Wikimedia Commons)

    All of these are important venues for collaboration and collective decision making by nation-states, but the EU stands out as the most fully developed, most democratic, and most effective framework for the collective governance of key transnational domains. The EU was invented as a peace project following two devastating European wars, and it has successfully kept the peace among its members for 70 years. The goal of creating a wider European zone of peace, stability, and prosperity, as well as the appeal of EU membership, resulted in multiple waves of EU expansion, most notably the 02004 accession, when 10 countries, including seven former members of the Warsaw Pact, joined the EU. Today, the EU is a dynamic and productive single market and the second-largest economy (in nominal terms) after the United States. It is the world’s largest trader of manufactured goods and services and ranks first in both inbound and outbound foreign direct investment. In today’s multipolar world, the EU is a powerful node, often aligned with the United States but willing to steer its own course, as with China. European politics are complex, but the structures and processes of the EU have proven to be remarkably effective at managing contentious issues and taking on difficult regulatory challenges, such as data protection and privacy and the establishment of an initial regime for the regulation of artificial intelligence (AI). The EU is perhaps the greatest single political achievement of the second half of the 20th century — as one French cabinet minister remarked, “We must recall that the EU is a daily miracle.”13

    With the adoption of “The Responsibility to Protect” (R2P) doctrine at the 02005 World Summit, global leaders advanced the new norm of tasking sovereign states with the responsibility of protecting their populations from “genocide, war crimes, ethnic cleansing and crimes against humanity.”14 When national governments are incapable or unwilling to do so, R2P authorizes collective action by the Security Council to protect populations under threat. This can include the use of force in cooperation — as appropriate — with relevant regional organizations. The adoption of R2P was a significant shift in conceptual thinking about sovereignty and non-interference. However, its application proved controversial in the case of Libya when the Security Council authorized action against dictator Muammar al-Qaddafi’s forces to prevent attacks on Libya’s civilian population in 02011. 

    National sovereignty, with further revisions, remains an important concept for the logic of the future. States will continue to be an essential nexus of governance and accountability to their citizens. Many states in Africa, Latin America, and Asia only recently achieved their sovereign independence from colonial rule — having fought for it for decades, they are not eager to give it up. Nevertheless, it is increasingly clear that individual states, as well as the multilateral institutions and processes in which they participate, are incapable of effectively addressing the urgent transnational and planetary challenges of our age.

    Pooled or collaborative sovereignty shows significant promise, but effective management of the age of turbulence will require institutions of shared sovereignty to adopt expanded democratic norms and processes (e.g., legitimacy, transparency, inclusive participation, and efficient decision making through qualified majorities) that achieve sufficient consensus among the participating states. However, it should be noted that collective approaches will expand only to the extent that the benefits of sharing sovereignty can be shown to clearly outweigh the reduction in national prerogatives and powers. In addition to sharing sovereignty, states will need to devolve power and authority to sub-national levels of governance (cities, regions, and communities) to address the consequences of global turbulence (whether from climate change, conflict, or migration) on local populations. Furthermore, the equitable distribution of critical resources — financial and otherwise — must accompany the delegation of authority to sub-national governments.

    Given the persistence of human rights violations and the loss of innocent lives caught in conflict zones, it may also be time to consider advancing the concept of human sovereignty to more fully achieve the aspirations of the 01948 Universal Declaration of Human Rights, such that “the inherent dignity and equal and inalienable rights of all members of the human family…[are understood to be]…the foundation of freedom, justice, and peace in the world.”15

    U.N. Human Rights Commission Chair Eleanor Roosevelt, 01949. Source: FDR Presidential Library & Museum

    Unsurprisingly, the longstanding reliance on national sovereignty has also reinforced the importance of national interest in the conduct of international relations. For many international relations theorists and practitioners, the logic of national interest is unassailable — legitimate governments are expected to respond to the needs of their citizens. Yet there are three fundamental challenges to a singular focus on national interest: The first challenge, of course, is when the self-defined national interests of one state or collection of states conflict with the interests of one or more other states. Interstate conflicts catalyzed the development of the precepts and practice of international law in the service of peaceful dispute resolution. However, as we have seen time and again, states (and non-state actors) all too frequently bypass dispute resolution mechanisms and resort to the use of force. The second challenge involves national leaders pursuing their interpretation of national interest without the democratic engagement of the public. Autocrats and dictators launch wars with little to no public debate or democratic oversight. A third — and growing — challenge to the primacy of national interest is the problem of the global commons: the global resources that sustain human civilization, such as the air we breathe, the water we drink, the sources of energy that power the global economy, and the international sea lanes that ensure the free transit of goods. A focus on national interests can impair equitable access to global public goods.  

    Like the related concept of sovereignty, national interest will continue to be an element of the logic of the future. In this century, however, the primacy of national interest must be diluted and greater attention focused on the global commons. The concept of “common but differentiated responsibilities,”16 formalized in the United Nations Framework Convention on Climate Change (UNFCCC) in 01992, provides an important model that can be applied in the broader context of international political, security, and economic relations. Harvard political scientist Joseph Nye reminds us of a theory promulgated by Charles Kindleberger, an architect of the 01948 Marshall Plan. Kindleberger argued that the international chaos of the 01930s resulted from the failure of the United States to provide global public goods after it replaced Great Britain as the largest global power.17 In the more diffused power realities of the 21st century, attending to the global commons must be a collective responsibility and priority. The realities of global interdependence and the singularity of Earth’s biosystem demand that states see their self-interest as inextricably linked to global interests. 


    Ever since the Concert of Europe, international relations have been dominated by various configurations of great powers. The United Kingdom, France, Austria (later Austria-Hungary), Prussia (later Germany), and Russia were the dominant powers from 01814 to 01914. America’s entry into WWI and Woodrow Wilson’s quest to “make the world safe for democracy” heralded the United States’ entry into the ranks of the great powers, while Japan and China gained greater recognition and influence in the inter-war period. In the aftermath of the Second World War, the United Nations Charter assigned global leadership responsibility to the five permanent members (P5) of the U.N. Security Council: the United States, the United Kingdom, France, the Soviet Union/Russia, and China. Both the League of Nations and the United Nations attempted to offset the concentration of power through the League Assembly and the United Nations General Assembly, respectively — bodies in which all member states were given equal voice and vote. Nevertheless, critical decisions of international relations, most importantly the authority for the use of force, continue to be the province of major powers.

    Today, the concentration of power in the hands of a few states is being seriously challenged by much of the global community. “The uninhibited middle powers”18 like India, Turkey, Saudi Arabia, Brazil, South Africa, and Indonesia are less willing to follow the lead of the dominant powers and seek a greater voice in and increasing influence over global affairs. The age of turbulence and the challenges of the 21st century demand a new, more equitable distribution of power. Six and a half billion people,19 the “global majority,”20 must be more equitably incorporated into the management of global affairs in terms of both participation and outcomes. This will require revisions to the governance of key international institutions, starting with the U.N. Security Council as well as the international financial institutions (e.g., the World Bank, International Monetary Fund, and regional development banks).21 The goal must be to create an inclusive community of stakeholders who actively participate in and uphold the institutions and processes of global governance. In addition, power must be redistributed in both directions; it must be delegated to levels of governance closer to the people who are most directly impacted by particular conditions or issues (like climate change), while (new) international or “planetary” bodies must be given the responsibility of managing planetary challenges.22


    The tenets of internationalism arose from the inter-state system of the 17th century. As historian Stephen Wertheim writes, this includes the belief “that the circulation of goods, ideas, and people would give expression to the harmony latent among civilized nations, preventing intense disputes from arising.”23 Drawing on the philosophical legacy of Hugo Grotius and others, this view has been codified in international law and is embedded in the institutions established to adjudicate and resolve political and economic disputes through arbitration, legal rulings, and other peaceful means. 

    Internationalism has been central to efforts designed to prevent outbreaks of armed conflict and the management of warfare when conflict prevention fails. The Concert of Europe (01814), the League of Nations (01920), the Kellogg–Briand Pact (01928), and numerous other international treaties and conventions were designed with the sole aim of maintaining the peace. The mission of the U.N. Security Council (01945) is to maintain international peace and security through the identification of “the existence of any threat to peace, breach of the peace, or act of aggression” and by making recommendations or determining “what measures shall be taken…to maintain or restore international peace and security.”24 The Geneva Conventions (01949)25 established the main elements of international humanitarian law to “limit the barbarity of war.”26 Despite the web of treaties, laws, and institutions, armed conflict and its barbarity persist, in part because state and non-state actors interpret international law in support of their own objectives or simply ignore it altogether. Structural constraints, like the veto power of the P5, also inhibit the efficacy of international law.

    Cargo ship approaches an international port in Turkey. Photo by bfk92 / iStock

    The precepts of internationalism have also been central to global economics and trade. Montesquieu’s notion that “peace is the natural effect of trade”27 has been at the heart of international economics for nearly 300 years. It is embedded in the mission of the World Trade Organization (WTO) and the robust web of bilateral and multilateral trade agreements that have accelerated globalization. However, with the potential return of great power confrontation, faith in Montesquieu’s optimistic view of the relationship between trade and peace has faded. As historian Adam Tooze writes, “Economic growth thus breeds not peace but the means to rivalry. Meanwhile, economic weakness generates vulnerability.”28 

    Twenty-first century internationalism will require new, innovative forms of dispute resolution and the consistent application of international law to all international actors. Reform of the U.N. Security Council is essential — even though it is unlikely given the provisions of the U.N. Charter. The victims of conflict must be given a greater voice in the quest for peace. Existing accountability mechanisms such as the International Court of Justice must be strengthened, and new enforcement powers should be considered. 

    United Nations Disengagement Observer Force in the Middle East (UNDOF). Photo by Yutaka Nagata / UN Photo

    The free movement of goods, services, people, and information should be expanded. However, we have learned that we cannot rely on economic relations alone to produce and sustain peace. As free trade arrangements are negotiated, greater emphasis must be focused on the concept of “equitable trade” that offers the benefits of economic intercourse while also protecting workers from abusive employment practices and safeguarding our fragile planetary ecosystem. Rules must be applied consistently to all parties. A new approach to global trade should help manage both the positive and negative effects of globalization in order to help bring greater economic benefits to developing economies while also ensuring equitable and efficient supply chains. 


    In the 80 years since the ratification of the United Nations Charter, the institutional framework of the international system has grown in scale and complexity. The five main bodies of the United Nations work closely with 15 “specialized agencies,”29 drawing more than 125,000 employees from 193 member states. Complementary institutions have been established outside the U.N. system to focus on specific issues, such as the International Water Management Institute, or in specific regions, such as the Arctic Council or the Organization of American States. This expansive but patchwork collection of international and multilateral institutions and organizations brings enormous benefits to global society — and yet, as is often true of bureaucratic systems, many of the institutions have grown unwieldy, inefficient, costly to maintain, and encumbered by political constraints. 

    U.N. International Day of the World's Indigenous Peoples, 02014. Photo by Paulo Filgueiras / UN Photo

    In the logic of the future, an ecosystem that complements institutions with networks; “mini-lateral” arrangements, in which nations form coalitions to address common concerns or undertake time-limited missions; and, perhaps most importantly, polylateral arrangements, in which states, sub-national levels of government, private sector actors, and civil society join forces, will prove to be more agile and effective at global problem solving. Indeed, the success of the 02015 Paris Climate Conference (COP 21) can be attributed to such a polylateral process, producing important commitments from all three major sectors: governments, businesses, and NGOs. Of particular note was the influence exerted by the “High Ambition Coalition,” a polylateral coalition organized by the Republic of the Marshall Islands (population approx. 43,000), one of the small states facing an existential threat from rising sea levels.

    In the future, agile and resilient decision making will be necessary for institutions to adapt to the variability and complexity of relations among nations. Many of the large organizations require governance and management reforms, and although the international system currently includes a number of non-institutional forms, they remain modest in scope compared to large bureaucratic structures.


    So-called international relations “realists” have argued that peace can be achieved and sustained only if it is fortified by the threat of military intervention. Henry Kissinger, a leading proponent of this view, outlined it as follows: “How is one to carry out diplomacy without the threat of force? Without this threat, there is no basis for negotiations.”30 It was this logic that led to the massive build-up of military forces and nuclear arsenals during the Cold War, at great economic and social cost, under the doctrine of “mutually assured destruction.” It also led to the growth of a “military–industrial complex”: the intertwining of industry, economic policy, and military expenditure in the United States (and elsewhere) that President Dwight Eisenhower warned against in 01961. Global military expenditures reached $2.2 trillion in 02022.31 Nevertheless, 20 nations have shown that another path is possible: Costa Rica, Iceland, and the Solomon Islands, among others, do not have any standing armed forces or arms industry. Despite this, worldwide military expenditures continue to grow while investments in education (particularly for girls and women), skills training, infrastructure, clean energy, climate resilience, poverty alleviation, and a number of other social needs remain inadequate.32

    Child plays on captured Russian tank in Kyiv, 02022. Photo by Joel Carillet / iStock

    The logic of the future requires a shift from defining peace as the absence of war to embracing the concept of “positive peace”: the elimination of violence resulting from systemic conditions like hunger, poverty, inequality, racism, patriarchy, and other forms of social injustice. Research has shown that higher levels of positive peace are achieved when states have a well-functioning government, manage an equitable distribution of resources, create a strong business environment, develop high levels of human capital, facilitate the free flow of information, and have low levels of corruption.33

    History has shown us that there will always be bad actors, and military force will be required to confront armed aggression, genocide, and other mass violations of human rights. There is no adequate response to Russia’s brutal war of aggression against Ukraine that does not include a short-term boost in military capacity. However, the logic of the future demands that we vastly strengthen diplomatic capacity, support equitable development, and invest in critical human needs as well as planetary sustainability. We must seek a future in which defense investments do not deter increased domestic social spending or international development aid that can build greater global social cohesion. As the United Nations High-Level Advisory Board on Effective Multilateralism (HLAB) so eloquently stated, “We must shift from focusing on mutually assured destruction to mutually assured survival.”34 As we seek to overcome drivers of conflict, we may devise new forms of alliance based on shared values instead of exclusively focusing on military defense. For example, alliances that support health equity, economic development, and girls’ education might help deter the eruption of violent conflict.

    Afghan girl attends school in Canada. Photo by FatCamera / iStock

    Zero-sum logic has pervaded international relations in many periods of human history, most notably during the Cold War. The world was divided into two competing blocs led by the Soviet Union and the United States. Gains made by one bloc were seen as losses for the other, and countries in the developing world were pressured to take sides. 

    The Nonaligned Movement (NAM) emerged following the first-ever Asia–Africa conference, which took place in Bandung, Indonesia in 01955. Twenty-nine countries (home to 54 percent of the world’s population) participated in this conference in an effort to counterbalance and challenge the deepening East–West polarization in international affairs. The founders of the NAM — Yugoslavia’s Josip Broz Tito, India’s Jawaharlal Nehru, Egypt’s Gamal Abdel Nasser, Ghana’s Kwame Nkrumah, and Indonesia’s Sukarno — offered the developing world an alternative to the “us-versus-them” logic of the Cold War. Nevertheless, both the United States and the USSR attempted to pull the countries of the NAM into their orbits through enticement, coercion, or a combination of both.

    Today, the war in Ukraine has revived the notion of nonalignment; some global majority countries (i.e., non-OECD countries with 80 percent of the world’s population) have refrained from joining the coalition supporting Ukraine. In January 02024, an expansion of the BRICS championed by China took effect; the BRICS is a loose organization of major developing countries that seek to strengthen their economic cooperation and political standing, in part to counterbalance perceived U.S.-led Western dominance.35

    The logic of the future will seek to accommodate variable alignments and maximize positive-sum solutions to global problems. Questions of alignment will be viewed as dynamic rather than static. Countries that join together for one purpose may not collaborate on others, choosing from a menu of “limited-liability partnerships.”36 Writing in 02020, then-Afghan President Ashraf Ghani described a future of “multi-alignment.” Writing in The Financial Times three years later, Alec Russell termed this “the à la carte world.” Managing this dynamic environment will require an agile mindset and a greater tolerance for ambiguity from major powers like the United States. 

    India is an important case study. Speaking at the United Nations in 01948, Prime Minister Jawaharlal Nehru told the assembled world leaders: “The world is something bigger than Europe, and you will not solve your problems by thinking that the problems of the world are mainly European problems. There are vast tracts of the world which may not in the past, for a few generations, have taken much part in world affairs. But they are awake; their people are moving, and they have no intention whatever of being ignored or of being passed by.” Nehru later helped form the NAM amidst the polarization of the Cold War. Today, under the leadership of Prime Minister Narendra Modi and Minister for External Affairs Subrahmanyam Jaishankar, India has embraced dynamic alignment — working ambitiously to maintain close ties with Europe and the United States, while also continuing a fundamentally transactional relationship with Russia and avoiding conflict with China. This is a difficult balancing act with profound but potentially constructive implications for geopolitics in an age of turbulence. As Jaishankar explained to the Munich Security Conference in February 02024, “pulls and pressures make a unidimensional approach impossible.” 

    02023 BRICS Summit leaders. Photo by Ricardo Stuckert / PR

    In 01963, just months before his assassination, U.S. President John F. Kennedy gave a speech on world peace in which he urged Americans and Soviets to work together to “make the world safe for diversity”37 by accepting fundamental differences in ideologies and political systems, speaking out on points of principle and in defense of values, slowing the nuclear arms race, and engaging with each other through diplomacy to prevent war. Now, 60 years later, countries should accept the pluralism within the community of nations and forswear active efforts toward regime change as long as borders are respected and governments do not engage in gross violations of the human rights of their own citizens, as expressed in the R2P doctrine laid out in the 02005 World Summit Outcome Document. In a time of growing great power competition and an increased risk of conflict, Europe and the United States should work toward a détente with China, and even with a post-war Russia, if it renounces the use of force for territorial gain. 


    Closely related to the primacy of national interest, “strategic narcissism,” as described in 01978 by international relations theorist Hans Morgenthau, is the inability to see the world beyond the narrow viewpoint of one’s own national experience, perceptions, and self-interest.

    “Strategic empathy,” a concept advanced by former U.S. National Security Advisor and retired General H. R. McMaster, proposes a fundamental shift in the attitude and practice of diplomacy. It encourages deep listening in relations with others, seeking greater understanding of their views and needs, and investing less effort in persuasion. Consistent with strategic empathy, the logic of the future calls on great powers such as the United States to eschew hubris and conduct international relations with greater honesty and humility.


    Tragically, the modern international system evolved in large part through imperialism, colonial rule, and systems of racism and patriarchy that led to the brutal exploitation of non-White and female populations across the globe. Britain abolished the slave trade throughout its empire in 01807, yet slavery survived in the United States until the end of the U.S. Civil War and the ratification of the Thirteenth Amendment to the Constitution (01865). Patriarchy is deeply rooted in the history of human civilization — supported, in part, by the world’s major religious traditions. 

    Although World War I brought an end to the Austro-Hungarian and Ottoman empires as well as the Romanov dynasty, colonial rule continued in numerous Latin American, Caribbean, African, and Asian territories throughout the 20th century.38 Today, structural racism persists in many forms. Furthermore, the rights of women remain contested worldwide, and their general economic status and wellbeing continue to trail behind those of men — even in the most advanced economies. It is clear that the legacies of colonialism, racism, and patriarchy continue to shape the international system.


    The logic of the future must be based on universal human dignity, equality, pluralism, cosmopolitanism, tolerance, and justice. The legacies of discrimination and exploitation continue to breed conflict, and genuine peace will not be achieved or sustained for as long as these legacies remain. The aspirations expressed in the Universal Declaration of Human Rights must be fully realized, and discrimination based on race, gender, sexual identity, religion, and physical ability must be eradicated. Advancing the concept of human sovereignty, which advocates for the recognition of the inherent worth of every human being, may help establish such new norms and eliminate colonial attitudes. 

    Protest against gender violence in Quito, Ecuador, 02017. Photo by Martin Jaramillo / UN Women

    Starting in the 01970s, the neo-liberal school of economics gained widespread popularity among scholars, business leaders, politicians, and policymakers. The core tenets of neo-liberalism include minimal government intervention in the market, a singular focus on GDP growth as the de facto measure of progress, unfettered trade, and the exploitation of labor and natural resources. Neo-liberal economic policies in the United States, the United Kingdom, and elsewhere have also guided the management of the international economic system (i.e., the Bretton Woods institutions) over the last half-century.39 Although it can be argued that these policies have generated significant wealth, lifted hundreds of millions out of poverty, and spurred important innovations, it is clear that this approach has also contributed to widening economic inequality in many countries — and perhaps most importantly, its reliance on fossil fuels threatens the very viability of the planetary ecosystem. More bluntly, in the pursuit of neo-liberal economic policies, greed is rewarded, and the accumulation of material possessions is celebrated.

    The economic logic of the future should focus, first and foremost, on the wellbeing of both humans and the planet. Important theoretical and practical work is underway to advance the notion of the “wellbeing economy,”40 in which measurements of success are expanded to include social and environmental factors, the relationship between the state and the market is recalibrated, and attention is focused on an ethos of caring and sharing — caring for one another and the planet we share. Other important concepts such as the “circular economy,” “doughnut economics,”41 “productivism,”42 or “degrowth”43 are stepping stones in the path toward regenerative and genuinely sustainable development. A new mix of public and private institutions will be required to ensure accountability for the sustainable use and equitable distribution of resources consistent with a wellbeing economic paradigm. 

    Flooding in Haiti from Hurricane Sandy. Photo by Logan Abassi / UN Photo

    The history of human progress is entwined with the history of technological advancement, starting with the creation of stone tools 3.4 million years ago, followed by myriad other major technological milestones such as the invention of the wheel, the steam engine, the silicon chip, and so much more. With the notable exception of nuclear technology, technological advances have been embraced and employed with little or no restraint. Recent breakthroughs in machine learning and the accelerated development of AI, the profound advances in biotechnology and biomanufacturing, and the debate over the geo-engineering of Earth’s atmosphere to slow global warming all raise profound ethical questions and may even pose existential risks.

    In the logic of the future, we will need to negotiate global norms and regulatory regimes to advantageously but safely employ new technologies that have the power to greatly benefit planetary society but could also lead to great harm. AI technology will likely evolve faster than our ability to establish adequate regulatory regimes; consequently, restraint and self-regulation will also be necessary to ensure the safe deployment of this powerful technology.

    Robopsychology Lab at Johannes Kepler University Linz. Photo by Martin Hieslmair / Ars Electronica

    Part II. Building Blocks of a New Global Framework

    The logic of the future demands significant modifications and additions to the existing international system. From a review of many suggestions and recommendations that have been offered by numerous analysts, commissions, and advisory panels, 10 “building blocks” emerge. Under each of the 10 points that follow, some illustrative examples of specific steps that might be taken are highlighted, although these are neither comprehensive nor fully developed here.

    1. Cocreate the International System of the Future

    As the world’s most powerful country, the United States should work with the U.N. secretary-general, Europe, and other major global powers to launch an inclusive process to design a more equitable and effective distribution of power and a new global system. Most of the peoples of the world still count on the United States for global leadership, recalling its role in the creation of the existing international order: Franklin Roosevelt’s “Four Freedoms” of January 01941; the Atlantic Charter principles that Roosevelt and Winston Churchill articulated later that same year; and the 01944 international conference held at Dumbarton Oaks, which advanced the vision of a post-war international organization to maintain global peace and security and formed the basis for the United Nations Charter adopted in San Francisco in 01945. Creation of the United Nations was an act of both imagination and political will, and U.S. presidential leadership was essential to the success of these efforts.

    The collapse of the Soviet Union and the end of the Cold War in 01991 brought echoes of 01945 and an opportunity to create a new, more inclusive international order — but this opportunity was missed through a “failure of creativity.”44 The world had changed dramatically, and yet the impulse was to affirm the prevailing international relations logic and expand the existing institutional framework rather than devise new norms and structures suited to new circumstances.

    In some ways, we are now experiencing another 01945-like moment. The existing international order has broken down amidst significant global turbulence and multiple existential threats. As in 01945, there is once more an evident need for the community of nations to work collectively to build the international system of the future.

    There are, however, significant differences between 01945 and the present day. After the war, much of the world was in ruins, economies were devastated, and the United States was the undisputed hegemon. The United Nations was founded in the aftermath to prevent the outbreak of another catastrophic world war; the challenge today is to construct a new international system that can preempt the existential threats we will face in the decades ahead. The United States retains its capacity for vitally important leadership, but it is no longer a hegemon in today’s multipolar world. The realignment of global power, heralded by the rise of the global majority, mandates that any future system incorporate their perspectives, needs, and aspirations far more equitably than before. Consequently, the legacy major powers must invite the countries of the global majority to cocreate the international framework of the future.

    2. Remake the United Nations

    The United Nations remains the essential institutional framework for cooperation among sovereign states, and it contributes enormously to the global common good. But like a magnificent old house, the United Nations needs major renovations. Most of the needed renovations are well known. These include making the U.N. more democratic by expanding the number of permanent Security Council members and amending the veto privilege (perhaps requiring three members to jointly exercise vetoes) or by empowering the General Assembly to override vetoes with the support of two-thirds or three-quarters of the member states. To amplify the voices of the world’s peoples,45 there should be a U.N. Under-Secretary for Civil Society to facilitate deeper engagement by global civil society in the work of the U.N. system. To expand the United Nations’ capacity for anticipating future developments and protecting the rights of future generations, Secretary-General Antonio Guterres has announced his intention to appoint an Envoy for the Future, an important step toward incorporating long-term thinking into present decision making. 

    FDR and Winston Churchill at the Atlantic Conference, 01941. Photo by Priest, L C (Lt), Royal Navy official photographer (via Wikimedia Commons)

    Article 99 of the U.N. Charter empowers the secretary-general to “bring to the attention of the Security Council any matter which in his opinion may threaten the maintenance of international peace and security,” yet this authority has been invoked only four times since 01946. 

    Secretary-General Guterres was right to invoke Article 99 in his letter to the Security Council on December 6, 02023, responding to the war in Gaza and urging the international community to “use all its influence to prevent further escalation and end this crisis.” In the future, this powerful yet rarely used tool should be employed judiciously — but without hesitation — when threats to peace and security demand international action. 

    It is also time to redesign other U.N. bodies and mechanisms, starting with the UNFCCC and the annual Conferences of the Parties (COPs), which have brought together the nations of the world to address the climate crisis since the first COP in Berlin in 01995. At the very least, the requirement for unanimous decision making should be replaced with qualified majority voting so that individual states or small blocs can no longer block progress.46 In addition, enforcement mechanisms should be established to hold countries accountable for meeting their emissions reduction pledges. In the absence of formal accountability mechanisms, civil society should be adequately funded to monitor progress and publicize failures to meet obligations. 

    It may also be time to replace the anachronistic Trusteeship Council, one of the six principal bodies of the United Nations, which was established to manage transitions to self-government or statehood for territories detached from other countries as a result of war. The last territory to achieve statehood through the Trusteeship Council process was Palau in December 01994 — three decades ago. Given the critical importance of avoiding climate catastrophe, it may be prudent for the Trusteeship Council to be replaced by a Climate Council that would incorporate, elevate, and strengthen the UNFCCC and its COPs and serve as a forum for implementation of agreed climate policies and actions. Alternatively, the Trusteeship Council could be replaced by a body representing subnational levels of government (see section 3 below).

    U.N. officer plays with a child at a South Sudan protection site. Photo by JC McIlwaine / UN Photo

    Some renovations of the U.N. system can be achieved through General Assembly resolutions, but many of the most important reforms (namely, the expansion of the permanent members of the Security Council or amendments to the veto provision) require Charter amendments that can be accomplished only with a two-thirds vote of the General Assembly and ratification by two-thirds of the member states, including all five permanent members of the Security Council, any one of which can therefore block an amendment. Given such structural limitations on any attempt to truly remake the United Nations, it is necessary to build an effective ecosystem of institutions, networks, and polylateral alliances that complements the United Nations and compensates for its structural limitations.

    Adoption of the Paris Agreement, 02015. Source: UNclimatechange on Flickr

    3. Supplement the United Nations

    The international system of the future will continue to have the United Nations at its core, but the complexity and hazards of these turbulent times demand that we establish a more robust, flexible, and nimble ecosystem of networks, organizations, and modalities that work in concert with the United Nations, but with fewer bureaucratic constraints and procedural impediments to action. The High-Level Advisory Board (HLAB), appointed by the U.N. secretary-general, has declared that “global governance must evolve into a less hierarchical, more networked system wherein decision-making is distributed, and where the efforts of a large number of different actors are harnessed towards a collective mission.”47 A few examples of ways to supplement the United Nations and create a more dynamic and effective international ecosystem follow.

    First, it is important to recognize and strengthen regional intergovernmental organizations that have achieved sufficient democratic legitimacy as well as efficacy in one or more of the following domains: conflict prevention and peacebuilding, economic cooperation, and environmental management. Capacity-building support can enhance the effectiveness of regional organizations, and formal relationships with relevant U.N. bodies can strengthen the coordination of regional efforts. Special attention should be focused on regions where intergovernmental bodies are underdeveloped or non-existent. 

    In the domain of international peace and security, it is critical to start planning a new European security architecture for the political landscape following the Russia-Ukraine War. Because Russia will remain a major European power — regardless of the outcome of that conflict — NATO, the Organization for Security and Co-operation in Europe (OSCE), and the EU should coordinate their plans for a collective security structure that can enhance security across the European continent, including Russia (if and when it permanently renounces the use of force against its neighbors). 

    The G20, a body that brings together leaders of 19 of the largest economies, plus the heads of the EU and the African Union — together representing 80 percent of the world’s population and almost 85 percent of global GDP — is an important venue for discussions among the world’s most powerful leaders and could be an even more important asset. Could it focus more specifically on a few key topics requiring collective management, such as climate change, pandemic response, debt, and development finance? Could a formal relationship with the U.N. Security Council help bring additional voices to the peace and security agenda?

    Subnational units of government (e.g., cities, states, and provinces) are increasingly important in the age of turbulence; the United Nations estimates that by 02030, one-third of the world’s people will live in cities with populations of 500,000 or more. Subnational units of government are increasingly finding themselves responsible for managing the consequences of global turbulence, be they the impacts of accelerating climate change, the spread of infectious disease, or the mass movement of people. Citizens often turn to local leaders for solutions to the consequences of these global phenomena in their daily lives. Although there are numerous international fora where subnational leaders meet, it is time to formalize the connections between subnational governments and the international system. As noted in section 2, one possibility would be to replace the U.N. Trusteeship Council with an Intergovernmental Council that offers rotating membership to subnational units of government (e.g., cities, states, provinces) and that, like the Trusteeship Council, answers to the General Assembly.

    People receive the COVID-19 vaccine in New York in April 02021. Photo by Liao Pan / China News Service via Getty Images

    Two related 21st-century challenges demand new polylateral mechanisms for establishing norms and developing global regulatory regimes: (1) the decentralized information ecosystem enabled by social media and (2) the advent of generative AI. Efforts are already underway to create an Intergovernmental Panel on the Information Environment (IPIE) modeled on the Intergovernmental Panel on Climate Change (IPCC). Like the IPCC, the IPIE would be “an international scientific body entrusted with the stewardship of our global information environment for the good of mankind.”48 The IPIE would gather and analyze data, monitor trends, and issue recommendations to combat disinformation and misinformation, hate speech, and algorithmic manipulation that undermine trust, fuel conflict, and impede progress in managing social problems. After all, access to reliable information is essential for healthy democracies. 

    Continued advances in AI will only exacerbate the societal risks of misinformation and disinformation, but the power and implications of AI extend well beyond the information ecosystem and can affect every domain of human activity. These new technologies can help alleviate human suffering, increase workplace productivity, support invention and scientific breakthroughs, and more; however, as many technologists are warning, AI also has the potential to threaten the primacy of human intelligence, to become “God-like” (in the words of tech investor Ian Hogarth), and possibly “usher in the obsolescence or destruction of the human race.”49 Although the proposed IPIE organization would help with gathering and reporting reliable scientific information about the advancements in AI, a more powerful global regulatory body is needed. 

    The international response to the advent of nuclear energy offers valuable lessons that can inform our management of high-value, high-risk future technologies. The very first resolution adopted by the U.N. General Assembly in 01946 established the U.N. Atomic Energy Commission, which was followed a decade later by the establishment of the International Atomic Energy Agency (IAEA). Furthermore, the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), opened for signature in 01968, came into force in 01970, giving the IAEA authority to conduct on-site inspections to ensure that nuclear materials are used for peaceful purposes. The NPT regime and the diligent oversight provided by the IAEA have allowed for significant advances in the peaceful use of nuclear energy — including the operation of some 450 nuclear reactors worldwide — while limiting nuclear weapons to only eight declared states.50

    The future international system must include robust mechanisms to enforce international law and combat the current culture of impunity. In a few areas, new institutions are needed; for example, there is a campaign to establish an International Anti-Corruption Court (IACC) that would prosecute alleged corruption by a state’s leaders when national judiciaries are unable or unwilling to act due to political interference or lack of judicial capacity. Thus far, 190 countries have ratified the U.N. Convention Against Corruption, but many cases still go unprosecuted. The proposed IACC would help fill this critical enforcement gap. 

    New York City Car-Free Streets Earth Day Celebration. Photo by New York City Department of Transportation

    4. Improve, Supersede, and Devolve the Nation-state

    Nation-states will remain important in the international system of the future, but the COVID-19 pandemic and the climate crisis have highlighted the inadequacies of nation-states regarding governance at both the local and planetary levels.51 Managing the myriad of 21st-century challenges will require the devolution of greater authority (as well as the distribution of necessary resources) to subnational and local levels of government, allowing them to respond to the impacts of global phenomena on their populations. 

    At the same time, some issues require planetary action, such as global decarbonization, vaccine manufacturing and distribution, and the regulation of certain high-risk technologies like AI or biotechnology. The principle of subsidiarity, which posits that social and political issues should be addressed at the most immediate level of governance consistent with their effective resolution, offers increasingly relevant guidance when addressing the challenges of the 21st century. 

    It will be extremely challenging to reduce the primacy of the nation-state in international affairs. Doing so will require a fundamental shift in the mindset and ways of understanding the world that have shaped international relations for centuries, as well as new legal and institutional arrangements. A great deal of ideation, discussion, and debate will also be necessary. However, surviving the existential threats inherent in our turbulent age demands that these efforts be undertaken. In this regard, the EU, as a structure of collaborative sovereignty that shifted European thinking and governance away from an exclusive reliance on the nation-state, provides an important model.

    5. Train, Recruit, and Deploy a New Generation of Diplomats

    In the remaining decades of the 21st century, diplomacy must be the core operating system employed to lead the global community toward enduring peace, more equitably shared prosperity, and a sustainable planet. This requires substantial investments in a global diplomatic surge — recruiting, training, and deploying a new generation of diplomats who can advance the logic of the future and the practice of cooperative global problem-solving. The “millennial” and “Gen Z” populations across the world can provide the field of diplomacy with a talented cohort of highly educated, cosmopolitan, and culturally sophisticated women and men.

    To build this new corps of national and international diplomats, a distinguished multinational panel of scholars and practitioners should be tasked with developing a global diplomacy curriculum consistent with the logic of the future. This could then be taught at the United Nations University and adopted by other diplomacy and international relations graduate programs worldwide. A virtual diplomacy institute could offer this curriculum in multiple languages through an online platform. 

    Refugees cross from Turkey to Greece. Photo by Joel Carillet

    6. Trade and Investment to Provide Global Public Goods

    Consistent with the goals of a more equitable distribution of power in global affairs, the World Trade Organization and Bretton Woods Institutions require significant reforms in terms of their mission, governance, and capitalization. Although many credible reform ideas have been discussed, with some progress made in recent years, debate still swirls around the most fundamental reforms. The institutions of international economics must be focused on promoting equity across developing economies and providing incentives, financing, and technical assistance in the delivery of global public goods such as clean air and water, food, and health care. 

    One particular reform in the global trade regime merits special attention: the elimination of the Investor-State Dispute Settlement Process (ISDS), which is a common provision in free-trade agreements. The ISDS allows foreign companies to sue governments for relief from national policies that they claim impair their ability to make reasonable profits — including climate regulations, financial stability measures, and labor and public health policies. 

    The Biden administration took some constructive steps toward a more equitable global trade system, describing it as a “post-colonial trade system.” In an important speech at the Brookings Institution in April 02023, National Security Advisor Jake Sullivan described the Biden administration’s approach: “…working with so many other WTO members to reform the multilateral trading system so that it benefits workers, accommodates legitimate national security interests, and confronts pressing issues that aren’t fully embedded in the current WTO framework, like sustainable development and the clean-energy transition.”

    7. Strengthen Democracy 

    Effective democratic governance must be the cornerstone of the international system of the future. Democratic norms, processes, and institutions give voice to “the peoples of the United Nations,” as expressed in the first lines of the U.N. Charter. Democracy facilitates the identification of common ground and requires compromise; it recognizes differences, promotes fairness and equity, and improves transparency and accountability — qualities that are essential to peaceful international relations. Democratic states are less likely to go to war with one another and, compared to states under autocratic rule, are also less likely to suffer violent internal conflicts.52 

    Nonetheless, democracy requires substantial reinvention and expanded application if humanity hopes to create a more peaceful, equitable, and sustainable world in this century. Democracy must be made more inclusive, more fully representative, more participatory, and more effective. This mission may be more urgent now than ever before: As faith in democratic governance weakens, neo-authoritarians and demagogues around the globe are rushing to consolidate their power. 

    Political scientist Larry Diamond coined the term “democratic recession” in 02015 to describe the global decline in the quality and efficacy of democratic governance over the previous decade. Drawing on data reported by Freedom House, the Economist Intelligence Unit, and V-Dem,53 Diamond (and many others) has documented democratic backsliding and the rise of authoritarianism in all four corners of the globe. At this very moment, electoral authoritarians — Nayib Bukele in El Salvador, Viktor Orbán in Hungary, and Yoweri Museveni in Uganda, to name a few — are eroding the rule of law, restricting the freedom of speech and media, curbing civil society, and trampling on citizens’ rights. 

    Crowd celebrates after Lula wins presidency of Brazil, 02022. Photo by Oliver Kornblihtt / Mídia NINJA

    That said, the news is not all bad. In Poland, after years of deepening authoritarian rule under the Law and Justice Party, voters turned out in overwhelming numbers to elect Donald Tusk’s Civic Platform coalition in October 02023. Tusk has since set out to restore the rule of law, media independence, and civil rights — but the task of restoring democracy is proving formidable after eight years in which both norms and institutions were seriously eroded. As the German Marshall Fund’s Michal Baranowski observes, “There will be lessons for other countries to draw from Poland — both on what to do and not to do — but Tusk has the disadvantage of being the first, trying to clean up without a detox handbook.”54

    Fundamental reforms are needed even in well-established democratic nation-states, the United States being first and foremost among them. In June 02020, a national commission organized by the American Academy of Arts and Sciences offered a comprehensive blueprint of proposed reforms in a report titled Our Common Purpose: Reinventing American Democracy for the 21st Century.55 The report’s 31 recommendations address significant reforms to political institutions and processes, as well as the need for reliable, widely shared civic information and a healthy political culture. The years ahead may test the resilience of America’s democratic institutions and the rule of law. Political leaders, civil society organizations, and citizens must be prepared to defend democratic norms and constitutional arrangements. 

    Reforms are needed in democracies around the globe to address similar weaknesses while respecting distinct cultural and historical contexts. One size most certainly does not fit all, but central to all these efforts is the need to fortify the role of citizens as the primary stakeholders in self-government. The will of the citizenry is the ultimate accountability mechanism in democracies; to defend against the rise of autocracy, we must concurrently strengthen the institutional and procedural checks and balances that safeguard the rule of law and protect independent journalism. 

    Democracy must also be strengthened and extended in the institutions and mechanisms of global governance. Increasingly, decisions of material significance are being made by international bodies far removed from the citizens those decisions will affect. The international system of the future must incorporate more robust democratic norms, characteristics, and processes to make it more participatory, inclusive, transparent, accountable, and effective. Surviving the existential threats of the age of turbulence will require difficult decisions with monumental consequences. The OECD has documented an encouraging “deliberative wave” of “representative deliberative processes,” such as citizens’ assemblies, juries, and panels, that has been steadily gaining momentum since 02010.56 As former U.K. diplomat Carne Ross has argued, we must build on this wave and establish “consent mechanisms for profound change” in global policy for conflict resolution, development finance, economics, trade, and energy to meet the global challenges ahead.

    Vice President Biden and Chinese President Xi share a toast, 02015. Photo by U.S. State Department

    8. Establish a U.S.-China Secretariat

    As many experts have observed, the U.S.-China relationship is the most important bilateral relationship of the 21st century. This relationship must be managed with clear-eyed, consistent, and continuous care, as well as effective communication and creativity. As Harvard professor and former Pentagon official Joseph Nye observed, “For better or worse, the U.S. is locked in a ‘cooperative rivalry with China.’”57 Our economies are closely intertwined; we are the two largest greenhouse gas emitters; we both have strategic interests in the Indo-Pacific; and the island of Taiwan is a potential flashpoint for a great power confrontation. Analogies to the U.S.-USSR Cold War rivalry are commonly invoked, but these comparisons overlook critical differences and lead to misguided policy prescriptions. The best approach to avoiding conflict necessitates the combination and effective management of competition and cooperation. It is not an exaggeration to say that as U.S.–China relations go, so goes the 21st century. 

    U.S.-China relations ebbed in the first half of 02023: the year began with the Chinese surveillance balloon incident, followed by military provocations in the South China Sea. High-level contact between the two governments was revived when Secretary of State Antony Blinken visited Beijing in June. This was followed by several other high-profile visits, including Treasury Secretary Janet Yellen traveling to China in July; Commerce Secretary Gina Raimondo following suit in August; and Chinese Foreign Minister Wang Yi meeting President Biden in the White House, setting the stage for Biden and Xi Jinping to meet during the APEC Summit in California on November 15, 02023. 

    Episodic meetings of high-level officials, including presidential summits, are essential but insufficient for managing this complex, high-risk relationship; more intensive and continuous joint engagement is required. One idea worth exploring is the establishment of a U.S.–China Joint Secretariat58 in a neutral location, perhaps Singapore or Geneva, to which senior civil servants from key ministries in both countries are seconded to work side-by-side on a daily basis. These officials would be tasked with exploring key issues in the bilateral relationship; developing a deeper understanding of each other’s views, needs, and redlines; and devising creative solutions that could then be shared with Beijing and Washington. 

    This idea will no doubt be unpopular with other countries in the Indo-Pacific, most notably India. Nevertheless, through careful diplomacy, it should be possible to help the Indians and others to see that non-confrontational and constructive U.S.-China relations are in their best interests.

    9. Codify Rights of Nature and Rights of Future Generations

    There is fascinating and important work being done in think tanks, academic institutions, and movements to develop eco-jurisprudence that expands the protection of rights beyond those accorded to human life and establishes human responsibility to other forms of life on our planet. Through pathbreaking leadership, Ecuador became the first country to enshrine the rights of nature in its constitution in 02008, and the first legal suit filed on behalf of nature was a case involving threats to the Vilcabamba River: The court found for the river. 

    Ecuador banned drilling in Yasuní National Park, 02023. Photo by Antonella Carrasco / openDemocracy

    Significant progress has been made to establish the rights of future generations, with climate-related lawsuits being brought before courts across the globe on behalf of children. One suit filed in 02015, Juliana v. United States, asserted that “through the government’s affirmative actions that cause climate change, it has violated the youngest generation’s constitutional rights to life, liberty, and property, as well as failed to protect essential public trust resources.”59 In June 02023, U.S. District Court Judge Ann Aiken ruled that the case, brought by 21 young plaintiffs, could proceed to trial. In August 02023, a group of young people in Montana won a landmark ruling that the state’s failure to consider climate change when approving fossil fuel projects was unconstitutional. Similar suits are pending in several other U.S. states, and in September 02023, a suit brought by six young Portuguese citizens was heard before the European Court of Human Rights. Active cases filed on behalf of children and youth are pending in Canada, Mexico, Pakistan, and Uganda.

    Establishing the rights of nature and future generations offers a promising avenue for implementing the logic of the future. Secretary-General Guterres’s pledge to name a Special Envoy for the Future also marks an important recognition that the international system must address long-term challenges and focus on prevention along with mitigation and crisis response.

    10. Transformed U.S. Global Leadership60

    Given its vast wealth, hard and soft power, presumption of moral leadership, and disproportionate consumption of finite global resources, the United States must play a leading role in shaping the global response to the age of turbulence. Without U.S. leadership, it would be impossible to embrace the logic of the future and build the international system needed to address the challenges of the 21st century. But the realities of this interdependent world require fundamental changes in the style and content of U.S. global leadership. We need a bold and fundamentally different vision of America’s role in the world.

    A new vision of America’s global role must rest on a set of core principles for constructive, collaborative, results-oriented, and ethical leadership:

    First, the United States must recognize that efforts to maintain its global primacy will prove fruitless and not in its national interest. If there was a “unipolar moment” at the end of the Cold War, it was both fleeting and deluding. Given the rapid redistribution of political, economic, and military power already underway when the Soviet Union dissolved in 01991, we should have seen past the triumphant glow and come to grips with a more sober view of a world with multiple nodes and diverse forms of power. Basking in that temporary surge of American supremacy, we failed to adopt a vision of collaborative global leadership in which the United States plays an essential, but not dominant, leadership role. It is imperative that we do so now.

    On a relative basis, U.S. military and economic power, though still vast, is shrinking. Perhaps more importantly, our “soft power” (the power of our values, cultural vitality, capacity for scientific and technological innovation, and our leadership by example) has declined. Even among our allies, the United States is often seen as arrogant, greedy, too quick to use military force, and hypocritical. We are seen to support the “rules-based liberal international order” as long as we get to make the rules and enforce the order. Such efforts to assert global primacy breed particular resentment among the very diverse countries that compose the global majority. 

    Although our priority will be the security and prosperity of the United States, Americans must pursue our national interests with an understanding that, in an interdependent world, our wellbeing is directly tied to peaceful and prosperous conditions elsewhere and to the fate of the planet. Our national goals can be achieved only in concert with others and by forging common ground to generate collective benefits. Rather than striving to preserve our status as the world’s only superpower, the United States should use its great-power status to lead the community of nations in an urgent process of developing a new global system that relies on the coordination and collaboration of multiple centers of power and authority. Humility and honesty are essential: We must engage with “strategic empathy.” I do not underestimate how challenging it will be to transform the role of the United States in the world, especially given the deep divisions in U.S. domestic politics and their influence on our foreign policy. And yet, it is of critical importance that we do so.

    Second, the United States must build strength through teamwork. The freer and faster global movement of people, information, goods, money, disease, pollution, and conflict breeds a host of challenges that no single country — not even a superpower — can surmount alone: Only persistent teamwork can deal effectively with the agenda of pressing global issues. The United States must become the indispensable partner in global affairs.

    Third, the United States must develop and use a full range of tools. We must be ready to use military force when absolutely necessary to protect the homeland, to confront other urgent threats to peace and security, or to prevent genocide or other overt abuses of human rights. However, we must give priority to other tools — diplomacy chief among them — that can offer effective alternatives to military action. Larger investments in development assistance, designed with foresight and in partnership with credible local leadership, are also essential, both in post-conflict reconstruction and to ameliorate conditions that can breed conflict in the first place.

    Fourth, when circumstances warrant consideration of military action, the United States must comply with our obligations under the U.N. Charter, deploy forces only when we are confident that we are unlikely to do harm and, conversely, assess that we are well positioned to contribute to positive outcomes. Americans must finally learn from the lessons of Vietnam, the Balkans, Iraq, and Afghanistan: The use of military force without a deep understanding of the specific political, cultural, regional, and geo-strategic context and a plan for creating conditions for durable peace leads to miscalculation, prolonged engagements, excessive costs in lives and resources, and unmet objectives.

    Palace of European headquarters of United Nations, Geneva. Photo by Michael Derrer Fuchs / iStock

    Fifth, the United States should promote fair play. America earns credibility and respect when it bases its actions on its core values. The combination of esteem and tangible support is essential to keeping old friends, winning new ones, forming effective coalitions, and averting resentment and misunderstanding. Furthermore, to advance shared norms, human rights, and the rule of law as the basis for global stability and progress, America itself must play by the rules — whether in the design of trade policies, the judicious deployment of our military, the incarceration and interrogation of prisoners of war, or the use of global environmental resources. 

    We are living in a complex and dangerous world. The new test for a superpower is how well it cares for global interests. It is time for a new vision of America’s role in the world based on an understanding that what is good for the world is good for us. 


    Conclusion

    The decades ahead will bring change, uncertainty, and peril in global affairs, especially if humanity and its leaders fail to adapt. Populations around the globe are suffering from the increasingly destructive and deadly effects of climate change, which in turn fuel unprecedented levels of mass migration, social upheaval, and competition for resources. Once again, wars rage in Europe and the Middle East, while China acts on its increasingly expansive power aspirations, triggering new global tensions. Early signs suggest that AI could either save humanity or doom it. Norms of social trust are in decline, the truth is elusive, and political polarization impedes dialogue, compromise, and progress toward solutions. 

    All of these trends — environmental, demographic, geostrategic, technological, political, and institutional — represent grave challenges for old assumptions and existing frameworks. The international system is under stress and in flux. The old order is dying, and a new order is demanding to be born. Indeed, this inescapable need for renewal creates an opportunity for inspiration and invention. We are in a period of elasticity, a time when there is greater capacity for stretch in our conceptions of global relations and thinking about the international system. We must act now to guide the global community toward a more peaceful, equitable, and sustainable future. 

    Our legacy must not be one of inattention to the rising tides of crisis. Our children deserve to inherit a world structured with a logic that is relevant to their futures. The world itself deserves a logical framework that builds on the history of human progress yet recognizes and eliminates inherent flaws and anachronisms so that we may effectively confront the challenges ahead. We and our planet deserve a sustainable future.

    No one can approach this task without understanding why our world has clung to the old order. Beneficiaries of the status quo have every immediate incentive to undermine progress. Competing national interests and aspirations impede transformative thinking, and domestic politics constrain even those states that see the need for, and wish to participate in, the renewal efforts. Economic competition overrides political cooperation. And structural flaws, like those embedded in the U.N. Charter, pose formidable barriers to reform.

    In spite of such hurdles, the Pact for the Future adopted at the U.N. Summit of the Future in September 02024 represents an important milestone. The Pact commits the international community to a series of actions that, if fully realized, would meaningfully contribute to a new logic of multipolar pluralism, a more equitable distribution of power, and planetary sustainability. Global civil society must now mobilize to hold the U.N. member states to their commitments and to increase the ambition of implementation and follow-up actions. The Summit of the Future must be the starting point of an ongoing process, not a one-off event with limited impact. We must follow a new logic, advance a new ethos of caring and sharing, and construct a new institutional ecosystem to ensure that the age of turbulence does not become the age of catastrophe.

    Notes

    1. International Institute for Strategic Studies.

    2. Rebecca Falconer, “In photos: Extreme heat strikes multi continents,” Axios July 23, 02023. https://www.axios.com/2023/07/18/photos-extreme-heat-europe-us-asia .

    3. Bill Burns, 59th Ditchley Annual Lecture (RvW Fellowship, Global Order), July 1, 02023. See also Gaia Vince, “Nomad Century,” 2022.

    4. The Science and Security Board of the Bulletin of the Atomic Scientists moved the hands of the Doomsday Clock forward to 90 seconds to midnight, largely because of the threats of nuclear use by Russia in the Ukraine war but also recognizing the prospect of the new nuclear arms race: “the closest to global catastrophe it has ever been,” January 24, 02023.

    5. Generative artificial intelligence describes algorithms that are currently being used to create new content. General artificial intelligence is a theoretical concept in which future algorithms could replicate any intellectual task that humans can perform.

    6. Bill Burns, op. cit.

    7. Roger Cohen, “Russia’s War Could Make It India’s World,” New York Times, December 31, 02022, https://www.nytimes.com/2022/12/31/world/asia/india-ukraine-russia.html .

    8. Hannah Arendt, Crises of the Republic: Lying in Politics; Civil Disobedience; On Violence; Thoughts on Politics and Revolution (Houghton Mifflin Harcourt, 01972), 15.

    9. See Carrington, “The Economics of Biodiversity review: what are the recommendations?” The Guardian, February 2, 02021 and Dasgupta “The Economics of Biodiversity,” U.K. government, July 02021.

    10. The Earth Charter, June 29, 02000.

    11. Kissinger, A World Restored, 219.

    12. Kristina Lunz, The Future of Foreign Policy is Feminist, Polity Press, 02023. p. 31.

    13. Clement Beaune, French transportation minister and Macron protege, The New York Times, September 1, 02023.

    14. 02005 World Summit Outcome Document, paragraph 138.

    15. From the preamble of the UDHR.

    16. In the context of global warming and biodiversity loss, the Common But Differentiated Responsibilities principle (CBDR) recognizes that “[i]n view of the different contributions to global environmental degradation, States have common but differentiated responsibilities” (Principle 7 of the Rio Earth Summit Declaration, 01992).

    17. Charles P. Kindleberger, The World in Depression 01929-01939, (University of California Press, 01973).

    18. Ambassador (ret.) Michel Duclos, Institut Montaigne.

    19. This calculation uses the combined populations of the OECD countries (1.38 billion in 02022) as a proxy for the “Global North” and subtracts this from total 02022 global population of 7.95 billion, yielding 6.5 billion.

    20. I offer the term “global majority” as an alternative to “Global South” to acknowledge that the peoples of the countries commonly identified as the Global South make up approximately 82 percent of the world’s population and that the majority of them live north of the equator, not in the “south.”

    21. Climate change is a stark example. Countries representing the global majority are disproportionately experiencing the devastating consequences of a rapidly heating planet while having contributed very little to the emission of climate-altering greenhouse gases. They are also in desperate need of debt relief and equitable financing for investments in sustainable development and climate resilience. At the Paris Climate Conference (COP 21), wealthy countries affirmed a commitment to provide $100 billion per year by 02025 for climate action in developing countries. In 02020, the amount of funds mobilized totaled approximately $83 billion — and given the acceleration of the climate crisis, funding needs are significantly outpacing the financial support provided.

    22. See Blake and Gilman, “Governing in the Planetary Age,” Noema, March 9, 02021.

    23. Wertheim, Tomorrow the World, 1.

    24. Article 39, U.N. Charter.

    25. Additional protocols were adopted in 01977 and 02005.

    26. International Committee of the Red Cross.

    27. Montesquieu, The Spirit of Laws, 01748.

    28. Adam Tooze, “02023 Shows that Economic Growth Does Not Always Breed Peace,” Financial Times, December 22, 02023.

    29. Including the Food and Agriculture Organization, the International Labor Organization, the International Monetary Fund, the World Health Organization, and the World Bank.

    30. Lunz, op. cit., p. 63.

    31. Stockholm International Peace Research Institute (SIPRI).

    32. For example, the U.N. World Food Program, which strives to assist a record 345 million people worldwide facing food shortages in 02023, currently confronts an estimated shortfall of $15.1 billion. See also https://disarmament.unoda.org/wmd/nuclear/tpnw/ .

    33. Institute for Economics and Peace.

    34. HLAB, A Breakthrough for People and Planet, (New York: United Nations University, April 02023), xx.

    35. Since 02010, Brazil, Russia, India, China, and South Africa have been the members of the BRICS. At their August 02023 summit in Johannesburg, the group voted to add Argentina, Egypt, Ethiopia, Iran, Saudi Arabia, and the United Arab Emirates, whereupon the BRICS would account for 47 percent of the global population and nearly 37 percent of global gross domestic product (GDP) as measured by purchasing power parity (PPP), compared to the G7, which comprises less than 10 percent of the global population and 30 percent of global GDP. Sixteen additional countries have applied for BRICS membership.

    36. Samir Saran, “The new world – shaped by self-interest,” Observer Research Foundation of India, May 24, 02023.

    37. American University, June 01963.

    38. Examples include Hong Kong, Macau, and Barbados, where colonialism continued until 01997, 01999, and 02021, respectively.

    39. The World Bank, the International Monetary Fund, regional development banks, etc.

    40. See the Wellbeing Economy Alliance (WEALL).

    41. See Doughnut Economics by Kate Raworth, 02017.

    42. See the work of Dani Rodrik.

    43. https://en.wikipedia.org/wiki/Degrowth

    44. Michael Doyle, Cold Peace, 02023. See also Richard Sakwa, The Lost Peace, 02023.

    45. The first words of the U.N. Charter are “We the peoples of the United Nations…”

    46. At COP 28 in Dubai (02023), it was nearly impossible to agree on a location for COP 29 due to persistent objections by one state: Russia.

    47. HLAB op. cit. 6.

    48. IPIE Strategy and Working Paper, PeaceTech Lab, October 02022.

    49. Ian Hogarth, “We Must Slow Down the Race to God-like AI,” Financial Times, April 13, 02023.

    50. It is widely known that a ninth state, Israel, possesses an undeclared arsenal of nuclear weapons.

    51. See Jonathan Blake and Nils Gilman, “Governing in the Planetary Age,” Noema, March 9, 02021.

    52. See V-Dem Institute, “The Case for Democracy: Does Democracy Bring International and Domestic Peace and Security?” May 11, 02021, https://v-dem.net/media/publications/pb30.pdf .

    53. The Varieties of Democracy Institute (V-Dem) is a global network of social scientists who collaborate in publishing reports assessing the state of democracy worldwide.

    54. Raphael Minder, “Inside Donald Tusk’s Divisive Campaign to Restore Polish Democracy,” Financial Times, February 18, 02024.

    55. The author served as co-chair of this commission. The report can be found at https://www.amacad.org/sites/default/files/publication/downloads/2020-Democratic-Citizenship_Our-Common-Purpose.pdf .

    56. Innovative Citizen Participation and New Democratic Institutions, OECD, June 02020.

    57. Joseph Nye, op-ed, Financial Times, November 17, 02023.

    58. See Stephen Roach, “A New Architecture for US-China Engagement,” China-US Focus, June 8, 02023.

    59. https://www.ourchildrenstrust.org/juliana-v-us .

    60. A more comprehensive explanation of these thoughts can be found here: https://www.robertboschacademy.de/en/perspectives/transformed-us-leadership-age-turbulence .

    This essay was originally published by the Rockefeller Brothers Fund and is republished here under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. The original version can be accessed at Rockefeller Brothers Fund.

    365 Tomorrows: I May Be Gone Some Time

    Author: K. E. Redmond He stared at the blue and white globe passing beneath him, watching the dark shadow cut across its surface. Once, the dark had been alive with light like glowing fungus. He’d imagined pearls of highways, puddles beneath streetlamps, neon signs. As the lights winked out, the smog dissipated. In daylight, he […]

    The post I May Be Gone Some Time appeared first on 365tomorrows.

    Worse Than Failure CodeSOD: An Argument With QA

    Markus does QA, and this means writing automated tests which wrap around the code written by developers. Mostly this is a "black box" situation, where Markus doesn't look at the code, and instead goes by the interface and the requirements. Sometimes, though, he does look at the code, and wishes he hadn't.

    Today's snippet comes from a program which is meant to generate PDF files and then, optionally, email them. There are a few methods we're going to look at, because they invested a surprising amount of code into doing this the wrong way.

    protected override void Execute()
    {
    	int sendMail = this.VerifyParameterValue(ParamSendMail);
    
    	if (sendMail == -1)
    		return;
    
    	if (sendMail == 1)
    		mail = true;
    
    	this.TraceOutput(Properties.Resources.textGetCustomerForAccountStatement);
    	IList<CustomerModel> customers = AccountStatement.GetCustomersForAccountStatement();
    	if (customers.Count == 0) return;
    
    	StreamWriter streamWriter = null;
    	if (mail)
    		streamWriter = AccountStatement.CreateAccountStatementLogFile();
    
    	CreateAccountStatementDocumentEngine engine = new CreateAccountStatementDocumentEngine();
    
    	foreach (CustomerModel customer in customers)
    	{
    		this.TraceOutput(Properties.Resources.textCustomerAccountStatementBegin + customer.DisplayName.ToString());
    
    		// Generate the PDF, optionally send an email with the document attached
    		engine.Execute(customer, mail);
    
    		if (mail)
    		{
    			AccountStatement.WriteToLogFile(customer, streamWriter);
    			this.TraceOutput(Properties.Resources.textLogWriting);
    		}
    	}
    	engine.Dispose();
    	if (streamWriter != null)
    		streamWriter.Close();
    }
    

    Now, this might sound unfair, but right off the bat I'm going to complain about separation of concerns. This function both generates output and emails it (optionally), while handling all of the stream management. Honestly, I think if the developer were simply forced to go back and make this a set of small, cohesive methods, most of the WTFs would vanish. But there's more to say here.
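
    To make that concrete, here is a rough sketch of the kind of decomposition I mean. It reuses the types from the snippet above (CustomerModel, AccountStatement, CreateAccountStatementDocumentEngine), so it isn't standalone code, and the helper names (ReadSendMailFlag, GenerateStatements) are invented for illustration:

    protected override void Execute()
    {
        // Execute() only sequences the steps; each concern lives in its own method.
        bool? sendMail = ReadSendMailFlag();
        if (sendMail == null)
            return;

        IList<CustomerModel> customers = AccountStatement.GetCustomersForAccountStatement();
        if (customers.Count == 0)
            return;

        GenerateStatements(customers, sendMail.Value);
    }

    // Hypothetical helper: all argument handling in one place.
    private bool? ReadSendMailFlag()
    {
        int value = this.VerifyParameterValue(ParamSendMail);
        return value == -1 ? (bool?)null : value == 1;
    }

    // Hypothetical helper: PDF generation plus optional mail logging (trace calls omitted for brevity).
    private void GenerateStatements(IList<CustomerModel> customers, bool sendMail)
    {
        using (var engine = new CreateAccountStatementDocumentEngine())
        using (StreamWriter log = sendMail ? AccountStatement.CreateAccountStatementLogFile() : null)
        {
            foreach (CustomerModel customer in customers)
            {
                engine.Execute(customer, sendMail);
                if (sendMail)
                    AccountStatement.WriteToLogFile(customer, log);
            }
        }
    }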

    Specifically, let's look at the first few lines, where we VerifyParameterValue. Note that this function clearly returns -1 when it fails, which is a very C-programmer-forced-to-do-OO idiom. But let's look at that method.

    private int VerifyParameterValue(string name)
    {
    	string stringValue = this.GetParam(name, string.Empty);
    
    	bool isValid = this.VerifyByParameterFormat(name, stringValue);
    
    	if (!isValid)
    		return -1;
    
    	int value = -1;
    
    	try
    	{
    		value = Convert.ToInt32(stringValue);
    	}
    	catch (Exception e)
    	{
    		this.TraceOutput(string.Concat("\n\n\n", e.Message, "\n\n\n"));
    
    		return -1;
    	}
    	return value;
    }
    

    We'll come back to the VerifyByParameterFormat, but otherwise, this is basically a wrapper around Convert.ToInt32, and could easily be replaced with Int32.TryParse.

    Bonus points for spamming the log output with loads of newlines.
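
    For comparison, the whole thing collapses to a few lines with Int32.TryParse. This is only a sketch against the members that appear in the snippet (GetParam, TraceOutput, Properties.Resources), not a drop-in patch, and it keeps the original's awkward -1-means-failure contract:

    private int VerifyParameterValue(string name)
    {
        // TryParse replaces both the manual digit scan and the try/catch around Convert.ToInt32.
        // Note: a literal "-1" argument would still collide with the error value, exactly as before.
        string stringValue = this.GetParam(name, string.Empty);
        if (int.TryParse(stringValue, out int value))
            return value;

        this.TraceOutput(string.Format(Properties.Resources.textSendMailNotValid, stringValue));
        return -1;
    }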

    Okay, but what is the VerifyByParameterFormat doing?

    private bool VerifyByParameterFormat(string name, string value)
    {
    	string parameter = string.Empty;
    	string message = string.Empty;
    	
    	if (value.Length != 1)
    	{
    		parameter = Properties.Resources.textSendMail;
    		message = string.Format(Properties.Resources.textSendMailNotValid, value);
    
    		this.TraceOutput(string.Concat("\n\n\n", message, "\n\n\n"));
    
    		return false;
    	}
    
    	string numbers = "0123456789";
    	char[] characters = value.ToCharArray();
    
    	for (byte i = 0; i < characters.Length; i++)
    	{
    		if (numbers.IndexOf(characters[i]) < 0)
    		{
    			parameter = Properties.Resources.textSendMail;
    			message = Properties.Resources.textSendMailNotValid;
    			
    			this.TraceOutput(string.Concat("\n\n\n", message, "\n\n\n"));
    			return false;
    		}
    	}
    	return true;
    }
    

    Oh, it just goes character by character to verify whether or not this is made up of only digits. Which, by the way, means the CLI argument needs to be an integer, and only when that integer is 1 do we send emails. It's a boolean, but worse.

    Let's assume, however, that passing numbers is required by the specification. Still, Markus has thoughts:

    Handling this command line argument might seem obvious enough. I'd probably do something along the lines of "if (arg == "1") { sendMail = true } else if (arg != "0") { tell the user they're an idiot }. Of course, I'm not a professional programmer, so my solution is way too simple as the attached piece of code will show you.

    There are better ways to do it, Markus, but as you've shown us, there are definitely worse ways.
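
    For what it's worth, a self-contained version of that simpler approach might look like the sketch below. The class name and message text are invented for illustration; only the "0"/"1" convention comes from the original program:

    using System;

    static class SendMailFlag
    {
        // The "1"/"0" command line argument is really a boolean in disguise.
        public static bool Parse(string arg) => arg switch
        {
            "1" => true,
            "0" => false,
            _ => throw new ArgumentException($"Expected 0 or 1 for the send-mail flag, got '{arg}'."),
        };
    }

    // Example usage: bool sendMail = SendMailFlag.Parse(args[0]);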


    ,

    Cryptogram The Combined Cipher Machine

    Interesting article—with photos!—of the US/UK “Combined Cipher Machine” from WWII.

    Worse Than Failure CodeSOD: Wrap Up Your Date

    Today, we look at a simple bit of bad code. The badness is not that they're using Oracle, though that's always bad. But it's how they're writing this PL/SQL stored function:

      FUNCTION CONVERT_STRING_TO_DATE --Public
        (p_date_string IN Varchar2,
           p_date_format IN Varchar2 DEFAULT c_date_format)
        Return Date
    
       AS
    
       BEGIN
    
        If p_date_string Is Null Then
            Return Null;
          Else
            Return To_Date(p_date_string, p_date_format);
          End If;
    
       END;  -- FUNCTION CONVERT_STRING_DATE
    

    This code is a wrapper around the to_date function. The to_date function takes a string and a format and returns the string, parsed according to that format, as a date.

    This wrapper adds two things, and the first is a null check. If the input string is null, just return null. Except that's exactly how to_date behaves anyway.

    The second is that it sets the default format to c_date_format. This, actually, isn't a terrible thing. If you check the docs on the function, you'll see that if you don't supply a format, it defaults to whatever is set in your internationalization settings, and Oracle recommends that you don't rely on that.

    On the flip side, this code is used as part of queries, not on processing input, which means that they're storing dates as strings, and relying on the application layer to send them properly formatted strings. So while their to_date wrapper isn't a terrible thing, storing dates as strings definitely is a terrible thing.
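
    If the application layer is going to be involved anyway, the more durable fix is usually to stop passing formatted strings at all: store the column as a DATE and bind DateTime values as parameters. Here is a minimal sketch using the generic ADO.NET interfaces; the table and column names are invented, and the :name placeholder follows the usual Oracle convention (exact parameter syntax depends on the provider):

    using System;
    using System.Data;

    static class StatementQueries
    {
        // Bind a real DateTime instead of a pre-formatted string,
        // so the database compares dates rather than text.
        public static IDbCommand BuildCutoffQuery(IDbConnection connection, DateTime cutoff)
        {
            IDbCommand command = connection.CreateCommand();
            command.CommandText =
                "SELECT customer_id FROM account_statements WHERE statement_date >= :cutoff";

            IDbDataParameter parameter = command.CreateParameter();
            parameter.ParameterName = "cutoff";
            parameter.DbType = DbType.Date;
            parameter.Value = cutoff;
            command.Parameters.Add(parameter);

            return command;
        }
    }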


    365 Tomorrows: Zairajah

    Author: Majoki It started with a chatbot and ended in, well, that would be predicting the future. Which is exactly my problem. I’m sure I’m not the only computer science graduate student into astrology, Tarot cards, numerology, palm reading, and other fortune-telly kind of things, but I’m the one who, late one night, asked a […]

    The post Zairajah appeared first on 365tomorrows.