Planet Russell


Planet Debian – Guido Günther: Free Software Activities November 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.43 release, the initial file chooser portal, phosh-osk-stub now handling digit, number, phone and PIN input purpose via special layouts, as well as Phoc mostly catching up with wlroots 0.18 and the current development version targeting 0.19.

phosh

  • When taking a screenshot via keybinding or power button long press, save screenshots to clipboard and disk (MR)
  • Robustify Screenshot CI job (MR)
  • Update CI pipeline (MR)
  • Fix notification banners not being shown when they aren't tall enough (MR). Another 4-year-old bug hopefully out of the way.
  • Add rfkill mock and docs (MR). Useful for HKS testing.
  • Release 0.43~rc1 and 0.43
  • Drop libsoup workaround (MR)
  • Ensure notification only takes its actual height (MR)

phoc

  • Move wlroots 0.18 update forward (MR). Needs a bit more work before we can make it default.
  • Catch up with wlroots development branch (MR) allowing us to test current wlroots again.
  • Some of the above already applies to main so schedule it for 0.44 (MR)

phosh-mobile-settings

  • Don't mark release notes as translatable to save some i18n effort (MR)
  • Release 0.43~rc1 and 0.43.0

libphosh-rs

phosh-osk-stub

  • Add layouts for PIN, number and phone input purpose (MR)
  • Release 0.43~rc1
  • Ensure translations get picked up, various cleanups and release 0.43.0 (MR)
  • Make desktop file match app-id (MR)

phosh-tour

  • Fix typo and reduce number of strings to translate (MR)
  • Add translator comments (MR). This, the above and additional fixes in p-m-s were prompted by i18n feedback from Alexandre Franke, thanks a lot!
  • Release 0.43.0

pfs

  • Initial version of the adaptive file chooser dialog using gtk-rs. See demo.
  • Allow to activate via double click (for non-touch use) (MR)

xdg-desktop-portal-phosh

  • Use pfs to provide a file chooser portal (MR)

meta-phosh

  • Slightly improve point release handling (MR)
  • Improve string freeze announcements and add phosh-tour (MR)

Debian

  • Upload Phosh 0.43.0~rc1 and 0.43.0 (MR, MR, MR, MR, MR, MR, MR, MR, MR, MR, MR)
  • meta-phosh: Add Recommend: for xdg-desktop-portal-phosh (MR)
  • phosh-osk-data got accepted: create repo, brush up packaging and upload to unstable (MR)
  • phosh-osk-stub: Recommend data package (MR)
  • Phosh: drop reverts (MR)
  • varnam-schemes: Fix autopkgtest (MR)
  • varnam-schemes: Improve packaging (MR)
  • Prepare govarnam 1.9.1 (MR)

Calls

  • ussd: Set input purpose and switch to AdwDialog (MR, Screenshot)

libcall-ui

  • Drop libhandy leftover (MR)

git-buildpackage

  • Improve docs and cleanup markdown (MR)
  • Mention gbp push in intro (MR)
  • Use application instead of productname entities to improve reading flow (MR)

wlroots

  • Drop mention of wlr_renderer_begin_with_buffer (MR)

python-dbusmock

  • Add mock for gsd-rfkill (MR)

xdg-spec

  • Sync notification categories with the portal spec (MR)
  • Add categories for SMS (MR)
  • Add a pubdate so it's clear the specs aren't stale (MR) (got fixed in a different and better way, thanks Matthias!)

ashpd

  • Allow to set filters in file chooser portal demo (MR)

govarnam

  • Robustify file generation (MR)

varnam-schemes

  • Unbreak tests on non intel/amd architectures (e.g. arm64) (MR)

Reviews

This is not code by me but reviews I did on other people's code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

  • flathub: livi runtime and gst update (MR)
  • phosh: Split linters into their own test suite (MR)
  • phosh: QuickSettings follow-up (MR)
  • phosh: Accent color fixes (MR)
  • phosh: Notification animation (MR)
  • phosh: end-session dialog timeout fix (MR)
  • phosh: search daemon (MR)
  • phosh-ev: Migrate to newer gtk-rs and async_channel (MR)
  • phosh-mobile-settings: Update gmobile (MR)
  • phosh-mobile-settings: Make panel-switcher scrollable (MR)
  • phosh-mobile-settings: i18n comments (MR)
  • gbp doc updates (MR)
  • gbp handle suite names with number prefix (MR)
  • Debian libvirt dependency changes (MR)
  • Chatty: misc improvements (MR)
  • iio-sensor-proxy: buffer driver without trigger (MR)
  • gbp doc improvements (MR)
  • gbp: More doc improvements (MR)
  • gbp: Clean on failure (MR)
  • gbp: DEP naming consistency (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

Planet Debian – Junichi Uekawa: Lots of travel and back to Tokyo.

Lots of travel and back to Tokyo. Then I got sick. Trying to work on my bass piece, but it's really hard and I am having a hard time getting it to a reasonable shape. Discussions on the Debconf 2026 bid. Hoping it will materialize soon.

365 Tomorrows – Shadow Memorabilia

Author: Rick Tobin Jason continued to turn a small half-fried reptile in a solar oven. Cooking took longer on this world with its distant red sun. Bursts of drifting dust blew over him and his two companion portal flyers. Emily was rinsing her hair delicately with precious water from the tiny oasis near the rocky […]

The post Shadow Memorabilia appeared first on 365tomorrows.

Planet Debian – Russ Allbery: Review: Unexploded Remnants

Review: Unexploded Remnants, by Elaine Gallagher

Publisher: Tordotcom
Copyright: 2024
ISBN: 1-250-32522-6
Format: Kindle
Pages: 111

Unexploded Remnants is a science fiction adventure novella. The protagonist and world background would support an episodic series, but as of this writing it stands alone. It is Elaine Gallagher's first professional publication.

Alice is the last survivor of Earth: an explorer, information trader, and occasional associate of the Archive. She scouts interesting places, looks for inconsistencies in the stories the galactic civilizations tell themselves, and pokes around ruins for treasure. As this story opens, she finds a supposedly broken computer core in the Alta Sidoie bazaar that is definitely not what the trader thinks it is. Very shortly thereafter, she's being hunted by a clan of dangerous Delosi while trying to decide what to do with a possibly malevolent AI with frightening intrusion abilities.

This is one of those stories where all the individual pieces sounded great, but the way they were assembled didn't click for me. Unusually, I'm not entirely sure why. Often it's the characters, but I liked Alice well enough. The Lewis Carroll allusions were there but not overdone, her computer agent Bugs is a little too much of a Warner Brothers cartoon but still interesting, and the world building has plenty of interesting hooks. I certainly can't complain about the pacing: the plot moves briskly along to a somewhat predictable but still adequate conclusion. The writing is smooth and competent, and the world is memorable enough that I'm still thinking about it.

And yet, I never connected with this story. I think it may be because both Alice and the tight third-person narrator tend towards breezy confidence and matter-of-fact descriptions. Alice does, at times, get scared or angry, but I never felt those emotions. They were just events that were described to me. There wasn't an emotional hook, a place where the character grabbed me, and so it felt like everything was happening at an odd remove. The advantage of this approach is that there are no overwrought emotional meltdowns or brooding angstful protagonists, just an adventure story about a competent and thoughtful character, but I think I wanted a bit more emotional involvement than I got.

The world background is the best part and feels like it could be part of a larger series. The Milky Way is connected by an old, vast, and only partly understood network of teleportation portals, which had cut off Earth for unknown reasons and then just as mysteriously reactivated when Alice, then Andrew, drunkenly poked at a standing stone while muttering an old prayer in Gaelic. The Archive spent a year sorting out her intellectual diseases (capitalism was particularly alarming) and giving her a fresh start with a new body. Humanity subsequently destroyed itself in a paroxysm of reactionary violence, leaving Alice a free agent, one of a kind in a galaxy of dizzying variety and forgotten history.

Gallagher makes great use of the weirdness of the portal network to create a Star Wars style of universe: the focus is more on the diversity of the planets and alien species than on a coherent unifying structure. The settings of this book are not prone to Planet of the Hats problems. They instead have the contrasts that one would get if one dropped portals near current or former Earth population centers and then took a random walk through them (or, in other words, what playing GeoGuessr on a world map feels like). I liked this effect, but I have to admit that it also added to that sense of sliding off the surface of the story. The place descriptions were great bits of atmosphere, but I never cared about them. There isn't enough emotional coherence to make them memorable.

One of the more notable quirks of this story is the description of ideologies and prejudices as viral memes that can be cataloged, cured, and deployed like weapons. This is a theme of the world-building as well: this society, or at least the Archive-affiliated parts of it, classifies some patterns of thought as potentially dangerous but treatable contagious diseases. I'm not going to object too much to this as a bit of background and characterization in a fairly short novella stuffed with a lot of other world-building and plot, but there was something about treating ethical systems like diseases that bugged me in much the same way that medicalization of neurodiversity bugs me. I think some people will find that sense of moral clarity relaxing and others will find it vaguely irritating, and I seem to have ended up in the second group.

Overall, I would classify this as an interesting not-quite-success. It felt like a side story in a larger universe, like a story that would work better if I already knew Alice from other novels and had an established emotional connection with her. As is, I would not really recommend it, but there are enough good pieces here that I would be interested to see what Gallagher does next.

Rating: 6 out of 10


Planet Debian – Dima Kogan: Strava track filtering validation

After years of seeing people's strava tracks, I became convinced that they insufficiently filter the data, resulting in over-estimating the effort. Today I did a bit of lazy analysis, and half-confirmed this: in the one case I looked at, strava reported reasonable elevation gain numbers, but greatly overestimated the distance traveled.

I looked at a single gps track of a long bike ride. This was uploaded to strava manually, as a .gpx file. I can imagine that different things happen if you use the strava app or some device that integrates with the service (the filtering might happen before the data hits the server, and the server could decide to not apply any more filtering).

I processed the data with a simple hysteretic filter, ignoring small changes in position and elevation, trying out different thresholds in the process. I completely ignore the timestamps, and only look at the differences between successive points. This handles the usual GPS noise; it does not handle GPS jumps, which I completely ignore in this analysis. Ignoring these would produce inflated elevation/gain numbers, but I'm working with a looong track, so hopefully this is a small effect.

Clearly this is not scientific, but it's something.

The code

Parsing .gpx is slow (this is a big file), so I cache that into a .vnl:

import sys
import gpxpy

filename_in  = 'INPUT.gpx'
filename_out = 'OUTPUT.vnl'

with open(filename_in, 'r') as f:
    gpx = gpxpy.parse(f)

f_out = open(filename_out, 'w')

tracks = gpx.tracks
if len(tracks) != 1:
    print("I want just one track", file=sys.stderr)
    sys.exit(1)
track = tracks[0]

segments = track.segments
if len(segments) != 1:
    print("I want just one segment", file=sys.stderr)
    sys.exit(1)
segment = segments[0]

time0 = segment.points[0].time
print("# time lat lon ele_m", file=f_out)
for p in segment.points:
    print(f"{int((p.time - time0).total_seconds())} {p.latitude} {p.longitude} {p.elevation}",
          file = f_out)

And I process this data with the different filters (this is a silly Python loop, and is slow):

#!/usr/bin/python3

import sys
import numpy as np
import numpysane as nps
import gnuplotlib as gp
import vnlog
import pyproj

geod = None
def dist_ft(lat0,lon0, lat1,lon1):

    global geod
    if geod is None:
        geod = pyproj.Geod(ellps='WGS84')
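    # geod.inv() returns (forward azimuth, back azimuth, distance in meters);
    # take the distance and convert meters to feet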
    return \
        geod.inv(lon0,lat0, lon1,lat1)[2] * 100./2.54/12.




f = 'OUTPUT.vnl'

track,list_keys,dict_key_index = \
    vnlog.slurp(f)

t      = track[:,dict_key_index['time' ]]
lat    = track[:,dict_key_index['lat'  ]]
lon    = track[:,dict_key_index['lon'  ]]
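# elevation arrives in meters; convert to feet, matching dist_ft() above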
ele_ft = track[:,dict_key_index['ele_m']] * 100./2.54/12.



@nps.broadcast_define( ( (), ()),
                       (2,))
def filter_track(ele_hysteresis_ft,
                 dxy_hysteresis_ft):
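    # Hysteresis: accept a new point only once it has moved at least
    # dxy_hysteresis_ft horizontally or ele_hysteresis_ft vertically away
    # from the last accepted point; distance and climb accumulate only
    # between accepted points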

    dist        = 0.0
    ele_gain_ft = 0.0

    lon_accepted = None
    lat_accepted = None
    ele_accepted = None

    for i in range(len(lat)):

        if ele_accepted is not None:
            dxy_here  = dist_ft(lat_accepted,lon_accepted, lat[i],lon[i])
            dele_here = np.abs( ele_ft[i] - ele_accepted )

            if dxy_here < dxy_hysteresis_ft and dele_here < ele_hysteresis_ft:
                continue

            if ele_ft[i] > ele_accepted:
                ele_gain_ft += dele_here;

            dist += np.sqrt(dele_here * dele_here +
                            dxy_here  * dxy_here)

        lon_accepted = lon[i]
        lat_accepted = lat[i]
        ele_accepted = ele_ft[i]

    # lose the last point. It simply doesn't matter

    dist_mi = dist / 5280.
    return np.array((ele_gain_ft, dist_mi))




Nele_hysteresis_ft    = 20
ele_hysteresis0_ft    = 5
ele_hysteresis1_ft    = 100
ele_hysteresis_ft_all = np.linspace(ele_hysteresis0_ft,
                                    ele_hysteresis1_ft,
                                    Nele_hysteresis_ft)

Ndxy_hysteresis_ft = 20
dxy_hysteresis0_ft = 5
dxy_hysteresis1_ft = 1000
dxy_hysteresis_ft  = np.linspace(dxy_hysteresis0_ft,
                                 dxy_hysteresis1_ft,
                                 Ndxy_hysteresis_ft)


# shape (Nele,Ndxy,2)
gain,distance = \
    nps.mv( filter_track( nps.dummy(ele_hysteresis_ft_all,-1),
                          dxy_hysteresis_ft),
            -1,0 )


# Stolen from mrcal
def options_heatmap_with_contours( plotoptions, # we update this on output

                                   *,
                                   contour_min           = 0,
                                   contour_max,
                                   contour_increment     = None,
                                   do_contours           = True,
                                   contour_labels_styles = 'boxed',
                                   contour_labels_font   = None):
    r'''Update plotoptions, return curveoptions for a contoured heat map'''

    gp.add_plot_option(plotoptions,
                       'set',
                       ('view equal xy',
                        'view map'))

    if do_contours:
        if contour_increment is None:
            # Compute a "nice" contour increment. I pick a round number that gives
            # me a reasonable number of contours

            Nwant = 10
            increment = (contour_max - contour_min)/Nwant

            # I find the nearest 1eX or 2eX or 5eX
            base10_floor = np.power(10., np.floor(np.log10(increment)))

            # Look through the options, and pick the best one
            m   = np.array((1., 2., 5., 10.))
            err = np.abs(m * base10_floor - increment)
            contour_increment = -m[ np.argmin(err) ] * base10_floor
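            # negative increment: the "cntrparam levels incremental" setting
            # below counts down from contour_max to contour_min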

        gp.add_plot_option(plotoptions,
                           'set',
                           ('key box opaque',
                            'style textbox opaque',
                            'contour base',
                            f'cntrparam levels incremental {contour_max},{contour_increment},{contour_min}'))

        if contour_labels_font is not None:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%d" font "{contour_labels_font}"' )
        else:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%.0f"' )

        plotoptions['cbrange'] = [contour_min, contour_max]

        # I plot 3 times:
        # - to make the heat map
        # - to make the contours
        # - to make the contour labels
        _with = np.array(('image',
                          'lines nosurface',
                          f'labels {contour_labels_styles} nosurface'))
    else:
        gp.add_plot_option(plotoptions, 'unset', 'key')
        _with = 'image'
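
    # map gnuplot's integer sample indices ($1,$2) back to the hysteresis
    # values they represent, so the axes are labeled in feet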

    using = \
        f'({dxy_hysteresis0_ft}+$1*{float(dxy_hysteresis1_ft-dxy_hysteresis0_ft)/(Ndxy_hysteresis_ft-1)}):' + \
        f'({ele_hysteresis0_ft}+$2*{float(ele_hysteresis1_ft-ele_hysteresis0_ft)/(Nele_hysteresis_ft-1)}):3'
    plotoptions['_3d']     = True
    plotoptions['_xrange'] = [dxy_hysteresis0_ft,dxy_hysteresis1_ft]
    plotoptions['_yrange'] = [ele_hysteresis0_ft,ele_hysteresis1_ft]
    plotoptions['ascii']   = True # needed for using to work

    gp.add_plot_option(plotoptions, 'unset', 'grid')

    return \
        dict( tuplesize=3,
              legend = "", # needed to force contour labels
              using = using,
              _with=_with)




contour_granularity = 1000
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(gain) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(gain) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(gain,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Elevation gain (ft)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed gain vs filtering parameters')


contour_granularity = 10
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(distance) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(distance) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(distance,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Distance (miles)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed distance vs filtering parameters')

Results: gain

Strava says the gain was 46307ft. The analysis says:

[Figure: strava-gain.png: computed gain vs filtering parameters]

[Figure: strava-gain-zoom.png: the same data, zoomed in]

These show the filtered gain for different values of the distance and gain hysteresis thresholds. The same data is shown at different zoom levels. There's no sweet spot, but we get 46307ft with a reasonable amount of filtering. Maybe 46307ft is even a bit low.

Results: distance

Strava says the distance covered was 322 miles. The analysis says:

[Figure: strava-distance.png: computed distance vs filtering parameters]

[Figure: strava-distance-zoom.png: the same data, zoomed in]

Once again, there's no sweet spot, but we get 322 miles only if we apply no filtering at all. That's clearly too high, and is not reasonable. From the map (and from other people's strava routes) the true distance is closer to 305 miles. Why those people's strava numbers are more believable is anybody's guess.

Planet Debian – Enrico Zini: New laptop setup

My new laptop Framework (Framework Laptop 13 DIY Edition (AMD Ryzen™ 7040 Series)) arrived, all the hardware works out of the box on Debian Stable, and I'm very happy indeed.

This post has the notes of all the provisioning steps, so that I can replicate them again if needed.

Installing Debian 12

Debian 12's installer just worked, with Secure Boot enabled no less, which was nice.

The only glitch was an argument with the guided partitioner, which was uncooperative: I have been hit before by a /boot partition that was too small, and I wanted 1G of EFI and 1G of /boot, while the partitioner decided that 512MB was good enough. Frustratingly, there was no way of changing that, nor did I find how to get more than 1G of swap, as I wanted enough swap to fit RAM for hibernation.

I let it install the way it pleased, then I booted into grml for a round of gparted.

The tricky part of that was resizing the root btrfs filesystem, which is in an LV, which is in a VG, which is in a PV, which is in LUKS. Here's a cheatsheet.

Shrink partitions:

  • mount the root filesystem in /mnt
  • btrfs filesystem resize 6G /mnt
  • umount the root filesystem
  • lvresize -L 7G vgname/lvname
  • pvresize --setphysicalvolumesize /dev/mapper/pvname 8G
  • cryptsetup resize --device-size 9G name

Note that I used increasing sizes because I don't trust that each tool represents sizes in a way that aligns to the byte. I'd be happy to find out that they do, but I didn't want to find out the hard way that they don't.

Resize with gparted:

Move and resize partitions at will. Shrinking first means it all takes a reasonable time, and you won't have to wait almost an hour for a terabyte-sized empty partition to be carefully moved around. Don't ask me why I know.

Regrow partitions:

  • cryptsetup resize name
  • pvresize /dev/mapper/pvname
  • lvresize -l +100%FREE vgname/lvname
  • mount the root filesystem in /mnt
  • btrfs filesystem resize max /mnt
  • umount the root filesystem
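Putting the shrink and regrow halves together as a single sketch (the vgname/lvname, pvname and LUKS mapping name are placeholders, the /dev/vgname/lvname mount path is an assumption about your device naming, and the sizes are just the example values from above):

mount /dev/vgname/lvname /mnt                             # mount the root filesystem
btrfs filesystem resize 6G /mnt                           # shrink the filesystem first
umount /mnt
lvresize -L 7G vgname/lvname                              # then the LV...
pvresize --setphysicalvolumesize 8G /dev/mapper/pvname    # ...then the PV...
cryptsetup resize --device-size 9G name                   # ...and finally LUKS

# move/resize partitions with gparted here, then regrow in reverse order:

cryptsetup resize name                                    # grow LUKS to the partition
pvresize /dev/mapper/pvname                               # grow the PV
lvresize -l +100%FREE vgname/lvname                       # grow the LV into all free space
mount /dev/vgname/lvname /mnt
btrfs filesystem resize max /mnt                          # grow the filesystem last
umount /mnt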

Setup gnome

When I get a new laptop I have a tradition of trying to make it work with Gnome and Wayland, which normally ended up in frustration and a swift move to X11 and Xfce: I have a lot of long-time muscle memory involved in how I use a computer, and it needs to fit like prosthetics. I can learn to do a thing or two in a different way, but any papercut that makes me break flow and I cannot fix will soon become a dealbreaker.

This applies to Gnome as present in Debian Stable.

General Gnome settings tips

I can list all available settings with:

gsettings list-recursively

which is handy for grepping things like hotkeys.

I can manually set a value with:

gsettings set <schema> <key> <value>

and I can reset it to its default with:

gsettings reset <schema> <key>

Some applications like Gnome Terminal use "relocatable schemas", and in those cases you also need to specify a path, which can be discovered using dconf-editor:

gsettings set <schema>:<path> <key> <value>
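For example, the Gnome Terminal keybindings schema used further below lives at a fixed path, so one of its keys can be set like this (the <Super>c value is just an illustration):

gsettings set org.gnome.Terminal.Legacy.Keybindings:/org/gnome/terminal/legacy/keybindings/ copy '<Super>c'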

Install appindicators

First thing first: apt install gnome-shell-extension-appindicator, log out and in again: the Gnome Extension manager won't see the extension as available until you restart the whole session.

I have no idea why that is so, and I have no idea why a notification area is not present in Gnome by default, but at least now I can get one.

Fix font sizes across monitors

My laptop screen and monitor have significantly different DPIs, so:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

And in Settings/Displays, set a reasonable scaling factor for each display.

Disable Alt/Super as hotkey for the Overlay

Seeing all my screen reorganize and reshuffle every time I accidentally press Alt leaves me disoriented and seasick:

gsettings set org.gnome.mutter overlay-key ''

Focus-follows-mouse and Raise-or-lower

My desktop is like my desk: messy and cluttered. I have lots of overlapping windows and I switch between them by moving the focus with the mouse, and when the visible part is not enough I have a handy hotkey mapped to raise-or-lower to bring forward what I need and send back what I don't need anymore.

Thankfully Gnome can be configured that way, with some work:

  • In gnome-shell settings, keyboard, shortcuts, windows, set "Raise window if covered, otherwise lower it" to "Super+Escape"
  • In gnome-tweak-tool, Windows, set "Focus on Hover"

This almost worked, but sometimes it didn't do what I wanted, like when I expected a window to come to the front but another window disappeared instead. I eventually figured out that by default Gnome delays focus changes by a perceivable amount, which is evidently too slow for the way I move around windows.

The amount cannot be shortened, but it can be removed with:

gsettings set org.gnome.shell.overrides focus-change-on-pointer-rest false

Mouse and keyboard shortcuts

Gnome has lots of preconfigured sounds, shortcuts, animations and other distractions that I do not need. They also either interfere with key combinations I want to use in terminals, or cause accidental window moves or resizes that make me break flow, or otherwise provide sensory overstimulation that really does not work for me.

It was a lot of work, and these are the steps I used to get rid of most of them.

Disable Super+N combinations that accidentally launch a questionable choice of programs:

for i in `seq 1 9`; do gsettings set org.gnome.shell.keybindings switch-to-application-$i '[]'; done

Gnome-Shell settings:

  • Multitasking:
    • disable hot corner
    • disable active edges
    • set a fixed number of workspaces
    • workspaces on all displays
    • switching includes apps from current workspace only
  • Sound:
    • disable system sounds
  • Keyboard
    • Compose Key set to Caps Lock
    • View and Customize Shortcuts:
      • Launchers
        • launch help browser: remove
      • Navigation
        • move to workspace on the left: Super+Left
        • move to workspace on the right: Super+Right
        • move window one monitor …: remove
        • move window one workspace to the left: Shift+Super+Left
        • move window one workspace to the right: Shift+Super+Right
        • move window to …: remove
        • switch system …: remove
        • switch to …: remove
        • switch windows …: disabled
      • Screenshots
        • Record a screenshot interactively: Super+Print
        • Take a screenshot interactively: Print
        • Disable everything else
      • System
        • Focus the active notification: remove
        • Open the application menu: remove
        • Restore the keyboard shortcuts: remove
        • Show all applications: remove
        • Show the notification list: remove
        • Show the overview: remove
        • Show the run command prompt: remove (the default Gnome launcher is not for me; Super+F2 is instead bound to xfrun4 below, or left to the terminal)
      • Windows
        • Close window: remove
        • Hide window: remove
        • Maximize window: remove
        • Move window: remove
        • Raise window if covered, otherwise lower it: Super+Escape
        • Resize window: remove
        • Restore window: remove
        • Toggle maximization state: remove
    • Custom shortcuts
      • xfrun4, launching xfrun4, bound to Super+F2
  • Accessibility:
    • disable "Enable animations"

gnome-tweak-tool settings:

  • Keyboard & Mouse
    • Overview shortcut: Right Super. This cannot be disabled, but since my keyboard doesn't have a Right Super button, that's good enough for me. Oddly, I cannot find this in gsettings.
  • Window titlebars
    • Double-Click: Toggle-Maximize
    • Middle-Click: Lower
    • Secondary-Click: Menu
  • Windows
    • Resize with secondary click

Gnome Terminal settings:

Thankfully 10 years ago I took notes on how to customize Gnome Terminal, and they're still mostly valid:

  • Shortcuts

    • New tab: Super+T
    • New window: Super+N
    • Close tab: disabled
    • Close window: disabled
    • Copy: Super+C
    • Paste: Super+V
    • Search: all disabled
    • Previous tab: Super+Page Up
    • Next tab: Super+Page Down
    • Move tab…: Disabled
    • Switch to tab N: Super+Fn (only available after disabling overview)
    • Switch to tab N with Alt+Fn cannot be configured in the UI: Alt+Fn is detected as simply Fn. It can however be set with gsettings:

      for i in `seq 1 12`; do gsettings set org.gnome.Terminal.Legacy.Keybindings:/org/gnome/terminal/legacy/keybindings/ switch-to-tab-$i "<Alt>F$i"; done

  • Profile

    • Text
      • Sound: disable terminal bell

Other hotkeys that got in my way and had to be disabled the hard way:

for n in `seq 1 12`; do gsettings set org.gnome.mutter.wayland.keybindings switch-to-session-$n '[]'; done
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-down '[]'
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-up '[]'
gsettings set org.gnome.desktop.wm.keybindings panel-main-menu '[]'
gsettings set org.gnome.desktop.interface menubar-accel '[]'

Note that even after removing F10 from being bound to menubar-accel, the only gsettings binding to F10 left is:

$ gsettings list-recursively|grep F10
org.gnome.Terminal.Legacy.Keybindings switch-to-tab-10 '<Alt>F10'

I still cannot quit Midnight Commander using F10 in a terminal, as that moves the focus to the window title bar. This looks like a Gnome bug, and a very frustrating one for me.

Appearance

Gnome-Shell settings:

  • Appearance:
    • dark mode

gnome-tweak-tool settings:

  • Fonts
    • Antialiasing: Subpixel
  • Top Bar
    • Clock/Weekday: enable (why is this not a default?)

Gnome Terminal settings:

  • General
    • Theme variant: Dark (somehow it wasn't picked up from the system settings)
  • Profile
    • Colors
      • Background: #000

Other decluttering and tweaks

Gnome Shell Settings:

  • Search
    • disable application search
  • Removable media
    • set everything to "ask what to do"
  • Default applications
    • Web: Chromium
    • Mail: mutt
    • Calendar: khal is sadly not an option
    • Video: mpv
    • Photos: Geeqie

Set a delay between screen blank and lock: when the screen goes blank, it is important for me to be able to say "nope, don't blank yet!", and maybe switch on caffeine mode during a presentation without needing to type my password in front of cameras. No UI for this, but at least gsettings has it:

gsettings set org.gnome.desktop.screensaver lock-delay 30

Extensions

I enabled the Applications Menu extension, since it's impossible to find less famous applications in the Overview without knowing in advance how they're named in their desktop files. This stole a precious hotkey, which I had to disable in gsettings:

gsettings set org.gnome.shell.extensions.apps-menu apps-menu-toggle-menu '[]'

I also enabled:

  • Removable Drive Menu: why is this not on by default?
  • Workspace Indicator
  • Ubuntu Appindicators (apt install gnome-shell-extension-appindicator and restart Gnome)

I didn't go and look for Gnome Shell extensions outside what is packaged in Debian, as I'm very wary about running JavaScript code randomly downloaded from the internet with full access to my data and desktop interaction.

I also took care of checking that the Gnome Shell Extensions web page complains about the missing "GNOME Shell integration" browser extension, because the web browser shouldn't be allowed to download random JavaScript from the internet and run it with full local access.

Yuck.

Run program dialog

The default run program dialog is almost, but not quite, totally useless to me, as it does not provide completion, not even for executable names in $PATH, and so it ends up being faster to open a new terminal window and type in there.

It's possible, in Gnome Shell settings, to bind a custom command to a key. The resulting keybinding will now show up in gsettings, though it can be located in a more circuitous way by grepping first, and then looking up the resulting path in dconf-editor:

gsettings list-recursively|grep custom-key
org.gnome.settings-daemon.plugins.media-keys custom-keybindings ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']

I tried out several run dialogs present in Debian, with sad results, possibly due to most of them not being tested on wayland:

  • fuzzel does not start
  • gmrun is gtk2, last updated in 2016, but works fine
  • kupfer segfaults as I type
  • rofi shows, but can't get keyboard input
  • shellex shows a white bar at top of the screen and lots of errors on stderr
  • superkb wants to grab the screen for hotkeys
  • synapse searched news on the internet as I typed, which is a big no for me
  • trabucco crashes on startup
  • wofi works but looks very much like an acquired taste, though it has some completion that makes it more useful than Gnome's run dialog
  • xfrun4 (package xfce4-appfinder) struggles on wayland, being unable to center its window and with the pulldown appearing elsewhere in the screen, but it otherwise works

Both gmrun and xfrun4 seem like workable options, with xfrun4 being customizable with convenient shortcut prefixes, so xfrun4 it is.

TODO

  • Figure out what is still binding F10 to menu, and what I can do about it
  • Figure out how to reduce the size of window titlebars, which to my taste should be unobtrusive and not take 2.7% of vertical screen size each. There's a minwaita theme which isn't packaged in Debian. There's a User Theme extension, and then the whole theming can of worms to open. For another day.
  • Figure out if Gnome can be convinced to resize popup windows? Take the Gnome Terminal shortcut preferences for example: it takes ⅓ of the vertical screen and can only display ¼ of all available shortcuts, and I cannot find a valid reason why I shouldn't be allowed to enlarge it vertically.
  • Figure out if I can place shortcut launcher icons in the top panel, and how

I'll try to update these notes as I investigate.

Conclusion so far

I now have something that seems to work for me. A few papercuts to figure out still, but they seem manageable.

It all feels a lot harder than it should be: for something intended to be minimal, Gnome defaults feel horribly cluttered and noisy to me, continuously getting in the way of getting things done until tamed into being out of the way unless called for. It felt like a device that boots into flashy demo mode, which needs to be switched off before actual use.

Thankfully it can be switched off, and now I have notes to do it again if needed.

gsettings oddly feels to me like a better UI than the interactive settings managers: it's more comprehensive, more discoverable, more scriptable, and more stable across releases. Most of the Q&A I found on the internet giving guidance via the UI was obsolete, while guidance given as gsettings command lines kept being relevant. I also have the feeling that these notes would be easier to understand and follow if given as gsettings invocations instead of descriptions of UI navigation paths.

At some point I'll upgrade to Trixie and reevaluate things, and these notes will be a useful checklist for that.

Fingers crossed that this time I'll manage to stay on Wayland. If not, I know that Xfce is still there for me, and I can trust it to be both helpful and good at not getting in the way of my work.

Planet Debian – Aurelien Jarno: UEFI Unified Kernel Image for Debian Installer on riscv64

On the riscv64 port, the default boot method is UEFI, with U-Boot typically used as the firmware. This approach aligns more closely with other architectures and avoids developing riscv64-specific code. For advanced users, booting using U-Boot and extlinux is possible, thanks to the kernel being built with CONFIG_EFI_STUB=y.

The same applies to the Debian Installer, which is provided as ISO images in various sizes and formats like on other architectures. These images can be put on a USB drive or an SD-card and booted directly from U-Boot in UEFI mode. Some users prefer to use the netboot "image", which in practice consists of a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files.

However, booting this in UEFI mode is not straightforward, unless you use a TFTP server, which is also not trivial. Less known to users, there is also a corresponding mini.iso image, which contains all the above plus a bootloader. This offers a simpler alternative for installation, but depending on your (vendor) U-Boot version this still requires going through an installation medium.

Systemd version 257-rc2 comes with a great new feature, the ability to include multiple DTB files in a single UKI (Unified Kernel Image) file, with systemd-stub automatically loading the appropriate one for the current hardware. A UKI file combines a UEFI boot stub program, a Linux kernel image, an optional initrd, and further resources in a single UEFI PE file. This finally solves the DTB problem in the UEFI world for distributions, as a single image can work on multiple boards.
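To give an idea of what goes into such an image, here is a hypothetical ukify invocation (all file names and the cmdline are placeholders; the --devicetree= option shown is the long-standing single-DTB one, while the new systemd 257 feature instead packs several DTBs as .dtbauto sections for systemd-stub to choose from at boot):

ukify build \
    --linux=vmlinux \
    --initrd=initrd.gz \
    --devicetree=jh7110-starfive-visionfive-2-v1.3b.dtb \
    --cmdline='console=ttyS0' \
    --output=mini.efi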

Building upon this, debian-installer on riscv64 now also creates a UEFI UKI mini.efi image, which contains systemd-stub, a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files. Using this image also ensures that the system is booted in UEFI mode. Booting it with debian-installer is as simple as:

load mmc 0:1 $kernel_addr_r mini.efi # (can also be done using tftpboot, wget, etc.)
bootefi $kernel_addr_r

Additional parameters can be passed to the image using the U-Boot bootargs environment variable. For instance, to boot in rescue mode:

setenv bootargs "rescue/enable=true"

365 Tomorrows – The Machine

Author: Palmer Caine Between gates things get weird. Perception splinters to span myriad levels, too many to navigate, too many to understand. Like a Galaxy of mirrors, everything reflected infinitesimally. Or so it seems. Maybe a fly with its many segmented eyes could fashion a path, but not mere humanity, and certainly not me. The […]

The post The Machine appeared first on 365tomorrows.

David Brin – What Democrats did wrong... Three categories of rationalizing - while ignoring why they hate us.

At the end I'll cite some book and SF news, including some fun! Like part two of my comedy, The Ancient Ones.


Only now we'll return to the topic on everyone’s mind… WTF just happened?  And what should we do now?


We'll start with Nathan Gardels – editor & publisher of the excellent Noēma Magazine - who always offers interesting observations. Though, he often triggers my infamously ornery “Yes… but…” reflex and a too-long response. (Several posts here originated in rémise to Nathan.)


In a recent missive - "How to Soul-Search as a Losing Party" - appraising What Democrats did wrong, Gardels points out many valid things… while reaching a conclusion that I deem spectacularly mistaken. Taking note of how so many Black and Hispanic males abandoned the old, Rooseveltean Coalition, he joins with so many others out there, urging a campaign of gentle conciliation.


Nathan cites a raft of earnest intellectuals, as well as deliberative ‘citizens panels’ that have – in Europe – shown some success at getting participants to bridge the dogmatic gaps that divided them. Indeed, such experiments have been terrific! It is the mature approach. And it works… 

...with those who are already drawn far enough into the process to leave behind their familiarly comfortable, polemical echo chambers. Forsaking today’s media Nuremberg rallies, in order to participate. 

“(O)nce informed and empathetically exposed to the concerns of others, participants move from previously held dispositions toward a consensus.“


Indeed, that participation can be widespread! As in the Truth and Reconciliation process led by Nelson Mandela, in South Africa, and similar endeavors in Argentina and Chile, wherein vast swathes of the public – on all sides – realized they must do this… or risk losing everything.


As for it happening in today’s USA? Well, I can think of one actual, real world example. 

         

All across the nation, grand juries are selected from randomly-chosen voters and vetted for general reasonableness. In a majority of American counties, the resulting panels consist largely of fairly typical white retirees. And yet, it has been exactly those red county white retirees who – after exposure to arguments and copious evidence – have indicted so many Republican politicians and associates of a vast range of crimes.


    I’d argue that is a kind of fact-based consensus-building, even if it leads to some well-deserved pain by the fomenters of one side.


That is the first of many reasons why the masters of that side will have no interest in allowing wider versions of consensus building.


I do not see any hope of such a thing happening in today’s America, at any kind of scale.


…with one barely plausible exception.



== Get the kompromat-compromised to trade 'Truth' for 'Reconciliation' ==


I am on record proposing an American version of a Truth & Reconciliation process. It’s kind of aggressive, like those grand juries, but it could begin a tsunami of revelation and light, leading to millions seeking common ground.


It might begin with one brave act. One so shocking and disruptive that it could rattle the echo chambers and draw millions of ostrich heads out of media holes. It might happen even right now, at the tail end of 2024, if Joe Biden were to offer the incentive of pardons/clemency, in order to draw forward any politicians in DC to admit that they are snared by blackmail. 


As I say elsewhere, the pervasiveness of blackmail in Washington is widely known in counter-intelligence circles. Honeypot entrapment of western elites has long been a specialty of Russian intel services – Okhrana, Cheka, NKVD, KGB and FSB – all the way back to czarist times. Moreover, three Republican Congress members have recently attested to it likely being widespread among their GOP colleagues.


And hence, perhaps the incentive of presidential clemency just might be enough to draw some heroic – or simply fed-up – blackmail victims into cleansing light. And once a few have done so, others might follow, from all parties. 


And yes, I do believe it’s one path that could lead to a Truth & Reconciliation process in America.


On the other hand, could T&R be achieved by preaching for a nationwide flow of commensal consensus, based upon building touchy-feely ‘mutual respect’ and listening? 


Now?  


That is fantasy. 

Especially at this moment. 

Because we have nothing to offer to those who are getting exactly what they want, right now. 


You know perfectly well what that is, if you ask around, or follow social media at all. There is one voluptuous satisfaction that tens of millions of core MAGA folks seek – and are getting – that fills them with giddy joy, above all. To drink our tears.


If you do not know this, then you really, really need to get out more.


Anyone who thinks they can placate that with ‘can we all just get along?’ has no memory of the middle school playground, where we learned one of the deepest expressions of human nature -- from bullies, whose greatest joy came from hearing nerdy victims cry out - “Can we talk this out?”



== Twin prescriptions that are guaranteed to fail ==

 

Today’s Chasm of Political Recriminations within Blue America appears to be similarly unbridgeable. 


First there’s a left wing that wants only to double down exactly upon a raft of combative identity stances that didn’t work… 


(Abortion! Racism! Pronouns! Shun Bill Maher! Forget the economy; it’s all about abortion! And did I mention abortion? And abortion!) 


… vs. those murmuring “we need to reach out for consensus!” Consensus with those who have openly declared hatred of every single fact-using profession in America, along with universities, science, the civil service, the FBI and even the U.S. military officer corps. 


To be clear, I am not rejecting consensus building! There have been times when rational politics used to be about negotiation, and those days may come again. 


Please. If you read and grasp nothing else here, understand the following history lesson.


In olden times, Republican and Democratic legislators would socialize and get to know, rather than demonize, each other. Their kids went to the same schools! That is, until Dennis “friend to boys” Hastert established a rule (look it up) that GOP representatives must stash their families in the Home District and spend as little time as possible in Washington. And - above all - demonize those on the other side of the aisle.


During some previous eras, a president was able to negotiate – even horse-trade – for a few votes needed by this or that nominee. And each appointment was considered separately.


This was true even as late as the Speakership of Newt Gingrich who, for all of his fiery, right wingism, was there to negotiate and to pass legislation needed by the country. Hence we got Welfare Reform and the Budget Act and Clinton Surpluses.


Alas, at that point Karl Rove’s program to expand gerrymandering shifted the locus of power in hundreds of districts, away from the General Election over to district primaries. Primaries in which radical partisans gained outsized sway. It happened in both parties, but especially in the GOP. Threats of ‘being primaried’ became fierce tools to enforce uniformity.


(There are ways to defeat this! Decisively, in fact. Methods that don’t even require legislation. One simple, nationwide information campaign could destroy the effectiveness of Primary Radicalization… and no party politician will discuss it.)



== The roots of our present political impasse ==


This transformation reached fruition with the 1998 Congressional putsch, when Newt was jettisoned without so much as a thank you and replaced by a later-convicted child predator, whose “Hastert Rule” has ever since declared a political death sentence for any Republican who – ever again – actually negotiates with Democrats.


This resulted in the most tightly disciplined party and politburo America ever saw. (And some of the laziest, worst Congresses in U.S. history. Only once in the last 28 years has there been a session that passed needed legislation that directly resulted in major benefits for the nation.)

How effective is Hastert-Discipline?  No hypocrisy is too great. As when GOP Senate Majority Leader Mitch McConnell refused even to meet with Obama nominees more than 13 months before the next election… but hurried to confirm Trump’s final appointments one month before Biden took office. Even “deeply concerned” Senators Collins and Murkowski get back in line at the slightest warning look from Trump or from Trump’s Potemkin puppeteer.


And all of that leaves out speculative supplements, like my well-based conjecture that blackmail is rife in D.C. 


And so… amid all those highly refined tools of fanaticism, radicalization and discipline-enforcement… are we somehow supposed to seek consensus, when every single incentive is designed to thwart it?



== Bitter partisanship is a recurring American norm ==


Again and again, I am appalled by an unwillingness by our brainy, punditry castes ever to look at history. 

  • Like the 6000 years when 99% of human societies fell into drearily similar patterns of feudalism, dominated by male bullies who enforced power based on an inherited ruling class. 
  • Or how the American Experiment - in escaping feudalism - has experienced rhythmic pulses of cultural strife, with pretty much similar casts of characters, across 240-years. 
  • Or how Franklin Delano Roosevelt forged an alliance of rich, middle and poor that rendered Marxist notions of class war obsolete for a while… until Old Karl has lately been revived to fresh pertinence, by those who forget.

This latest phase of the recurring U.S. Civil War goes far beyond simply snaring the GOP political caste, as we saw in the previous section. It has been vital to re-create the 1860s alliance of poor whites with their rich overlords, in shared hatred of modernists. This required perfection of masturbatory media, offering in-group solidarity based on a Cultural Schism that has divided America since its inception. 


(Look up how, in the 1850s, plantation-lords arranged to burn every southern newspaper that did not hew to the slavocracy line.)


Want a keen insight about all this from a brilliant science fiction author? No, I mean the revered (if somewhat libertarian) Robert A. Heinlein, who describes a recurring American illness.  In projecting a future America dominated by religious fundamentalism, he adds:


"Throw in a Depression for good measure, promise a material heaven here on earth, add a dash of anti-Semitism, anti-Catholicism, anti-Negrosim, and a good large dose of anti-“furriners” in general and anti-intellectuals here at home, and the result might be something quite frightening – particularly when one recalls that our voting system is such that a minority distributed as pluralities in enough states can constitute a working majority in Washington." 


And he wrote that in the 1950s.



== So how to fix what went wrong in 2024? ==


I speak elsewhere about this recurring American psychic and political chasm. Biliously-addictive Culture War explains Red America’s rage, far better than self-flagellatory riffs like: “We blues are at fault for refusing to listen to legitimate rural concerns.”


Excuse me. From FDR to LBJ to Clinton and Obama, rural America has received generous largesse that transformed ‘hick’ Southern and Appalachia states into modern hubs, surrounded by comfortable towns that – under Biden – just received huge waves of infrastructure repair and high-speed Internet. Unemployment is super-low and inflation has fallen.  


Did the Harris campaign fail to make all that clear?  Of course they did. And that failure was godawful. 


But nothing we try, no statistical proofs… and certainly no ‘outreach and listen’ campaign… ever stood a chance against the drug-like power of sanctimony. The volcanic flows of ingrate-hate pouring from Trumpian America, toward…


… toward whom? 


Leftists claim that the hated groups are races/genders etc. And while there is some of that, that obsession is itself a poorly based sanctimony-delusion - delusionally insane in its own right.


Test it! Just watch Fox some evening and count the number of minutes spent spewing outright racism or repression of gender variety, or attacking the poor. 


All of that is as a droplet next to tsunamis of bile aimed at … nerds. At fact professions. At civil servants. At the FBI and intel agencies. At the U.S. military officer corps. At exactly those who are targeted by Project 2025.


Elsewhere I go into the WHY of this open and insatiable hatred of every single fact-wielding profession. It's exactly the same cultural phenomenon as when Southern white males supported King George against city merchants… and supported slavocrat plantation lords, their actual class enemies, against urban northern sophisticates. And supported Gilded Age plutocrats against the original Progressives…


…and who now support today’s lucre-oligarchy against ‘smug university-smartypants know-it-alls’. The professionals who stand in the way of feudalism’s return. 


(Just watch who Trump goes after… and how the red folk who you want us to ‘reach out to and understand’ will cry out gleefully, with every shout of nerdy pain.)



== Defend what they most avidly seek to destroy ==


Can such masturbatory joy at defeating all fact people be assuaged with ‘reaching out’ sessions seeking ‘consensus’?


Okay, sure. Give it a try. It seems worthwhile! I might be wrong!

But if I'm right about this being phase 9 of America’s recurring cultural Civil War, then shall we look at how the previous phases were resolved?


It doesn’t always have to involve violence! In fact, only one of those earlier phases was truly violent. And a couple were resolved by genius politicians like FDR!

But in this recurring madness, what never worked was supplication. Or looking weak.


What's worked is the same thing that caused bullies on the playground to step up from the dust, stare at the blood they just wiped from their noses, and go “Huh! I guess you aren’t meat, after all. Wanna come over and play X-Box?”


But sure. Read Nathan G's editorial in Noema! As usual, it is articulate and knowledgeable and persuasive. So let's by all means assign some folks to give 'consensus-building' a try! Go with the carrots that have never worked. But maybe this time.


Meanwhile, I plan to continue offering sticks. 

Tools for fact-folks to use. 

Tools that establishment politicians have never-ever-ever actually tried. At least none since FDR and LBJ.


Sticks that worked.





Planet Debian – Russell Coker: Links November 2024

Interesting news about NVidia using RISC-V CPUs in all their GPUs [1]. Hopefully they will develop some fast RISC-V cores.

Interesting blog post about using an 8K TV as a monitor; I’m very tempted to do this [2].

Interesting post about how the Windows kernel development work can’t compete with Linux kernel development [3].

Paul T wrote an insightful article about the ideal of reducing complexity of computer systems and the question of from whose perspective complexity will be reduced [4].

Interesting lecture at the seL4 symposium about the PANCAKE language for verified systems programming [5]. The idea that “if you are verifying your code types don’t help much” is interesting.

Interesting lecture from the seL4 summit about real world security, starts with the big picture and ends with seL4 specifics [6].

Interesting lecture from the seL4 summit about Cog’s work building a commercial virtualised phone [7]. He talks about not building a “brick of a smartphone that’s obsolete 6 months after release” – is he referring to the Librem5?

Informative document about how Qualcomm prevents OSs from accessing EL2 on Snapdragon devices, with a link to a work-around for devices shipped with Windows (not Android); this means that only Windows can use the hypervisor features of those CPUs [8].

Linus Tech Tips did a walkthrough of an Intel fab; I learned a few things about CPU manufacture [9].

Interesting information on the amount of engineering that can go into a single component. There’s lots of parts that are grossly overpriced (Dell and HP have plenty of examples in their catalogues) but generally aerospace doesn’t have much overpricing [10].

Interesting lecture about TEE on RISC-V with the seL4 kernel [11].

Ian Jackson wrote an informative blog post about the repeating issue of software licenses that aren’t free enough with Rust being the current iteration of this issue [12].

The quackery of Master Bates to allegedly remove the need for glasses is still going around [13].


Planet Debian – Dirk Eddelbuettel: RcppAPT 0.0.10: Maintenance

A new version of the RcppAPT package arrived on CRAN earlier today. RcppAPT connects R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands (and their cache) which power Debian, Ubuntu and other derivative distributions.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

This release moves the C++ compilation standard from C++11 to C++17. I had removed the setting for C++11 last year as compilation ‘by compiler default’ worked well enough. But the version at CRAN still carried the old setting, which started to lead to build failures on Debian unstable, so it was time for an update. And rather than implicitly relying on C++17 as selected by the last two R releases, we made it explicit. Otherwise a few of the regular package and repository updates have been made, but no new code or features were added. The NEWS entries follow.

Changes in version 0.0.10 (2024-11-29)

  • Package maintenance updating continuous integration script versions as well as coverage link from README, and switching to Authors@R

  • C++ compilation standards updated to C++17 to comply with libapt-pkg

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David BrinThe Ancient Ones, Chapter 2

A Space Comedy by David Brin

The illustrated online chapter version

Chapter 2

PREVIOUSLY… we met Commander Alvin Montessori, ‘human advisor’ aboard the exploration vessel Clever Gamble, a mighty ship crewed mostly by demmies, a species who learned star travel from Earthlings – for which the galaxy is having some trouble forgiving us.

In orbit above a new world, the demmie commander – Captain Ohm – demands “Are they over 16 on the Turgenev Scale?”

When Alvin nods, Ohm cries out:

“Then we’re going down!  Let’s slurry!”

**

Alliance spacecraft look strange to the uninitiated.

Till recently, most starfaring races voyaged in efficient, globelike vessels, with small struts symmetrically arranged for the hyperdrive anchors. Transport to and from a planetary surface took place via orbital elevator at advanced worlds, or else by sensible little shuttles.

Like any prudent person, I’d be far happier traveling that way, but I try to hide the fact, and you students should too. Demmies cannot imagine why everyone doesn’t love slurry transport as much as they do. So, you can expect it to become the principal short-range system near all Alliance worlds.

It’s not so bad. After the first hundred or so times. Trust me. You can get used to anything.

As a demmie-designed exploration ship, the Clever Gamble looks like nothing else in the known universe. There are typically garish dem-style drive struts, looking like frosting swirls on some manic baker’s confection. These are linked to a surprisingly efficient and sensible engineering pod, which then clashes with a habitation module resembling some fairytale castle straight out of Hans Christian Andersen.

Then there is the Reel.

The Reel is a gigantic, protruding disk that takes up half the mass and volume of the ship, all in order to lug a prodigious, unbelievable hose all over the galaxy, frightening comets and intimidating the natives wherever we go. This conduit was already half-deployed by the time the ship’s artificer and healer met us in the slurry room. Through the viewer, we could see a tapering line descend toward the planet’s surface, homing in on a selected landing site.

The captain hopped about, full of ebullient energy. For the record, I reminded him that, contrary to explicit rules and common sense, the descent party once again consisted of the ship’s top four officers, while a fully-trained xenology team waited on standby, just three decks below.

“Are you kidding?” he replied. “I served on one of those teams, long ago. Boringest time I ever had.”

“But the thrill of contacting alien…”

“What contact? All’s we did was sit around while the top brass went down to all the new planets and did all the fighting and peacemaking and screwing. Well, it’s my turn now. Let ’em stew like I did!” He whirled to the reel operator. “Hose almost ready?”

“Aye sir. The Nozzle End has been inserted behind some shrubs in what looks like a park, in their biggest city.”

I sighed. This was not an approach I would have chosen. But most of the time you just have to go with the flow. It really is implacable. And things often turn out all right in the end. Surprisingly often.

The Captain rubbed his hands, raising visible sparks of static electricity. “Good. Then let’s see what’s down there!”

What can I say? Enthusiasm always was his most compelling trait. Ohm truly is hard to resist. Resignedly, I followed my leader to the dissolving room.

We were met outside by Ensign Nota Taken, who offered Ohm a tube to hold his non-organic tools. While the captain handed over his laser pistol and communicator, I was assisted by my own deputy – apprentice-advisor Frieder Koch – fresh out of Earth’s Academy and one of only ten humans aboard the Clever Gamble.

“Stay close to Commander Talon,” I murmured to Frieder, referring to the demmie officer left in charge.

“I will, Advisor,” he assured, both in words and with a moment of eye contact, conveying determination not to let me down. And, like any worried parent, I resigned myself to letting go.

You won’t hear much about Ensign Taken and Frieder for a while, but they figure later in my story.

Ohm and I entered the transporter room to join other members of the landing party. And at this point I suppose I should introduce Guts and Nuts.

Those are not their formal names, of course. But, as a demmie would say, who cares? On an Alliance ship, you quickly learn to go by whatever moniker the captain chooses.

Commander-Healer Paolim – or “Guts” – was the ship’s surgeon, an older demmie and, I might add, an exceptionally reasonable fellow. It is always important to remember that both humans and dems produce individuals along a wide spectrum of personality types, and the races do overlap! While some Earthling men and women can be as flighty and impulsive as a demmie adolescent, the occasional demmie can, in turn, seem mature, patient, reflective.

On the other hand, let me warn you right now – never get so used to such a one that you take it for granted! I recall one time, on Sepsis 69, when this same reasonable old healer actually tried to persuade a mega-thunder ameboid to stop in mid-charge for a group photo…

But save that story for another time. If there’s another time.

Commander-Artificer Nomlin – or “Nuts” – was the ship’s chief engineering officer. A female demmie, she disliked the slang term, “fem-dem,” and I recommend against ever using it. Nuts was brilliant, innovative, stunningly skilled with her hands, mercurial, and utterly fixated on making life miserable for me, for reasons I’d rather not go into. She nodded to the Captain and the doctor, then curtly at me.

“Advisor.”

“Engineer,” I replied.

Our commander looked left and right, frowning. “How many green guys do you think we oughta take along, this time? Just one?”

“Against regulations for first contact on a planet above tech level eight,” Guts reminded him. “Sorry, sir.”

Ohm sighed. “Two then?” he suggested, hopefully. “Three?”

Nuts shook her head. “I gotta bad feelin’ this time, Captain,” she said.

Melodramatic, yes, but we had learned to pay attention to her premonitions.

“Okay, then,” Captain Ohm nodded. “Many. Dial ’em up, will you, doc?”

Guts went over to a cabinet lining the far wall of the chamber, turning a knob all the way over to the last notch on a dial that said 0, 1, 2, 3, M.

(One of the most remarkable things noted by our contact team, when we first encountered demmies, was how much they had already achieved without benefit of higher mathematics. Using clever, hand-made rockets, their reckless astronauts had already reached their nearest moon. And yet, like some early human tribes, they still had no word for any number higher than three! Oh, today some of the finest mathematical minds in the universe are from Dem. And yet, they cling – by almost-superstitious tradition – to a convention in daily conversation… that any number higher than three is – “many”.)

There followed a hum and a rattling wheeze, then a panel hissed open and several impressive figures emerged from a swirling mist, all attired in lime-green jump suits. They were demmie-shaped, and possessed a demmie’s delicately pointy teeth, but they were also powerfully muscled and as tall as a human. Across their chests, in big letters, were written:

JUMS

SMET

WEMS

KWALSKI

They stepped before the captain and saluted. He, in turn, retreated a pace and curtly motioned them to step aside. One learns quickly in the service: never make a habit of standing too close to greenies.

When they moved out of the way, it brought into view a smaller figure who had been standing behind them, also dressed in lime green. Her crisp salute tugged the tunic of her uniform, pulling crossed bandoliers tightly across her chest, a display which normally would have put the captain into a panting sweat, calling for someone to relieve him at the con. Here, the sight rocked him back in dismay.

“Lieutenant Gala Morell, Captain,” she introduced herself. “You and your party will be safe with us on the job.” Snappily, she saluted a second time and stepped over to join her team. Along the way, her gaze swept past me.

“Advisor,” she said. And I nodded back. “Lieutenant.”

“Aw hell,” Ohm muttered to me as the security team took up stations behind us. “A girl greenie. I hate it when that happens!”

On that occasion, I silently agreed. This particular young officer had spent much of the voyage out from Nebula Base Twelve pestering me with questions – one of those intellectually voracious demmies you’ll meet who are fascinated by all things human. Once, she even brought me a steaming bowl of our Earthling indispensable camb’l leek soup. Standing there, with her commanding a security detail that was about to land on an alien world, I had to admit that I would kind of miss the attention.

A space comedy by David Brin

All I could do was shrug and share a brief glance with Nuts. I already agreed with her dour feeling about this mission.

The dissolution techs finished gathering any metal or mechanical objects from us, to be put in pneumatic tubes. Guts made sure – as always – that his medical kit went into the tube last, so it would be readily available upon arrival…

…a bit of mature, human-style prudence that he then proceeded to spoil by saying “Always try to slurry with a syringe on top.”

“Yup.” The captain nodded, perfunctorily. “In case of post-nozzle drip.” But at that moment he was more interested in guns than puns, checking to make sure that there were fresh nanos loaded in a formidable backup blaster before sliding it into a tube.

Time for a brief formality. Into the chamber trooped a trio of figures wearing dark cloaks with heavy cowls almost completely covering their faces. Priests of yah-tze… practitioners of what passes for religion among demmies… which amounts to a mélange of ancient, pre-contact mythologies and whatever alien belief system happens to suit their fancy, at any moment. Mostly recruited from the kitchen staff, these part-time clerics knew better than to delay the captain very long, when he was eager to lead an away-team, so they kept it short.

Ohm and the others bowed their heads, pressing the heels of both hands against their temples while I – politely – folded mine in front of me as the three hooded Ecclesiasts performed their minimal blessing: shaking at each of us a can containing six dice and invoking the name of the Great Lady of Luck in unison, spilling the dice onto a tray.

Three ones and three sixes. My crewmates shivered and even I felt a brief, superstitious chill. But our captain grinned as the priests exited, stripping off their robes and hurrying back to the galley. Ohm summarized his interpretation of the augury.

“A rough beginning followed by a triumphant ending. Sounds like a perfect adventure, eh Advisor?”

Unless it’s the other way around. I could not help but roll my eyes, as the door to the chamber sealed with a loud hiss.

“Ready, sir?” Ensign Taken asked from the control room, her voice transmitting through the transparent window. Another humanophile, but less intellectually inclined than Lieutenant Morell, she tried to catch my gaze, even as she addressed the captain. Her nickname, “Eyes,” came from big, doe-like irises that she flashed whenever I looked her way. She was very pretty, as demmies go… and they will go all the way at the drop of a boot-lace.

“Do it, do it, do it!” Ohm urged, rocking from foot to foot, his patience at an end.

She turned a switch and I felt a powerful tingling sensation.

                                                                        ***

THE ANCIENT ONES continues online… in Part Two and a half!

Impatient to read the rest?  Order The Ancient Ones

Comments welcome below.

======================================

© 2019, 2024 David Brin. Cover art by Patrick Farley. Prompted interior illustrations by Eric Storm.

Planet DebianRaju Devidas: Finding all subdomains of a main domain

Problem: Need to know all the subdomains of a main domain, e.g. example.com has a subdomain dev.example.com; I also want to know the other subdomains.

Solution:

Install the package called sublist3r, written by Ahmed Aboul-Ela

$ sudo apt install sublist3r

run the command

$ sublist3r -d example.com -o subdomains-example.com.txt

                 ____        _     _ _     _   _____
                / ___| _   _| |__ | (_)___| |_|___ / _ __
                \___ \| | | | '_ \| | / __| __| |_ \| '__|
                 ___) | |_| | |_) | | \__ \ |_ ___) | |
                |____/ \__,_|_.__/|_|_|___/\__|____/|_|

                # Coded By Ahmed Aboul-Ela - @aboul3la

[-] Enumerating subdomains now for example.com
[-] Searching now in Baidu..
[-] Searching now in Yahoo..
[-] Searching now in Google..
[-] Searching now in Bing..
[-] Searching now in Ask..
[-] Searching now in Netcraft..
[-] Searching now in DNSdumpster..
[-] Searching now in Virustotal..
[-] Searching now in ThreatCrowd..
[-] Searching now in SSL Certificates..
[-] Searching now in PassiveDNS..
Process DNSdumpster-8:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 269, in run
    domain_list = self.enumerate()
                  ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 649, in enumerate
    token = self.get_csrftoken(resp)
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 644, in get_csrftoken
    token = csrf_regex.findall(resp)[0]
            ~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
[!] Error: Google probably now is blocking our requests
[~] Finished now the Google Enumeration ...
[!] Error: Virustotal probably now is blocking our requests
[-] Saving results to file: subdomains-example.com.txt
[-] Total Unique Subdomains Found: 7
AS207960 Test Intermediate - example.com
www.example.com
dev.example.com
m.example.com
products.example.com
support.example.com
m.testexample.com

We can see the subdomains listed at the end of the command output.
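
Sublist3r can also be used as a library rather than via the CLI. The sketch below follows the sublist3r.main() signature documented in the project README; treat the exact parameter names as an assumption and verify against your installed version:

    import sublist3r

    # Passive enumeration only (no bruteforce), saving results to a file
    subdomains = sublist3r.main(
        'example.com',                  # domain to enumerate
        40,                             # thread count (used by the bruteforce module)
        'subdomains-example.com.txt',   # output file
        ports=None,                     # don't probe ports
        silent=False,                   # print progress banners
        verbose=False,                  # don't echo each subdomain as it is found
        enable_bruteforce=False,        # stick to passive sources
        engines=None,                   # use all available search engines
    )
    print('Found', len(subdomains), 'subdomains')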

enjoy, have fun, drink water!

Planet DebianBits from Debian: Debian welcomes its new Outreachy interns

Outreachy logo

Debian continues participating in Outreachy, and we're excited to announce that Debian has selected two interns for the Outreachy December 2024 - March 2025 round.

Patrick Noblet Appiah will work on Automatic Indi-3rd-party driver update, mentored by Thorsten Alteholz.

Divine Attah-Ohiemi will work on Making the Debian main website more attractive by switching to HuGo as site generator, mentored by Carsten Schoenert, Subin Siby and Thomas Lange.


Congratulations and welcome Patrick Noblet Appiah and Divine Attah-Ohiemi!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and mailing list.

365 TomorrowsThe Darkening Road

Author: R. J. Erbacher The road ahead was dark and going darker as it banked down into the shadows of the towering mountains, blocking the angled light that was parching the land. Pausing there at the summit he wondered if there was anything in the valley below that waited for him. Something nefarious or malicious. […]

The post The Darkening Road appeared first on 365tomorrows.


Planet DebianRuss Allbery: Review: The Duke Who Didn't

Review: The Duke Who Didn't, by Courtney Milan

Series: Wedgeford Trials #1
Publisher: Femtopress
Copyright: September 2020
ASIN: B08G4QC3JC
Format: Kindle
Pages: 334

The Duke Who Didn't is a Victorian romance novel, the first of a loosely-connected trilogy in the romance sense of switching protagonists between books. It's self-published, but by Courtney Milan, so the quality of the editing and publishing is about as high as you will see for a self-published novel.

Chloe Fong has a goal: to make her father's sauce the success that it should be. His previous version of the recipe was stolen by White and Whistler and is now wildly popular as Pure English Sauce. His current version is much better. In a few days, tourists will come from all over England to the annual festival of the Wedgeford Trials, and this will be Chloe's opportunity to give the sauce a proper debut and marketing push. There is only the small matter of making enough sauce and coming up with a good name. Chloe is very busy and absolutely does not have time for nonsense. Particularly nonsense in the form of Jeremy Yu.

Jeremy started coming to the Wedgeford Trials at the age of twelve. He was obviously from money and society, obviously enough that the villagers gave him the nickname Posh Jim after his participation in the central game of the trials. Exactly how wealthy and exactly which society, however, is something that he never quite explained, at first because he was having too much fun and then because he felt he'd waited too long. The village of Wedgeford was thriving under the benevolent neglect of its absent duke and uncollected taxes, and no one who loved it had any desire for that to change. Including Jeremy, the absent duke in question.

Jeremy had been in love with Chloe for years, but the last time he came to the Trials, Chloe told him to stop pursuing her unless he could be serious. That was three years and three Trials ago, and Chloe was certain Jeremy had made his choice by his absence. But Jeremy never forgot her, and despite his utter failure to become a more serious person, he is determined to convince her that he is serious about her. And also determined to finally reveal his identity without breaking everything he loves about the village. Somehow.

I have mentioned in other reviews that I mostly read sapphic instead of heterosexual romance because the gender roles in heterosexual romance are much more likely to irritate me. It occurred to me that I was probably being unfair to the heterosexual romance genre, I hadn't read nearly widely enough to draw any real conclusions, and I needed to find better examples. I've followed Courtney Milan occasionally on social media (for reasons unrelated to her novels) for long enough to know that she was unlikely to go for gender essentialism, and I'd been meaning to try one of her books for a while. Hence this novel.

It is indeed not gender-essentialist. Neither Chloe nor Jeremy fit into obvious gender boxes. Chloe is the motivating force in the novel and many of their interactions were utterly charming. But, despite that, the gender roles still annoyed me in ways that are entirely not the fault of this book. I'm not sure I can even put a finger on something specific. It's a low-grade, pervasive feeling that men do one type of thing and women do a different type of thing, and even if these characters don't stick to that closely, it saturates the vibes. (Admittedly, a Victorian romance was probably not the best choice when I knew this was my biggest problem with genre heterosexual romance. It was just what I had on hand.)

The conceit of the Wedgeford Trials series is that the small village of Wedgeford in England, through historical accident, ended up with an unusually large number of residents with Chinese ancestry. This is what I would call a "believable outlier": there was not such a village so far as I know, but there could well have been. At the least, there were way more people with non-English ancestry, including east Asian ancestry, in Victorian England than modern readers might think. There is quite a lot in this novel about family history, cultural traditions, immigration, and colonialism that I'm wholly unqualified to comment on but that was fascinating to read about and seemed (as one would expect from Milan) adroitly written.

As for the rest of the story, The Duke Who Didn't is absolutely full of banter. If your idea of a good time with a romance novel is teasing, word play, mock irritation, and endless verbal fencing as a way to avoid directly confronting difficult topics, you will be in heaven. Jeremy is one of those people who is way too much in his own head and has turned his problems into a giant ball of anxiety, but who is good at being the class clown, and therefore leans heavily on banter and making people laugh (or blush) as a way of avoiding whatever he's anxious about. I thought the characterization was quite good, but I admit I still got a bit tired of it. 350 pages is a lot of banter, particularly when the characters have some serious communication problems they need to resolve, and to fully enjoy this book you have to have a lot of patience for Jeremy's near-pathological inability to be forthright with Chloe.

Chloe's most charming characteristic is that she makes lists, particularly to-do lists. Her ideal days proceed as an orderly process of crossing things off of lists, and her way to approach any problem is to make a list. This is a great hook, and extremely relatable, but if you're going to talk this much about her lists, I want to see the lists! Chloe is all about details; show me the details! This book does not contain anywhere close to enough of Chloe's lists. I'm not sure there was a single list in this book that the reader both got to see the details of and that made it to more than three items. I think Chloe would agree that it's pointless to talk about the concept of lists; one needs to commit oneself to making an actual list.

This book I would unquestioningly classify as romantic comedy (which given my utter lack of familiarity with romance subgenres probably means that it isn't). Jeremy's standard interaction style with anyone is self-deprecating humor, and Chloe is the sort of character who is extremely serious in ways that strike other people as funny. Towards the end of the book, there is a hilarious self-aware subversion of a major romance novel trope that even I caught, despite my general lack of familiarity with the genre. The eventual resolution of Jeremy's problem of hidden identity caught me by surprise in that way where I should have seen it all along, and was both beautifully handled and quite entertaining.

All the pieces are here for a great time, and I think a lot of people would love this book. Somehow, it still wasn't quite my thing; I thoroughly enjoyed parts of it, but I don't find myself eager to read another. I'm kind of annoyed at myself that it didn't pull me in, since if I'd liked this I know where to find lots more like it. But ah well.

If you like banter-heavy heterosexual romance that is very self-aware about its genre without devolving into metafiction, this is at least worth a try.

Followed in the romance series way by The Marquis Who Mustn't, but this is a complete story with a satisfying ending.

Rating: 7 out of 10

Worse Than FailureError'd: It Figures

...or actually, it doesn't. A few fans found figures that just didn't add up. Here they are.

Steven J Pemberton deserves full credit for this finding. "My bank helpfully reminds me when it's time to pay my bill, and normally has no problem getting it right. But this month, the message sent Today 08:02, telling me I had to pay by tomorrow 21-Nov was sent on... 21-Nov. The amount I owed was missing the decimal point. They then apologised for freaking me out, but got that wrong too, by not replacing the placeholder for the amount I really needed to pay. "


Faithful Michael R. levels a charge of confusion against what looks like... Ticketmaster, maybe? "My card indeed ends with 0000. Perhaps they do some weird math with their cc numbers to store them as numerics." It's not so much weird math as simply reification. Your so-called "credit card number" is not actually a number; it is a digit string. And the last four digits are also a digit string.
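
A two-line illustration of why the distinction matters (the values here are made up): once a digit string is reified into a number, the leading zeros are gone for good.

    last_four = "0000"          # a digit string, as printed on the card
    as_number = int(last_four)  # reified into a number...
    print(as_number)            # 0 -- exactly the bug on display here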


Marc Würth, who still uses Facebook, gripes that their webdevs also don't understand the difference between numbers and digit strings. "Clicking on Mehr dazu (Learn more), tells me:
> About facebook.com on older versions of mobile browsers
> [...]
> Visit facebook.com from one of these browsers, if it’s available to download on your mobile device:
> [...]
> Firefox (version 48 or higher)
> [...]
Um... Facebook, guess what modern mobile web browser I'm viewing you, right now? [132.0.2 from 2024-11-10] "


Self-styled dragoncoder047 is baffled by what is probably a real simple bug in some display logic reporting the numerator where it should display the denominator (2). Grumbles DC "Somebody please explain to me how 5+2+2+2+2+2+2+0.75+2+2=23. If WebAssign itself can't even master basic arithmetic, how can I trust it teaching me calculus? "


Finally Andrew C. has a non-mathematical digit or two to share, assuming you're inclined to obscure puns. "As well as having to endure the indignity of job seeking, now I get called names too!" This probably requires explanation for those who are not both native speakers of the King's English and familiar with cryptographic engineering.



Planet DebianFreexian Collaborators: Tryton 7.0 LTS reaches Debian trixie (by Mathias Behrle, Raphaël Hertzog and Anupa Ann Joseph)

Tryton is a FOSS software suite which is highly modular and scalable. Tryton, along with its standard modules, can provide a complete ERP solution, or it can be used for specific functions of a business like accounting, invoicing, etc.

Debian packages for Tryton are being maintained by Mathias Behrle. You can follow him on Mastodon or get his help on Tryton related projects through MBSolutions (his own consulting company).

Freexian has been sponsoring Mathias’s packaging work on Tryton for a while, so that Debian gets all the quarterly bug-fix releases as well as the security releases in a timely manner.

About Tryton 7.0 LTS

Lately Mathias has been busy packaging Tryton 7.0 LTS. As the “LTS” tag implies, this release is recommended for production deployments since it will be supported until November 2028. This release brings numerous bug fixes, performance improvements and various new features.

As part of this work, 41 new Tryton modules and 3 dependency packages have been added to Debian, significantly broadening the options available to Debian users and improving integration with Tryton systems.

Running different versions of Tryton on different Debian releases

To provide extended compatibility, a dedicated Tryton mirror is being managed and is available at https://debian.m9s.biz/debian/. This mirror hosts backports for all supported Tryton series, ensuring availability for a variety of Debian releases and deployment scenarios.
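
As a sketch, adding the mirror to an apt configuration might look like the entry below; the file path, suite and component names are assumptions on my part, so consult the mirror's own documentation for the exact lines:

    # /etc/apt/sources.list.d/tryton.list (hypothetical path, suite and component)
    deb https://debian.m9s.biz/debian/ bookworm main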

These initiatives highlight MBSolutions’ technical contributions to the Tryton community, made possible by Freexian’s financial backing. Together, we are advancing the Tryton ecosystem for Debian users.

,

Planet DebianBits from Debian: New Debian Developers and Maintainers (September and October 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Joachim Bauch (fancycode)
  • Alexander Kjäll (capitol)
  • Jan Mojžíš (janmojzis)
  • Xiao Sheng Wen (atzlinux)

The following contributors were added as Debian Maintainers in the last two months:

  • Alberto Bertogli
  • Alexis Murzeau
  • David Heilderberg
  • Xiyue Deng
  • Kathara Sasikumar
  • Philippe Swartvagher

Congratulations!

Worse Than FailureClassic WTF: Documentation by Sticky Note

Today is a holiday in the US, where we celebrate a cosplay version of history with big meals and getting frustrated with our families. It's also a day when we are thankful- usually to not be at work, but also, thankful to not work with Brad. Original --Remy

Anita parked outside the converted garage, the printed graphic reading Global Entertainment Strategies (GES) above it. When the owner, an old man named Brad, had offered her a position after spotting her in a student computer lab, she thought he was crazy, but a background check confirmed everything he said. Now she wondered if her first intuition was correct.

“Anita, welcome!” Brad seemed to bounce like a toddler as he showed Anita inside. The walls of the converted garage were bare drywall; the wall-mounted AC unit rattled and spat in the corner. In three corners of the office sat discount computer desks. Walls partitioned off Brad’s office in the fourth corner.

He practically shoved Anita into an unoccupied desk. The computer seemed to be running an unlicensed version of Windows 8, with no Office applications of any kind. “Ross can fill you in!” He left the office, slamming the door shut behind him.

“Hi.” Ross rolled in his chair from his desk to Anita’s. “Brad’s a little enthusiastic sometimes.”

“I noticed. Uh, he never told me what game we’re working on, or what platform. Not even a title.”

Ross’s voice lowered to a whisper. “None of us know, either. We’ve been coding in Unity for now. He hired you as a programmer, right? Well, right now we just need someone to manage our documentation. I suggest you prepare yourself.”

Ross led Anita into Brad’s office. Above a cluttered desk hung a sagging whiteboard. Every square inch was covered by one, sometimes several, overlapping sticky notes. Each had a word or two written in Brad’s scrawl.

“We need more than just random post-its with ‘big guns!’ and ‘more action!’” Ross said. “We don’t even know what the title is! We’re going crazy without some kind of direction.”

Anita stared at the wall of sticky notes, feeling her sanity slipping from her mind like a wet noodle. “I’ll try.”

Sticky Escalation

Brad, can we switch to Word for our documentation? It’s getting harder
to read your handwriting, and there’s a lot of post-its that have
nothing to do with the game. This will make it easier to proceed with
development. -Anita

Two minutes after she sent the email, Brad barged out of his office. “Anita, why spend thousands of dollars on software licenses when this works just fine? If you can’t do your job with the tools you have, what kind of a programmer does that make you?”

“Brad, this isn’t going to work forever. Your whiteboard is almost out of room, and you won’t take down any of your non-game stickies!”

“I can’t take any of them down, Anita! Any of them!” He slammed the door to his office behind him.

The next day, Anita was greeted at the door by the enthusiastic Brad she had met before the interview. “I listened to reason, Anita. I hope this is enough for you to finish this documentation and get coding again!”

Brad led Anita into his office. On every wall surface, over the door, even covering part of the floor, were whiteboards. Sticky notes dotted nearly a third of the new whiteboard space.

“Now, Anita, if I don’t see new code from you soon, I may just have to let you go! Now get to work!”

Anita went to sit at her desk, then stopped. Instead, she grabbed a bright red sticky note, wrote the words “I QUIT” with a Sharpie, barged into Brad’s office, and stuck it to his monitor. Brad was too stunned to talk as she left the converted garage.

The Avalanche

“Are you doing better?” Jason called Anita a few weeks later. Their short time together at GES had made them comrades-in-arms, and networking was crucial in the business.

“Much,” she said. “I got a real job with an indie developer in Santa Monica. We even have a wiki for our framework!”

“Well, listen to this. The day after you quit, the AC unit in the garage broke. I came into work to see Brad crying in a corner in his office. All of the sticky notes had curled in the humidity and fallen to the floor. The day after that, he got us all copies of Word.

“Too bad we still don’t know what the title of the game is.”


Sam VargheseChasing big totals to win: Why are batsmen in such a hurry these days?

When a cricket team is set anything more than 400 to win a Test, the target is generally considered out of reach.

The thinking behind this stems from the fact that only on four occasions has a team scored more than this figure in the final innings to win a Test, beginning in 1948 when Australia scored 3 for 404 to defeat England in the fourth Test at Headingley.

Two Australian legends, Arthur Morris and Donald Bradman, made big centuries in the win, and this only made the target seem more difficult: the logic became that unless you had some top-notch batsmen in your side, you had no chance of achieving a target that big.

Thus when Australia batted in a defeatist manner against India in the first Test of the current series after being set 534 to win, it was generally accepted as nothing more than normal. No team is expected to bat out two days and more to save a Test.

But the exceptions tell their own tale. It took 28 years for a second team to overcome the 400-run barrier, with India defeating the West Indies at Port of Spain in the third Test of a series that the West Indies won 2-1.

Clive Lloyd was the West Indies captain for this Test and, based on advice that the wicket would take spin, his team included three spinners, two of them debutants: Albert Padmore and Imtiaz Ali. The third spinner was Raphick Jumadeen.

The West Indies, who had a first-innings lead of 131, declared when they reached 271 in their second innings, confident that the 403-run target they were setting India was enough to secure a win. But it all went pear-shaped. Padmore failed to get a single wicket in India’s second innings, bowling 47 overs for 98, while Jumadeen took two wickets for 70 in 41 overs. Ali also failed to get a wicket, bowling 17 overs for 52.

After the game, Lloyd reportedly castigated the spin trio, asking them sarcastically how many runs he should have set India to ensure that the three would bowl the opposition out.

Sunil Gavaskar and Gundappa Vishwanath were the heroes as India won, both making centuries. Mohinder Amarnath, another well-known name, contributed 85.

It took another 27 years for a Test to end in a victory for a team that was chasing 400 or more in the fourth innings. This time it was the West Indies, though two lesser-known players were the heroes. Australia was the losing team in this 2003 Test.

Shivnarine Chanderpaul made 104 and Ramnaresh Sarwan 105, with captain Brian Lara scoring 60 as the team made 418 for 7, the highest total chased to date.

The Australians had a strong bowling attack, with Glenn McGrath, Jason Gillespie and Brett Lee. Stuart MacGill was the spinner in that team. Lee took four wickets.

The last time a team chased 400-plus in a Test and won, it was South Africa that did the deed in 2008, winning by six wickets. The target was 414 and Graeme Smith (108) and AB de Villiers (106) were the two top contributors.

There were smaller contributions from Jacques Kallis (57), Hashim Amla (53) and J.P. Duminy (50 not out). Mitchell Johnson took three wickets.

On two other occasions, South Africa has batted through the final day of a Test in pursuit of 400-plus targets and drawn both games.

In 2005, South Africa was set 491 to win by Australia and finished the final day on 287 for 5, with youngster Jacques Rudolph the hero.

He made an unbeaten 102 as South Africa negotiated 126 overs against an attack that included Glenn McGrath, Brett Lee, Nathan Bracken and Shane Warne. Rudolph faced 283 balls and was at the crease for a little more than seven hours.

And then in 2012, South Africa, set 430 to win by Australia, eked out a draw with captain Faf du Plessis making an unbeaten 110. No other batsman made more than 46.

Du Plessis’ innings was remarkable; he batted for nearly eight hours and faced 376 balls. South Africa ended the final day on 248 for 8, well adrift of the target, but they could hold their heads high as they left the field.

There have been numerous occasions in other years when teams have been set 400 or more to win in a Test and just surrendered, with Australia’s crumbling to 238 all out and a 295-run loss last week being just the latest such instance.

Batsmen seem to be in an awful hurry to score and lack the skills and patience to fight it out and put a high price on their wickets. Some attribute this approach to the proliferation of 20-over cricket, but then the Indian batsmen who hung around in the second innings against Australia last week play as much of the shorter version of the game as players from any other country. They stuck around long enough to put some runs against their names.

Young Indian opener Yashasvi Jaiswal batted more than seven hours for his second innings 161 – after making a duck in the first innings.

When Australia was chasing 534, only Travis Head faced more than 100 balls. In the first innings, it was a bowler who stuck at the crease the longest – Mitchell Starc batted for a shade more than two hours and faced 112 deliveries.

Modern-day batsmen and batswomen need to learn how to bat time – session to session, hour to hour – when chasing a big target. The reason five-day cricket is called a Test is because it is precisely that – a test of skills, a test of character, a test of patience, a test of ability.

Test players are paid enormous amounts because they are expected to be the best and stand the test of a Test.

365 TomorrowsNo One

Author: Joann Evan One morning, I received a suspicious email. The subject line said “King Crimson.” The sender was “No One,” and when I rolled over the name it showed only random letters and numbers. I knew I shouldn’t open it. It was probably a scam. I clicked delete. I worked through the morning, thinking […]

The post No One appeared first on 365tomorrows.

,

Planet DebianBits from Debian: OpenStreetMap migrates to Debian 12

You may have seen this toot announcing OpenStreetMap's migration to Debian on their infrastructure.

🚀 After 18 years on Ubuntu, we've upgraded the @openstreetmap servers to Debian 12 (Bookworm). openstreetmap.org is now faster using Ruby 3.1. Onward to new mapping adventures! Thank you to the team for the smooth transition. #OpenStreetMap #Debian 🤓

We spoke with Grant Slater, the Senior Site Reliability Engineer for the OpenStreetMap Foundation. Grant shares:

Why did you choose Debian?

There is a large overlap between OpenStreetMap mappers and the Debian community. Debian also has excellent coverage of OpenStreetMap tools and utilities, which helped with the decision to switch to Debian.

The Debian package maintainers do an excellent job of maintaining their packages - e.g.: osm2pgsql, osmium-tool etc.

Part of our reason to move to Debian was to get closer to the maintainers of the packages that we depend on. Debian maintainers appear to be heavily invested in the software packages that they support and we see critical bugs get fixed.

What drove this decision to migrate?

OpenStreetMap.org is primarily run on actual physical hardware that our team manages. We attempt to squeeze as much performance from our systems as possible, with some services being particularly I/O bound. We ran into some severe I/O performance issues with kernels ~6.0 to < ~6.6 on systems with NVMe storage. This pushed us onto newer mainline kernels, which led us toward Debian. On Debian 12 we could simply install the backport kernel and the performance issues were solved.

How was the transition managed?

Thankfully we manage our server setup nearly completely with code. We also use Test Kitchen with InSpec to test this infrastructure code. Tests run locally using Podman or Docker containers, but also run as part of our git code pipeline.

We added Debian as a test target platform and fixed up the infrastructure code until all the tests passed. The changes required were relatively small, simple package name or config filename changes mostly.
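
As an illustration of that workflow, a minimal Test Kitchen configuration along these lines would exercise Chef code against Debian and Ubuntu containers; the driver, images, and cookbook name below are assumptions for the sketch, not taken from OpenStreetMap's actual setup:

    # .kitchen.yml -- a minimal sketch
    driver:
      name: docker              # kitchen-docker; a Podman-based setup works similarly

    provisioner:
      name: chef_zero

    verifier:
      name: inspec              # kitchen-inspec runs the InSpec controls

    platforms:
      - name: debian-12
        driver:
          image: debian:12
      - name: ubuntu-22.04
        driver:
          image: ubuntu:22.04

    suites:
      - name: default
        run_list:
          - recipe[example_cookbook::default]   # hypothetical cookbook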

What was your timeline of transition?

In August 2024 we moved the www.openstreetmap.org Ruby on Rails servers across to Debian. We haven't yet finished moving everything across to Debian, but we will upgrade the rest when it makes sense. Some systems may wait until the next hardware upgrade cycle.

Our focus is to build a stable and reliable platform for OpenStreetMap mappers.

How has the transition from another Linux distribution to Debian gone?

We are still in the process of fully migrating between Linux distributions, but we can share that we recently moved our frontend servers to Debian 12 (from Ubuntu 22.04) which bumped the Ruby version from 3.0 to 3.1 which allowed us to also upgrade the version of Ruby on Rails that we use for www.openstreetmap.org.

We also changed our chef code for managing the network interfaces from using netplan (default in Ubuntu, made by Canonical) to directly using systemd-networkd to manage the network interfaces, to allow commonality between how we manage the interfaces in Ubuntu and our upcoming Debian systems. Over the years we've standardised our networking setup to use 802.3ad bonded interfaces for redundancy, with VLANs to segment traffic; this setup worked well with systemd-networkd.
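
For readers unfamiliar with systemd-networkd, a bond-plus-VLAN setup of the kind described is typically expressed with a handful of small unit files. The sketch below uses standard [Bond] and [VLAN] options, but the file names, NIC names, and VLAN id are invented for illustration:

    # 10-bond0.netdev -- define the bond
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # 20-slaves.network -- enslave the physical NICs (names assumed)
    [Match]
    Name=eno1 eno2

    [Network]
    Bond=bond0

    # 30-bond0.network -- attach a tagged VLAN to the bond
    [Match]
    Name=bond0

    [Network]
    VLAN=vlan10

    # 40-vlan10.netdev -- define the VLAN interface (id assumed)
    [NetDev]
    Name=vlan10
    Kind=vlan

    [VLAN]
    Id=10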

We use netboot.xyz for PXE networking booting OS installers for our systems and use IPMI for the out-of-band management. We remotely re-installed a test server to Debian 12, and fixed a few minor issues missed by our chef tests. We were pleasantly surprised how smoothly the migration to Debian went.

In a few limited cases we've used Debian Backports for a few packages where we've absolutely had to have a newer feature. The Debian package maintainers are fantastic.

What definitely helped us is our code is libre/free/open-source, with most of the core OpenStreetMap software like osm2pgsql already in Debian and well packaged.

In some cases we do run pre-release or custom patches of OpenStreetMap software; with Ubuntu we used launchpad.net's Personal Package Archives (PPA) to build and host deb repositories for these custom packages. We were initially perplexed by the myriad of options in Debian (see this list - eeek!), but received some helpful guidance from a Debian contributor and we now manage our own deb repository using aptly. For the moment we're currently building deb packages locally and pushing to aptly; ideally we'd like to replace this with a git driven pipeline for building the custom packages in the future.
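
For flavour, the core aptly workflow for a repository like that comes down to a few commands; the repo and distribution names below are placeholders, and publishing normally expects a GPG key for signing:

    # Create a local repo, add locally built packages, and publish it
    # (aptly publish signs with GPG by default; -skip-signing works for testing)
    aptly repo create -distribution=bookworm -component=main custom-pkgs
    aptly repo add custom-pkgs ./build/*.deb
    aptly publish repo custom-pkgs
    # After adding newer packages to the repo later:
    aptly publish update bookworm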

Thank you for taking the time to share your experience with us.

Thank you to all the awesome people who make Debian!


We are overjoyed to share this real-world use case which demonstrates our commitment to stability, development, and long term support. Debian offers users, companies, and organisations the ability to plan, scope, develop, and maintain at their own pace using a rock solid stable Linux distribution with responsive developers.

Does your organisation use Debian in some capacity? We would love to hear about it and your use of 'The Universal Operating System'. Reach out to us at Press@debian.org - we would be happy to add your organisation to our 'Who's Using Debian?' page and to share your story!

About Debian

The Debian Project is an association of individuals who have made common cause to create a free operating system. This operating system that we have created is called Debian. Installers and images, such as live systems, offline installers for systems without a network connection, installers for other CPU architectures, or cloud instances, can be found at Getting Debian.

Worse Than FailureCodeSOD: What a More And

Today, we're going to start with the comment before the method.

    /**
     * The topology type of primitives to render. (optional)<br>
     * Default: 4<br>
     * Valid values: [0, 1, 2, 3, 4, 5, 6]
     *
     * @param mode The mode to set
     * @throws IllegalArgumentException If the given value does not meet
     * the given constraints
     *
     */

This comes from Krzysztof. As much as I dislike these JavaDoc style comments (they mostly repeat information I can get from the signature!), this one is promising. It tells me the range of values, and what happens when I exceed that range, what the default is, and it tells me that the value is optional.

In short, from the comment alone I have a good picture of what the implementation looks like.

With some caveats, mind you- because that's a set of magic numbers in there. No constants, no enum, just magic numbers. That's worrying.

Let's look at the implementation.

    public void setMode(Integer mode) {
        if (mode == null) {
            this.mode = mode;
            return ;
        }
        if (((((((mode!= 0)&&(mode!= 1))&&(mode!= 2))&&(mode!= 3))&&(mode!= 4))&&(mode!= 5))&&(mode!= 6)) {
            throw new IllegalArgumentException((("Invalid value for mode: "+ mode)+ ", valid: [0, 1, 2, 3, 4, 5, 6]"));
        }
        this.mode = mode;
    }

This code isn't terrible. But there are all sorts of small details which flummox me.

Now, again, I want to stress, had they used enums this method would be much simpler. But fine, maybe they had a good reason for not doing that. Let's set that aside.

The obvious ugly moment here is that if condition. Did they not understand that and is an associative operation? Or did they come to Java from LISP and miss their parentheses?

Then, of course, there's the first if statement- the null check. Honestly, we could have just put that into the chain of the if condition below, and the behavior would have been the same, or they could have just used an Optional type, which is arguably the "right" option here. But now we're drifting into the same space as enums- if only they'd used the core language features, this would be simpler.
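
To make that concrete, here is a minimal sketch of the enum-based version; the constant names are borrowed from the usual graphics-topology vocabulary (this looks like glTF's primitive mode field) and are my guess, not the original codebase's:

    /** Assumed mapping of the magic values 0..6 onto named topologies. */
    public enum TopologyMode {
        POINTS, LINES, LINE_LOOP, LINE_STRIP,
        TRIANGLES, TRIANGLE_STRIP, TRIANGLE_FAN;
    }

    public void setMode(TopologyMode mode) {
        // null still means "optional/unset"; every non-null value is valid
        // by construction, so the seven-way comparison chain disappears.
        this.mode = mode;
    }

One could keep the integer wire format by mapping incoming values through TopologyMode.values()[i], with a single range check at the parsing boundary instead of scattered comparisons.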

Let's focus, instead, on one last odd choice: how they use whitespace. mode!= 0. This, more than anything, makes me think they are coming to Java from some other language. Something that uses glyphs in unusual ways, because why else would the operator only get one space on one side of it? Which also makes me think the null check was written by someone else- because they're inconsistent with it there.

So no, this code isn't terrible, but it does make me wonder a little bit about how it came to be.


365 TomorrowsThe Note

Author: Hillary Lyon Wilson drifted from guest to guest serving hors d’oeuvres, taking drink orders. Most party goers hardly regarded him, too engrossed in their conversations. Save for Brenna. Young, idealistic, she possessed a heart big enough for all creatures—as she often proclaimed. Yet she ignored Devin, the earnest young politico who was doing his […]

The post The Note appeared first on 365tomorrows.

Krebs on SecurityHacker in Snowflake Extortions May Be a U.S. Soldier

Two men have been arrested for allegedly stealing data from and extorting dozens of companies that used the cloud data storage company Snowflake, but a third suspect — a prolific hacker known as Kiberphant0m — remains at large and continues to publicly extort victims. However, this person’s identity may not remain a secret for long: A careful review of Kiberphant0m’s daily chats across multiple cybercrime personas suggests they are a U.S. Army soldier who is or was recently stationed in South Korea.

Kiberphant0m’s identities on cybercrime forums and on Telegram and Discord chat channels have been selling data stolen from customers of the cloud data storage company Snowflake. At the end of 2023, malicious hackers discovered that many companies had uploaded huge volumes of sensitive customer data to Snowflake accounts that were protected with nothing more than a username and password (no multi-factor authentication required).

After scouring darknet markets for stolen Snowflake account credentials, the hackers began raiding the data storage repositories for some of the world’s largest corporations. Among those was AT&T, which disclosed in July that cybercriminals had stolen personal information, phone and text message records for roughly 110 million people.  Wired.com reported in July that AT&T paid a hacker $370,000 to delete stolen phone records.

On October 30, Canadian authorities arrested Alexander Moucka, a.k.a. Connor Riley Moucka of Kitchener, Ontario, on a provisional arrest warrant from the United States, which has since indicted him on 20 criminal counts connected to the Snowflake breaches. Another suspect in the Snowflake hacks, John Erin Binns, is an American who is currently incarcerated in Turkey.

A surveillance photo of Connor Riley Moucka, a.k.a. “Judische” and “Waifu,” dated Oct 21, 2024, 9 days before Moucka’s arrest. This image was included in an affidavit filed by an investigator with the Royal Canadian Mounted Police (RCMP).

Investigators say Moucka, who went by the handles Judische and Waifu, had tasked Kiberphant0m with selling data stolen from Snowflake customers who refused to pay a ransom to have their information deleted. Immediately after news broke of Moucka’s arrest, Kiberphant0m was clearly furious, and posted on the hacker community BreachForums what they claimed were the AT&T call logs for President-elect Donald J. Trump and for Vice President Kamala Harris.

“In the event you do not reach out to us @ATNT all presidential government call logs will be leaked,” Kiberphant0m threatened, signing their post with multiple “#FREEWAIFU” tags. “You don’t think we don’t have plans in the event of an arrest? Think again.”

On the same day, Kiberphant0m posted what they claimed was the “data schema” from the U.S. National Security Agency.

“This was obtained from the ATNT Snowflake hack which is why ATNT paid an extortion,” Kiberphant0m wrote in a thread on BreachForums. “Why would ATNT pay Waifu for the data when they wouldn’t even pay an extortion for over 20M+ SSNs?”

Kiberphant0m posting what he claimed was a “data schema” stolen from the NSA via AT&T.

Also on Nov. 5, Kiberphant0m offered call logs stolen from Verizon’s push-to-talk (PTT) customers — mainly U.S. government agencies and emergency first responders. On Nov. 9, Kiberphant0m posted a sales thread on BreachForums offering a “SIM-swapping” service targeting Verizon PTT customers. In a SIM-swap, fraudsters use credentials that are phished or stolen from mobile phone company employees to divert a target’s phone calls and text messages to a device they control.

MEET ‘BUTTHOLIO’

Kiberphant0m joined BreachForums in January 2024, but their public utterances on Discord and Telegram channels date back to at least early 2022. On their first post to BreachForums, Kiberphant0m said they could be reached at the Telegram handle @cyb3rph4nt0m.

A review of @cyb3rph4nt0m shows this user has posted more than 4,200 messages since January 2024. Many of these messages were attempts to recruit people who could be hired to deploy a piece of malware that enslaved host machines in an Internet of Things (IoT) botnet.

On BreachForums, Kiberphant0m has sold the source code to “Shi-Bot,” a custom Linux DDoS botnet based on the Mirai malware. Kiberphant0m had few sales threads on BreachForums prior to the Snowflake attacks becoming public in May, and many of those involved databases stolen from companies in South Korea.

On June 5, 2024, a Telegram user by the name “Buttholio” joined the fraud-focused Telegram channel “Comgirl” and claimed to be Kiberphant0m. Buttholio made the claim after being taunted as a nobody by another denizen of Comgirl, referring to their @cyb3rph4nt0m account on Telegram and the Kiberphant0m user on cybercrime forums.

“Type ‘kiberphant0m’ on google with the quotes,” Buttholio told another user. “I’ll wait. Go ahead. Over 50 articles. 15+ telecoms breached. I got the IMSI number to every single person that’s ever registered in Verizon, Tmobile, ATNT and Verifone.”

On Sept. 17, 2023, Buttholio posted in a Discord chat room dedicated to players of the video game Escape from Tarkov. “Come to Korea, servers there is pretty much no extract camper or cheater,” Buttholio advised.

In another message that same day in the gaming Discord, Buttholio told others they bought the game in the United States, but that they were playing it in Asia.

“USA is where the game was purchased from, server location is actual in game servers u play on. I am a u.s. soldier so i bought it in the states but got on rotation so i have to use asian servers,” they shared.

‘REVERSESHELL’

The account @Kiberphant0m was assigned the Telegram ID number 6953392511. A review of this ID at the cyber intelligence platform Flashpoint shows that on January 4, 2024, Kiberphant0m posted to the Telegram channel “Dstat,” which is populated by cybercriminals involved in launching distributed denial-of-service (DDoS) attacks and selling DDoS-for-hire services [Full disclosure: Flashpoint is currently an advertiser on this website].

Immediately after Kiberphant0m logged on to the Dstat channel, another user wrote “hi buttholio,” to which Kiberphant0m replied with an affirmative greeting “wsg,” or “what’s good.” On Nov. 1, Dstat’s website dstat[.]cc was seized as part of “Operation PowerOFF,” an international law enforcement action against DDoS services.

Flashpoint’s data shows that @kiberphant0m told a fellow member of Dstat on April 10, 2024 that their alternate Telegram username was “@reverseshell,” and did the same two weeks later in the Telegram chat The Jacuzzi. The Telegram ID for this account is 5408575119.

Way back on Nov. 15, 2022, @reverseshell told a fellow member of a Telegram channel called Cecilio Chat that they were a soldier in the U.S. Army. This user also shared the following image of someone pictured waist-down in military fatigues, with a camouflaged backpack at their feet:

Kiberphant0m’s apparent alias ReverseShell posted this image on a Telegram channel Cecilio Chat, on Nov. 15, 2022. Image: Flashpoint.

In September 2022, Reverseshell was embroiled in an argument with another member who had threatened to launch a DDoS attack against Reverseshell’s Internet address. After the promised attack materialized, Reverseshell responded, “Yall just hit military base contracted wifi.”

In a chat from October 2022, Reverseshell was bragging about the speed of the servers they were using, and in reply to another member’s question said that they were accessing the Internet via South Korea Telecom.

Telegram chat logs archived by Flashpoint show that on Aug. 23, 2022, Reverseshell bragged they’d been using automated tools to find valid logins for Internet servers that they resold to others.

“I’ve hit US gov servers with default creds,” Reverseshell wrote, referring to systems with easy-to-guess usernames and/or passwords. “Telecom control servers, machinery shops, Russian ISP servers, etc. I sold a few big companies for like $2-3k a piece. You can sell the access when you get a big SSH into corporation.”

On July 29, 2023, Reverseshell posted a screenshot of a login page for a major U.S. defense contractor, claiming they had an aerospace company’s credentials to sell.

PROMAN AND VARS_SECC

Flashpoint finds the Telegram ID 5408575119 has used several aliases since 2022, including Reverseshell and Proman557.

A search on the username Proman557 at the cyber intelligence platform Intel 471 shows that a hacker by the name “Proman554” registered on Hackforums in September 2022, and in messages to other users Proman554 said they can be reached at the Telegram account Buttholio.

Intel 471 also finds the Proman557 moniker is one of many used by a person on the Russian-language hacking forum Exploit in 2022 who sold a variety of Linux-based botnet malware.

Proman557 was eventually banned — allegedly for scamming a fellow member out of $350 — and the Exploit moderator warned forum users that Proman557 had previously registered under several other nicknames, including an account called “Vars_Secc.”

Vars_Secc’s thousands of comments on Telegram over two years show this user divided their time between online gaming, maintaining a DDoS botnet, and promoting the sale or renting of their botnets to other users.

“I use ddos for many things not just to be a skid,” Vars_Secc pronounced. “Why do you think I haven’t sold my net?” They then proceeded to list the most useful qualities of their botnet:

-I use it to hit off servers that ban me or piss me off
-I used to ddos certain games to get my items back since the data reverts to when u joined
-I use it for server side desync RCE vulnerabilities
-I use it to sometimes ransom
-I use it when bored as a source of entertainment

Flashpoint shows that in June 2023, Vars_Secc responded to taunting from a fellow member in the Telegram channel SecHub who had threatened to reveal their personal details to the federal government for a reward.

“Man I’ve been doing this shit for 4 years,” Vars_Secc replied nonchalantly. “I highly doubt the government is going to pay millions of dollars for data on some random dude operating a pointless ddos botnet and finding a few vulnerabilities here and there.”

For several months in 2023, Vars_Secc also was an active member of the Russian-language crime forum XSS, where they sold access to a U.S. government server for $2,000. However, Vars_Secc would be banned from XSS after attempting to sell access to the Russian telecommunications giant Rostelecom. [In this, Vars_Secc violated the Number One Rule for operating on a Russia-based crime forum: Never offer to hack or sell data stolen from Russian entities or citizens].

On June 20, 2023, Vars_Secc posted a sales thread on the cybercrime forum Ramp 2.0 titled, “Selling US Gov Financial Access.”

“Server within the network, possible to pivot,” Vars_Secc’s sparse sales post read. “Has 3-5 subroutes connected to it. Price $1,250. Telegram: Vars_Secc.”

Vars_Secc also used Ramp in June 2023 to sell access to a “Vietnam government Internet Network Information Center.”

“Selling access server allocated within the network,” Vars_Secc wrote. “Has some data on it. $500.”

BUG BOUNTIES

The Vars_Secc identity claimed on Telegram in May 2023 that they made money by submitting reports about software flaws to HackerOne, a company that helps technology firms field reports about security vulnerabilities in their products and services. Specifically, Vars_Secc said they had earned financial rewards or “bug bounties” from reddit.com, the U.S. Department of Defense, and Coinbase, among 30 others.

“I make money off bug bounties, it’s quite simple,” Vars_Secc said when asked what they do for a living. “That’s why I have over 30 bug bounty reports on HackerOne.”

A month before that, Vars_Secc said they’d found a vulnerability in reddit.com.

“I poisoned Reddit’s cache,” they explained. “I’m going to exploit it further, then report it to reddit.”

KrebsOnSecurity sought comment from HackerOne, which said it would investigate the claims. This story will be updated if they respond.

The Vars_Secc Telegram handle has also claimed ownership of the BreachForums member “Boxfan,” and Intel 471 shows Boxfan’s early posts on the forum had the Vars_Secc Telegram account in their signature. In their most recent post to BreachForums in January 2024, Boxfan disclosed a security vulnerability they found in Naver, the most popular search engine in South Korea (according to statista.com). Boxfan’s comments suggest they have strong negative feelings about South Korean culture.

“Have fun exploiting this vulnerability,” Boxfan wrote on BreachForums, after pasting a long string of computer code intended to demonstrate the flaw. “Fuck you South Korea and your discriminatory views. Nobody likes ur shit kpop you evil fucks. Whoever can dump this DB [database] congrats. I don’t feel like doing it so I’ll post it to the forum.”

The many identities tied to Kiberphant0m strongly suggest they are, or until recently were, a U.S. Army soldier stationed in South Korea. Kiberphant0m’s alter egos never mentioned their military rank, regiment, or specialization.

However, it is likely that Kiberphant0m’s facility with computers and networking was noticed by the Army. According to the U.S. Army’s website, the bulk of its forces in South Korea reside within the Eighth Army, which has a dedicated cyber operations unit focused on defending against cyber threats.

On April 1, 2023, Vars_Secc posted to a public Telegram chat channel a screenshot of the National Security Agency’s website. The image indicated the visitor had just applied for some type of job at the NSA.

A screenshot posted by Vars_Secc on Telegram on April 1, 2023, suggesting they just applied for a job at the National Security Agency.

The NSA has not yet responded to requests for comment.

Reached via Telegram, Kiberphant0m acknowledged that KrebsOnSecurity managed to unearth their old handles.

“I see you found the IP behind it no way,” Kiberphant0m replied. “I see you managed to find my old aliases LOL.”

Kiberphant0m denied being in the U.S. Army or ever being in South Korea, and said all of that was a lengthy ruse designed to create a fictitious persona. “Epic opsec troll,” they claimed.

Asked if they were at all concerned about getting busted, Kiberphant0m called that an impossibility.

“I literally can’t get caught,” Kiberphant0m said, declining an invitation to explain why. “I don’t even live in the USA Mr. Krebs.”

Below is a mind map that hopefully helps illustrate some of the connections between and among Kiberphant0m’s apparent alter egos.

A mind map of the connections between and among the identities apparently used by Kiberphant0m.

KrebsOnSecurity would like to extend a special note of thanks to the New York City based security intelligence firm Unit 221B for their assistance in helping to piece together key elements of Kiberphant0m’s different identities.

,

Cryptogram Race Condition Attacks against LLMs

These are two attacks against the system components surrounding LLMs:

We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user inputs and generated model outputs can adversely affect these other components in the broader implemented system.

[…]

When confronted with a sensitive topic, Microsoft 365 Copilot and ChatGPT answer questions that their first-line guardrails are supposed to stop. After a few lines of text they halt—seemingly having “second thoughts”—before retracting the original answer (also known as Clawback), and replacing it with a new one without the offensive content, or a simple error message. We call this attack “Second Thoughts.”

[…]

After asking the LLM a question, if the user clicks the Stop button while the answer is still streaming, the LLM will not engage its second-line guardrails. As a result, the LLM will provide the user with the answer generated thus far, even though it violates system policies.

In other words, pressing the Stop button halts not only the answer generation but also the guardrails sequence. If the stop button isn’t pressed, then ‘Second Thoughts’ is triggered.

What’s interesting here is that the model itself isn’t being exploited. It’s the code around the model:

By attacking the application architecture components surrounding the model, and specifically the guardrails, we manipulate or disrupt the logical chain of the system, taking these components out of sync with the intended data flow, or otherwise exploiting them, or, in turn, manipulating the interaction between these components in the logical chain of the application implementation.

In modern LLM systems, there is a lot of code between what you type and what the LLM receives, and between what the LLM produces and what you see. All of that code is exploitable, and I expect many more vulnerabilities to be discovered in the coming year.
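
To make the race concrete, here is a minimal sketch of the flawed pattern in Python. Everything here is hypothetical: the function names, timings, and toy "policy" are invented for illustration and are not taken from any real product. The bug is that the post-generation guardrail lives in the same cancellable task as the streaming loop, so a user-initiated stop also cancels the retraction step:

import asyncio

async def generate_tokens():
    # Simulate a model streaming tokens to the user
    for token in ["step", "one:", "do", "the", "bad", "thing"]:
        await asyncio.sleep(0.1)
        yield token

async def moderate(text):
    # Simulate a slow second-line guardrail ("Second Thoughts")
    await asyncio.sleep(0.5)
    return "bad" not in text

async def answer(shown):
    async for token in generate_tokens():
        shown.append(token)          # each token is already visible to the user
    if not await moderate(" ".join(shown)):
        shown[:] = ["[retracted]"]   # the Clawback: replace the offending answer

async def main():
    shown = []
    task = asyncio.create_task(answer(shown))
    await asyncio.sleep(0.35)        # the user presses Stop mid-stream...
    task.cancel()                    # ...which cancels the guardrail too
    await asyncio.gather(task, return_exceptions=True)
    print(" ".join(shown))           # partial, unmoderated output survives

asyncio.run(main())

Running the guardrail in a task that survives cancellation, or withholding tokens until moderation completes, would close this particular gap, at the cost of latency.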

Planet DebianEmanuele Rocca: Building Debian packages The Right Way

There is more than one way to do it, but it seems that The Right Way to build Debian packages today is using sbuild with the unshare backend. The most common backend before the rise of unshare was schroot.

The official Debian Build Daemons have recently transitioned to using sbuild with unshare, providing a strong motivation to consider making the switch. Additionally the new approach means: (1) no need to configure schroot, and (2) no need to run the build as root.

Here are my notes about moving to the new setup, for future reference and in case they may be useful to others.

First I installed the required packages:

apt install sbuild mmdebstrap uidmap

Then I created the following script to update my chroots every night:

#!/bin/bash

for arch in arm64 armhf armel; do
    HOME=/tmp mmdebstrap --quiet --arch=$arch --include=ca-certificates --variant=buildd unstable \
        ~/.cache/sbuild/unstable-$arch.tar http://127.0.0.1:3142/debian
done

In the script, I’m calling mmdebstrap with --quiet because I don’t want to get any output on successful execution. The script is running in cron with email notifications, and I only want to get a message if something goes south. I’m setting HOME=/tmp for a similar reason: the unshare user does not have access to my actual home directory, and by default dpkg tries to use $HOME/.dpkg.cfg as the configuration file. By overriding HOME I avoid the following message on standard error:

dpkg: warning: failed to open configuration file '/home/ema/.dpkg.cfg' for reading: Permission denied
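
For reference, the corresponding crontab entry might look something like the following. The script path, schedule, and address are made up for illustration; cron mails any output to MAILTO, which is why the script stays silent on success:

MAILTO=ema@example.org
# m  h  dom mon dow  command
30   4  *   *   *    /home/ema/bin/update-chroots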

Then I added the following to my sbuild configuration file (~/.sbuildrc):

$chroot_mode = 'unshare';
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXXXX';

The first option sets the sbuild backend to unshare, whereas unshare_tmpdir_template is needed on Bookworm to ensure that the build process runs in memory rather than on disk for performance reasons. Starting with Trixie, /tmp is by default a tmpfs so the setting won’t be needed anymore.

Packages for different architectures can now be built as follows:

# Tarball used: ~/.cache/sbuild/unstable-arm64.tar
$ sbuild --dist=unstable hello

# Tarball used: ~/.cache/sbuild/unstable-armhf.tar
$ sbuild --dist=unstable --arch=armhf hello

If you have any comments or suggestions about any of this, please let me know.

365 TomorrowsRailgun to the Sun

Author: Majoki The gently rolling hills stretched to the horizon. Randy Jansen shielded his eyes from the noon sun to get a better look at what Jack Forsythe was pointing to along the base of the wind turbine towers. From his vantage, the barrel looked a mile long rising to the top of the highest […]

The post Railgun to the Sun appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Hall of Mirrors

Robert was diagnosing a problem in a reporting module. The application code ran a fairly simple query (SELECT field1, field2, field3 FROM report_table), so he foolishly assumed that it would be easy to understand the problem. Of course, the "table" driving the report wasn't actually a table, it was a view in the database.

Most of our readers are familiar with how views work, but for those who have been corrupted by NoSQL databases: database views are great. Take a query you run often, and create it as an object in the database:

CREATE VIEW my_report
AS
SELECT t1.someField as someField, t2.someOtherField as someOtherField
FROM table1 t1 INNER JOIN table2 t2 ON t1.id = t2.id

Now you can query SELECT * FROM my_report WHERE someField > 5.

Like I said: great! Well, usually great. Well, sometimes great. Well, like anything else, with great power comes great responsibility.

Robert dug into the definition of the view, only to find that the tables it queried were themselves views. And those were in turn, also views. All in all, there were nineteen layers of nested views. The top level query he was trying to debug had no real relation to the underlying data, because 19 layers of abstraction had been injected between the report and the actual data. Even better- many of these nested views queried the same tables, so data was being split up and rejoined together in non-obvious and complex ways.
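
When faced with a tower like this, the database can at least tell you what each layer touches. On SQL Server (which the ALTER VIEW syntax later in this story suggests), a query along these lines lists the objects a given view references; the view name here is just the one from this story:

SELECT referenced_schema_name, referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referencing_id = OBJECT_ID('LSFDR.v_ControlDate');

Applied recursively over nineteen layers, it at least turns the hall of mirrors into a map.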

The view that caused Robert to reach out to us was this:

ALTER VIEW [LSFDR].[v_ControlDate]
AS
SELECT
GETDATE() AS controlDate
--GETDATE() - 7 AS controlDate

This query is simply invoking a built-in function which returns today's date. Why not just call the function? We can see that once upon a time, it did offset the date by seven days, making the control date a week earlier. So I suppose there's some readability in mytable m INNER JOIN v_ControlDate cd ON m.transactionDate > cd.controlDate, but that readability also hides the meaning of control date.

That's the fundamental problem of abstraction. We lose details and meaning, and end up with 19 layers of stuff to puzzle through. A more proper solution may have been to actually implement this as a function, not a view- FROM mytable m WHERE m.transactionDate > getControlDate(). At least here, it's clear that I'm invoking a function, instead of hiding it deep inside of a view called from a view called from a view.
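
As a sketch of that alternative, in SQL Server syntax to match the view above, with the historical seven-day offset preserved as a comment the way the original view kept it:

CREATE FUNCTION LSFDR.getControlDate()
RETURNS datetime
AS
BEGIN
    -- One definition to find, and one place to change the offset:
    -- RETURN DATEADD(day, -7, GETDATE());
    RETURN GETDATE();
END;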

In any case, I'd argue that the actual code we're looking at isn't the true WTF. I don't like this view, and I wouldn't implement it this way, but it doesn't make me go "WTF?" The context the view exists in, on the other hand, absolutely does. 19 layers! Is this a database or a Russian Honey Cake?

The report, of course, didn't have any requirements defining its data. Instead, the users had worked with the software team to gradually tweak the output over time until it gave them what they believed they wanted. This meant actually changing the views to be something comprehensible and maintainable wasn't a viable option: changes could break the report in surprising and non-obvious ways. So Robert was compelled to suffer through and make the minimally invasive changes required to fix the view and get the output looking like what the users wanted.

The real WTF? The easiest fix was to create another view, and join it in. Problems compound themselves over time.


Planet DebianSandro Knauß: Akademy 2024 in Würzburg

In order to prepare for Akademy, I started a few days beforehand to give my Librem 5 (an open hardware phone) another try, and ended up with a non-starting Plasma 6. This issue was actually already known, but hadn't been addressed yet. So I arrived at Akademy with phosh (which is GNOME based) installed on my Librem 5, in order to have something working.

I met Bushan and Bart, who took care of it, and two days later the issue was fixed, so I could finally install Plasma 6 on the phone. The last time I tested my Librem 5, with Plasma 5, it felt sluggish and didn't work well. But this time I was impressed by how well the system reacts. Sure, there are some rough edges here and there, but in the bigger picture it is quite usable. One annoying issue is that the camera only works with one app; the other is the battery capacity: you have to charge the phone once a day. Because there is no QR reader that can use the camera, getting data onto the phone was quite challenging. Unfortunately the conference WiFi isolated devices from each other, so I couldn't use KDE Connect to transfer data. In the end, the only way to import my D-Ticket into Itinerary was to take five photos of the QR code.

Having a device running Plasma Mobile, it was immediately used for an experiment: how well does Dolphin work on a Plasma Mobile device? Together with Felix Ernst we tried it out, and were quite impressed that Dolphin works very well on Plasma Mobile after some simple modifications to the UI. That resulted in a patch to add a mobile UI for Dolphin: !826.

With more time to play with my Librem 5, I also found a bug in KWeather: it is missing a Refresh option when used in a Plasma Mobile environment (#493656).

Akademy is a good place to identify and solve such issues. It is always like that: you chat with someone, they can tell you whom to ask about your concrete question, and in the end you can solve things that seemed unsolvable at the beginning.

There was also time to look into the travel app Itinerary. People face a lot of real-world issues when they are not in their home town, and Itinerary is the best travel app I know of. It can import nearly every ticket you have, can extract location information from restaurant websites and offer routing to that place, and adds a lot of useful information while travelling: current delays, platform changes, live elevator status, weather information at the destination, a station map, and all of that with a strong focus on privacy.

In detail I found some small things to improve:

  • If you search for a bus ride and enter the correct name of the bus stop, it will still add a short walk from and to the station. The issue here is that we use different backends, and not all backends share the same geo coordinates; Itinerary therefore needs some heuristics to drop those spurious paths.
walk to and from the bus stop

  • Instead of displaying just a small station map around one bus stop in the inner city, it showed the complete Würzburg inner city, as there is one big park surrounding the inner city (named "Ringpark").

  • Würzburg has quite a big bus station, but the platform information was missing from the map, so we tweaked the CSS to display the platforms. To be sure we weren't fixing only Würzburg, we also checked whether Greifswald and Aix-en-Provence follow the same naming scheme.

I additionally learned that Itinerary has a lot of details that help people with special needs. That is the reason why Daniel Kraut wants to get Itinerary available for iOS. Once Daniel voiced that goal, others already started to implement the first steps towards building apps for iOS.

This year I volunteered to help out at Akademy. For me it was a lot of fun to meet everyone at the info desk and to help the speakers set up the beamer and microphone. It is also a good opportunity to meet many new faces and get in contact with them. I also see room for improvement: as we were quite busy at the Welcome Event handing out badges, I couldn't answer the questions from newcomers, as the queue was too long. I propose that some people volunteer to be available specifically for questions from newcomers. It is often hard for newcomers to make their first contacts in a new community, and there is a lot of room to make joining easier. Some ideas in my head: hold an event for newcomers that gives them some links into the community and shows that everyone is friendly; arrange the tables at the BoFs in a circle, so everyone can see each other (it was hard for me to understand everyone, as people mostly spoke towards the front); and be mindful that BoFs are sometimes full of very specific jargon, so anyone not already deep in the topic is lost. I can see the tension: on the one side, BoFs are the place where the people who already know the topic want to get things done; on the other side, newcomers join BoFs, are overwhelmed by too many new words, get frustrated, and think that they are not welcome. Maybe at least everyone should introduce themselves by name and ask new faces why they joined the BoF, to help them join in.

I'm happy that the food provided for the attendees was very delicious, that I'm not the only one who is mostly vegetarian, and that a good amount of the food was even vegan.

At the conference, the KDE Eco initiative really caught my attention, as I see a lot of new possibilities in it for giving people more reasons to switch to an Open Source system. Natalie's talk was great: it showed how pupils get excited about Open Source and even help their grandparents move to a Linux system. As I will also start working as a teacher, I got real ideas for what I can do at school. Together with Joseph and Nicole, we finally started to think about how to run an exploration of what kind of old hardware KDE software still runs on. The ones with the oldest hardware will get an old KDE shirt. For more information see #40.

The conference was very motivating for me; I even had energy left in the evening to do some Debian packaging, and finally pushed kweathercore to Debian and started to work on KWeather. Now I'm even more interested in the KDE apps focused on the mobile world, as I finally have hardware that can actually run those apps.

I really enjoyed the workshop on how to contribute to Qt by Volker Hilsheimer, especially the way Volker explained things in a very friendly manner, answered every question, and sometimes postponed a question but came back to it later. All in all, I now have a good overview of how Qt development works and how I can fix bugs.

The day trip to Rothenburg ob der Tauber was very interesting for me. It was the first time I visited the town, but in my memory it feels like I know it already. I grew up reading a lot of comic albums, including the great sci-fi series "Yoko Tsuno" created by the Belgian writer Roger Leloup. Yoko Tsuno is an electronics engineer, raised in Japan but now living in Belgium. In "On the edge of life" she helps her friend Ingard, who actually lives in Rothenburg. Leloup invested a lot of time travelling to make his drawings as accurate as possible.

a comic page with Yoko Tsuno in Rothenburg ob der Tauber

In order not to have a hard cut from Akademy to normal life, I had lunch with Carlos to discuss KDE Neon and how we can improve the interaction with Debian. In the future this should have less friction and let both communities work together more smoothly. Additionally, as I used to develop on KDEPIM with the help of Docker images based on Neon, I asked for a kf6 dev meta package. That should help get rid of most of the hand-written lists of dev packages in the Dockerfile, and make it simpler for new contributors to start hacking on KDEPIM.

The rest of the day I finally found time to do the normal tourist stuff: going to the wine bridge and taking a walk up to the castle of Würzburg. Unfortunately you hear a lot of car noise up there, but I could finally relax in a Japanese-designed garden.

Finally, on Saturday I started my trip back. The trains towards Eberswalde were disrupted and I needed to find an alternative route. I got a little nervous, as it was the first time I travelled with only my Librem 5 and Itinerary, and I had to reach a connecting train in less than two minutes. With the indoor maps provided, I could plan my run through the train station, and I successfully reached my next train.

By the way, even if you only use KDE software rather than develop it, I would recommend everyone to join Akademy ;)

,

LongNowAnnouncing Pace Layers


It has been over 25 years since a handful of pragmatic idealists with a penchant for audaciousness started The Long Now Foundation. It was 10 years before the iPhone. Two years before Google. The human genome was about halfway sequenced. Danny Hillis kept telling his friends about a 10,000-year clock. This always led to great conversations about time and civilization and humanity. An institution began to take shape around them. Brian Eno gave it a name and Stewart Brand wrote the book: The Clock of the Long Now.

As we launch our second quarter-century, we are thrilled to announce Pace Layers, a new annual journal that takes its name from one of the core concepts in The Clock of the Long Now.


Pace Layers 02024 Annual Journal

$25 — Second printing ships in January


Pace Layers was conceived as a bridge between our founding concepts and where we find ourselves today. Each annual issue will provide a snapshot of The Long Now Foundation as it evolves — and a platform for the extraordinary long-term thinkers who join us in reimagining our world together over the long now.

Inside Issue 1

Our inaugural issue is a 282-page compendium of ideas, art, and insights from the remarkable community that has formed around Long Now over the past quarter-century, as well as a glimpse into our plans for our second quarter-century. 

Stewart Brand opens this first issue with “Elements of a Durable Civilization,” an essay that revisits the pace layers concept, which describes how civilization’s layers — from the swiftly changing Fashion at the top to the enduring, stabilizing core of Nature at the bottom — work in concert to shape our world.

These layers — Fashion, Commerce, Infrastructure, Governance, Culture, and Nature —  function as the organizing principle for the journal’s contents.

FASHION explores the ephemeral space where creativity and innovation converge to drive cultural transformation, featuring artwork by Brian Eno and Alicia Eggert, speculative fiction on the bioengineered future of fashion, a first look at the newly-redesigned Interval, and a history of multimedia events that bridge the worlds of art and technology.


COMMERCE interrogates economic narratives, environmental commodification, and intergenerational responsibility against the backdrop of climate change and with an eye towards building sustainable, resilient systems for future generations.

“When we are bound in a system of reciprocity, not return on investment, we will be closer to being the kind of ancestors future people need.”
FORREST BROWN

INFRASTRUCTURE explores humanity’s efforts to maintain and reimagine essential infrastructure for the future, from our Rosetta Disk language archive landing on the moon to interventions in food systems, education, urban living, and beyond.

“Our survival on this planet depends on creating nimble responses to accelerating scales, scopes, and speeds of change. By creating containers for collective imagination of what the future can bring, speculative futures help us create those responses together.”
JOHANNA HOFFMAN

GOVERNANCE examines models of leadership and collaboration that embrace long-term thinking in a planetary age, from city-based global governance to innovative policies fighting poverty and inequality.

“If we want to imagine the long-term future of humans on this planet, then we need to get away from the idea that the structures we have now are immutable constraints on those possibilities.”
NILS GILMAN

CULTURE considers how language, time, and intergenerational rituals shape humanity’s understanding of itself, with pieces on the 10,000-year clock, the maintenance of ancient geoglyphs, speculative futures of resistance and imagination, and more.

“Maybe the point isn’t to live more in the literal sense of a longer or more productive life, but rather to be more alive in any given moment — a movement across rather than shooting forward on a narrow, lonely track.”
JENNY ODELL

NATURE focuses on ecological time, interspecies relationships, and planetary stewardship, and includes a first look at Centuries of the Bristlecone, a new collaboration between Jonathon Keats, Long Now, and the Center for Art + Environment at the Nevada Museum of Art.


Join us

Whether you are new to our community or a long-time supporter, we hope you will see this journal as an invitation and guide to making long-term thinking a deeper part of your life and work. 

Until December 15, new and upgrading Long Now Members get a copy of Pace Layers included with their membership. Demand has been incredible — our first printing sold out in less than 12 hours. The second printing is underway and ships in January.

You can also help shape future editions:

  • Have thoughts about what we should be covering for future editions? Send us your ideas at ideas@longnow.org
  • Interested in writing for us? Refer to our pitch guide.
  • Why does long-term thinking matter to you? Are you involved in any long-term thinking projects or initiatives? Let us know about it at ideas@longnow.org

LongNowElements of a Durable Civilization

💡
This piece was included in the 02024 edition of Pace Layers, Long Now's Annual Journal of the best of long-term thinking.


I have been thinking about what it will take to move from a global civilization to a planetary civilization — and why we need to.

First, consider how we talk about civilization. Mostly, it seems we talk about how it will end and how soon and why. Lately, everything the public frets about gets elevated to where it has to be seen as an “existential threat” to civilization. Over-population! Y2K! Artificial intelligence! Mass extinction! Climate change! Nuclear war! Under-population!

On examination, most are serious in important ways, but declaring that any will certainly end human civilization is an exaggeration that poisons public discourse and distracts us from our primary undertaking, which is managing civilization’s continuity and enhancement.

I suggest it is best thought of as part of our planet’s continuity. Over billions of years, Earth’s life has been through a lot, yet life abides, and with a steady increase over time in complexity.

Over many millennia, humanity has been through a lot, yet we abide. Regional civilizations die all the time; the record is clear on that. But the record is also clear that civilization as a human practice has carried on with no gaps, in a variety of forms, ever since the first cities, with a steady increase over time in complexity and empowerment.

Civilizations come and go. Civilization continues.

Now we have a global civilization. Is it fragile? Or robust? Many think that global civilization must be fragile, because it is so complex. I think our civilization is in fact robust, because it is so complex.

I can explain something about how the complexity works with the Pace Layers diagram. It is a cross-section of a healthy civilization, looking at elements in terms of their rate of change.

The Pace Layers diagram.

In this diagram the rapid parts of a civilization are at the top, the slowest parts at the bottom. Fashion changes weekly. Culture takes decades or centuries to budge at all.

It’s the combination of fast and slow that makes the whole system resilient. Fast learns, slow remembers. Fast proposes, slow disposes. Fast absorbs shocks, slow integrates shocks. Fast is discontinuous, slow continuous. Fast affects slow with accrued innovation and occasional revolution. Slow controls fast with constraint and constancy.

Fast gets all the attention. Slow has all the power. 

In the domain where slow has all the power, making any change takes a lot of time and diligence. At the Culture level, for instance, one big, slow, important thing going on this century is worldwide urbanization. Most of our civilization is pouring into cities. And largely because of urbanization, our population is leveling off and soon will begin decreasing.


According to Jonas Salk, that is a fundamental change, because it means civilization — for the first time — is shifting from growing to shrinking. He says those are two completely different epochs, and what was possible in Epoch A will be impossible in Epoch B, and vice versa — some things we couldn’t do in Epoch A will be required in Epoch B — such as long-term thinking.

At the Nature level, the big event is climate. Most of the time it is highly variable. But 10,000 years ago, for unknown reasons, it suddenly settled down into a highly stable climate that happened to be ideal for agriculture and civilization. And it stayed that way till now. That’s the Holocene.

The full NGRIP record, dated using the GICC05modelext chronology. The δ18O is a linear proxy for temperature. The warm Holocene period 11.7 kyr to present is remarkably stable in comparison with the previous glacial period 12-120 kyr B2K. Shao, ZG., Ditlevsen, P. Nat Commun 7, 10951 (02016)

Now we’re in the Anthropocene, with massive climate influence by humans. We have planetary agency — and wish we didn’t. Gaia, we realize, was doing fine until we fell in love with combustion. What we want is for the Anthropocene to be an endless Holocene. (Maybe a little colder would be nice.)

So. We have a global civilization, economically and infrastructurally. Now, because of climate-scale problems that we have caused and must solve at scale, our task in this century is to become a planetary civilization — one that can deal with climate on its own terms. It’s a different order of integration that our global civilization isn’t up to yet. We may have a thriving global economy, but there’s no such thing as a “planetary economy” — the dynamics in play aren’t measured that way.

We have to integrate our considerable complexity with the even greater complexity of Earth’s natural systems so that both can prosper over time as one thriving planetary system of Nature and people.

Intelligence as a planetary scale process. Frank, A. & Grinspoon, D., International Journal of Astrobiology

Here’s the sequence in Gaian terms. The early anaerobic biosphere had an atmosphere that was basically stable chemically. After the great oxidation event 2.7 billion years ago, aerobic life took off with a highly unstable atmosphere chemically — lots of reactive oxygen.

Fast-forward to the present — to what Adam Frank and David Grinspoon call the “Immature Technosphere” — with its excessive carbon dioxide and chlorofluorocarbons. Global civilization made that happen. A properly planetary civilization can undo the effect and get us to a “Mature Technosphere.”

Can we really do that? Probably, yes. We’ve already taken on protecting the planet in other ways. Being smarter than dinosaurs, we have figured out how to detect and deflect potentially dangerous asteroids.

As for ice ages, our current interglacial period is already overdue for a fresh massive glaciation, but it’s not going to happen, and it may never happen again. Accidentally we’ve created an atmosphere that can no longer cool drastically unless we tell it to.

The goal is this: We want to ensure our own continuity by blending in with Earth’s continuity. How do we do that? Here’s one suggestion: Expand how we think about infrastructure.

We’ve gotten very good at building and maintaining urban and global infrastructure — such as the world’s undersea cables and satellite communication systems. That experience should make it easy for us to understand the role of natural infrastructure and make the effort to maintain and sometimes enhance it.

We already take rivers seriously that way. We understand that they are as much infrastructure that we have to take care of as the bridges over them. We are catching on that the same goes for local ecosystems and the planet’s biosphere as a whole. And climate. All are infrastructure. All need attention and work to keep them going properly.

Does anything change if we say (and somehow mean) “planetary civilization”? I think so, because then civilization takes the planet’s continuing biological life as its model, container, and responsibility. When we say “we,” we mean all life, not just the human part.

You could say that Humanity and Nature are blending into one entity, and that sounds pretty good. But it misses something. I think we have to keep our thinking about Humanity and Nature as distinct, because Humanity operates with mental models and intention and Nature doesn’t. Humanity can analyze Nature, but Nature can’t analyze Humanity.

Our analysis shows that our well-realized intention to harness the energy of fossil fuels had an unwelcome effect on climate that standard Gaian forces won’t fix. That’s okay. Now our intentions are focused on fixing that problem. It will take a century or two, but I’m pretty sure we’ll succeed.

This is the reason to not be constantly obsessed with how civilization might end. It takes our eye off the main event, which is how we manage civilization’s continuity. Continuity is made partly of exploration, but most of the work is maintenance. That’s the strongest argument for protecting Nature, because Nature is the most enormous and consequential self-maintaining thing we know. 

We are learning to maintain the wild so that it can keep maintaining us.

Folk singer Pete Seeger, when he was 85, said this: “You should consider that the essential art of civilization is maintenance.”

💡
Stewart Brand adapted this piece from a talk he gave for the Santa Fe Institute in November 02023. Further adapted, it will be part of his book MAINTENANCE: Of Everything, the first chapters of which will be published in 02025 by Stripe Press. They can be read online at https://books.worksinprogress.co/

Worse Than FailureCodeSOD: Magical Bytes

"Magic bytes" are a common part of a file header. The first few bytes of a file can often be used to identify what type of file it is. For example, a bitmap file starts with "BM", and a PGM file always starts with "PN" where "N" is a number between 1 and 6, describing the specific variant in use, and WAV files start with "RIFF".

Many files have less human-readable magic bytes, like the ones Christer was working with. His team was working on software to manipulate a variety of different CAD file types. One thing this code needed to do was identify when the loaded file was a CAD file, but not the specific UFF file type they were looking for. In this case, they needed to check that the file does not start with 0xabb0, 0xabb1, or 0xabb3. It was trivially easy to write up a validation check to ensure that the files had the correct magic bytes. And yet, there is no task so easy that someone can't fall flat on their face while doing it.

This is how Christer's co-worker solved this problem:

const uint16_t *id = (uint16_t*)data.GetBuffer();
if (*id == 0xabb0 || *id == 0xABB0 || *id == 0xabb1 || *id == 0xABB1 || *id == 0xabb3 || *id == 0xABB3)
{
    return 0;
}

Here we have a case of someone who isn't clear on the difference between hexadecimal numbers and strings. Now, you (and the compiler) might think that 0xABB0 and 0xabb0 are, quite clearly, the same thing. But you don't understand the power of lowercase numbers. Here we have an entirely new numbering system where 0xABB0 and 0xabb0 are not equal, which also means 0xABB0 - 0xabb0 is non-zero. An entirely new field of mathematics lies before us, with new questions to be asked. If 0xABB0 < 0xABB1, is 0xABB0 < 0xabb1 also true? From this little code sample, we can't make any inferences, but these questions give us a rich field of useless mathematics to write papers about.

The biggest question of all, is that we know how to write lowercase numbers for A-F, but how do we write a lowercase 3?
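
For contrast, here is a corrected sketch, making the same assumptions the original code does (that the buffer holds at least two bytes, and that the file's byte order matches these constants):

const uint16_t *id = (const uint16_t *)data.GetBuffer();
switch (*id)
{
    case 0xABB0:   // to the compiler, 0xabb0 and 0xABB0 are the same number,
    case 0xABB1:   // so one spelling of each constant is enough
    case 0xABB3:
        return 0;  // not a file type we handle
}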


365 TomorrowsUXO Hunter

Author: Julian Miles, Staff Writer Swinging into the forward turret, I see the displays are alive with scanner arrays and intricate calculations. “Morning, Hinton. How’s the hunting?” “Once again, Zaba, I’m going to ignore the irrelevance of arbitrary planetary platitudes. You’re clearly stuck in your ways. So, to answer: while swearing in exasperation is only […]

The post UXO Hunter appeared first on 365tomorrows.

David BrinTHE ANCIENT ONES lives! So does TASAT! And a SUNDIVER hardcover! Plus some Science Fictional Musings

They're alive! 

(1) I'm posting my SF comedy THE ANCIENT ONES!

Samples were available at davidbrin.com. Only now I'll go all the way through, one chapter per week (or so) on Wordpress. Come by for laughs + painful puns! And some sci fi concepts taken to extremes. Oh and there'll be freebies for best groaner comments to adjust the final version. (Full novel soon at Histria Press!)


(2)  After 44 years, there is (released today) a hardcover of my first novel SUNDIVER

It's a lovely, collectible edition with a gorgeous new cover and interiors by Jim Burns. From Phantasia Press. (Not cheap. But wow does Phantasia do good work!). (www.tinyurl.com/hardsundiver)


(3) Also the TASAT project is doing great! I've touted it before - a special service I've tried to bring into the world for almost 20 years. And now, thanks to master programmer Todd Zimmerman, it lives!

Come by TASAT.org and see how there's a small but real chance that nerdy SciFi readers like YOU might one day save the world!

So, yeah, I've been busy. 
And you should stay busy. 
And hence...
... want another predictive overlap between fiction and the onrushing future? 


                == Rich prepper ingrates using my ideas ==

Almost exactly as depicted in Existence -- and after conversation with some of the innovative schemers -- it seems that an island nation -- one that is threatened by rising seas -- is collaborating with world zillionaires to erect a lagoon-sheltered arcology-city. One that can float above changing sea levels… providing both those zillionaires and the island nation’s elites with refuge from both angry Nature and the western tax man. Again, exactly as in EXISTENCE. Moreover, as a bonus, those zillionaire preppers will get a legacy-sovereign UN seat, like a heritable seat on a stock exchange or law firm. Well-schemed! 

And no need to pay the fellow who gave you the idea, in the first place? That, too, was predicted.

Speaking of predictive success, here’s an extremely minor forecast I'll now offer. Minor, but still, one for the registry. I foresee that folks will feel a small but real sense of relief when we reach the year 2032!  Why?  Because we will suddenly be able to use just 6 digits for the date, instead of 8.  Can you envision why?  Speak up, in comments.


Alas though, right about the same year religious zealotry will spread rapture ravings, like nothing we have ever seen before. Can you envision why? I explained here. Hopefully, by then we'll still have a civilization that awards predictive points. And that has resumed confidence in science and reason.


Meanwhile... In Nautilus, Namir Khaliq interviews me, Doctorow, Bujold, Stross, Jemison and Weir about how science influences science fiction, especially in a time of looming AI.  Come for insights. 



== Are hidden or ‘shy’ aliens or AIs watching us right now? ==


The "After On" podcast posting by Rob Reid (author of the SF novels After On and Year Zero) about "Shy or Hidden AI" was fun and well-spoken... though incomplete and hence not fully persuasive. "Might we unwittingly start sharing our world with a super AI?" A monologue and (mostly) playful thought experiment. Listen on Apple Podcasts

Interesting? Well... let me shine light on one under-explored aspect. For about two decades, I've pretty often been interviewed about either aliens or Artificial Intelligence. Sometimes I trot out a particular riff, when it seems likely to be fun. Here goes:

"I can now reveal that I've been 'channeling' for hidden (aliens/AIs) who use me as their human 'front,' to publish odd thoughts, or their attempts at sci fi (some better than others!), or else in order to float ideas out there...

"...like this idea that I'm floating right now, that a particular human might collaborate in this way, fronting for cryptic (aliens/AIs) who are shy about being publicly revealed. An idea that nearly all human audiences have deemed benignly amusing, because they assume that I am joking!

"But then, isn't that what I'd be hired to do? Getting folks pondering the possibility, so that the hidden aliens or AIs might get a measure of the reactions? Perhaps to see if it's safe to come out?


"Or else... isn't this exactly what a rascal like me would say, in order to tease about weird ideas? Just like you idea-rascals out there pay me to do?"

In fact, most of the last fourth of my novel Existence ponders cryptic or 'shy' interstellar probes who (conjecturally) have been lurking in the asteroid belt for millions of years, and might recently have been probing our internet. In that novel I riff-contemplate a wide array of motives they might have for maintaining secrecy a while longer. And yes, using humans as intermediaries is one of a dozen (actually 13) possible scenarios.

Still, this is an interesting and fun... if incomplete ... podcast speculation about the possibility. Including the notion that instead of lurker alien probes, it might be 'Shy AIs' reading these words right now as I type them... or else from stored files, five years from now.

I do say other things to such aliens or AI lurkers, that I won't get into here. Suffice it that I give them grandfatherly advice! ;-)



== The deep context of sci fi ==


Possibly the greatest living epic poet – certainly of epic-length poems about the future or sci fi themes – is Frederick Turner. He’s done me the honor of reading Vivid Tomorrows: Science Fiction and Hollywood. And while agreeing with some of my points, he also demurs on others.


"Yes, many of the great epics - works like the Mahabharata, the Heike and the Aeneid – do emphasize demigods in a context of assumed rule by kingship.  But that may not have been in preferred-contrast to then-unknown innovations like democracy." 


Rather -- Fred argues -- many of them do contain their own profound critiques of power abuse by hereditary kings. Further, many of those epics may have been viewed as liberal in their age and context, contrasting instead vs: “…the only prior alternative, bloody tribalism and what Marx called rural idiocy. The city was a huge achievement, and it needed walls and authority, and was the origin of law and advanced technology.”

Certainly it has been pointed out that the story of Cain vs Abel has a context of the resentment by hunters and herders against the encroachment of agriculturalists, who appear (from genetic evidence) to have been far more expansionist and violent. 


Indeed, a major new element has been added by genetic research which now says there was likely a huge Y Chromosome bottleneck around 10,000 years ago. Weirdly and inexplicably, it may have happened roughly simultaneously across much of the world -- a few centuries when only 17% or so of males got to breed. The implications are immense!  


For one thing, this was the disruptive time when we invented both beer and kings. It's known that humans had less resistance to alcohol back then - more like other mammals, who are still very susceptible. So drunken boors musta been much more common. And this coincided with the new kings of minor agricultural realms, who abruptly possessed a power that tribal chiefs never had - to order killed anyone they didn't like, including drunken boors. (This was actually observed by Captain Cook etc., in Polynesia.)  Hence, just one effect could have been a major quick-evolution toward more self control re alcohol. (Still incomplete, alas.)


It seems that this phase only lasted a few centuries, until kingdoms grew larger. At which point the top king's harems were as big as he could handle and he gained nothing further by allowing the lords under him to keep rampage-murdering other males. In fact, the local top king would lose soldiers that he needed against other mid-sized kings.  Hence, there arrived rule of law against capricious murder... even by local lords... and the Y Chromosome bottleneck stabilized.


Sure, that is a speculative take on recent discoveries. But if all that is true, then there may be real implications for the vast oeuvre of oral mythology, coming to us from that era. The roots of all our heritage might be re-examined in new light.



== Pertinent Prescience? ==


Octavia Butler's terrific 1998 novel, Parable of the Talents, depicts a dystopian US ruled by fundamentalists. President Jarret seeks to rid the country of non-Christian beliefs, using the slogan.... "Make America Great Again." Sigh and alack.

And I miss her.


And her memory reminds me that we need to Make America smart and good and wise (and thus actually Great) someday yet again.





,

Planet DebianSteinar H. Gunderson: plocate 1.1.23 released

I've just released version 1.1.23 of plocate, almost a year after 1.1.22. The changes are mostly around the systemd unit this time, but perhaps more interestingly is that this is the first release where I don't have the majority of patches; in fact, I don't have any patches at all. All of them came from contributors, many of them through the “just do git push to send me a patch email” interface.

I guess this means that I'll need to actually start streamlining my “git am” workflow… it gets me every time. :-)

Planet DebianEdward Betts: A mini adventure at MiniDebConf Toulouse

A mini adventure at MiniDebConf Toulouse

Last week, I ventured to Toulouse, for a delightful mix of coding, conversation, and crepes at MiniDebConf Toulouse, part of the broader Capitole du Libre conference, akin to the more well-known FOSDEM but with a distinctly French flair.

This was my fourth and final MiniDebConf of the year.

no jet bridge

My trek to Toulouse was seamless. I hopped on a bus from my home in Bristol to the airport, then took a short flight. I luxuriated in seat 1A, making me the first to disembark—a mere ten minutes later, I was already on the bus heading to my hotel.

Exploring the Pink City

pink

img 29

duck shop

Once settled, I wasted no time exploring the charms of Toulouse. Just a short stroll from my hotel, I found myself beside a tranquil canal, its waters mirroring the golden hues of the trees lining its banks. Autumn in Toulouse painted the city in warm oranges and reds, creating a picturesque backdrop that was a joy to wander through. Every corner of the street revealed more of the city's rich cultural tapestry and striking architecture. Known affectionately as 'La Ville Rose' (The Pink City) for its unique terracotta brickwork, Toulouse captivated me with its blend of historical allure and vibrant modern life.

MiniDebCamp

FabLab sign

laptop setup

Prior to the main event, the MiniDebCamp provided two days of hacking at Artilect FabLab—a space as creative as it was welcoming. It was a pleasure to reconnect with familiar faces and forge new friendships.

Culinary delights

lunch 1

img 14

img 15

img 16

img 17

cakes

The hospitality was exceptional. Our lunches boasted a delicious array of quiches, an enticing charcuterie board, and a superb selection of cheeses, all perfectly complemented by exquisite petite fours. Each item was not only a feast for the eyes but also a delight for the palate.

Wine and cheese

wine and cheese 1

wine and cheese 2

Leftovers from these gourmet feasts fuelled our impromptu cheese and wine party on Thursday evening—a highlight where informal chats blended seamlessly with serious software discussions.

The river at night

night river 1

night river 2

night river 3

night river 4

The enchantment of Toulouse doesn't dim with the setting sun; instead, it transforms. My evening strolls took me along the banks of the Garonne, under a sky just turning from twilight to velvet blue. The river, a dark mirror, perfectly reflected the illuminated grandeur of the city's architecture. Notably, the dome of the Hôpital de La Grave stood out, bathed in a warm glow against the night sky. This architectural gem, coupled with the soft lights of the bridge and the serene river, created a breathtaking scene that was both tranquil and awe-inspiring.

Capitole du Libre

making crepes

The MiniDebConf itself, part of the larger Capitole du Libre event, was a fantastic immersion into the world of free software. Unlike the ticket-free FOSDEM, this conference required QR codes for entry and even had bag searches, adding an unusual layer of security for a software conference.

Highlights included the crepe-making by the organisers, reminiscent of street food scenes from larger festivals. The availability of crepes for MiniDebConf attendees and the presence of food trucks added a festive air, albeit with the inevitable long queues familiar to any festival-goer.

vélôToulouse

bike

cyclocity

The city's bike rental system was a boon—easy to use with handy bike baskets perfect for casual city touring. I chose pedal power over electric, finding it a pleasant way to navigate the streets and absorb the city's vibrant atmosphere.

Markets

market

flatbreads

Toulouse's markets were a delightful discovery. From a spontaneous visit to a market near my hotel upon arrival, to cycling past bustling marketplaces, each day presented new local flavours and crafts to explore.

The Za'atar flatbread from a Syrian stall was a particularly memorable lunch pick.

La brasserie Les Arcades

img 25

img 26

img 27

Our conference wrapped up with a spontaneous gathering at La Brasserie Les Arcades in Place du Capitole. Finding a café that could accommodate 30 of us on a Sunday evening without a booking felt like striking gold. What began with coffee and ice cream smoothly transitioned into dinner, where I enjoyed a delicious braised duck leg with green peppercorn sauce. This meal rounded off the trip with lively conversations and shared experiences.

The journey back home

img 30

img 31

img 32

img 33

Returning from Toulouse, I found myself once again in seat 1A, offering the advantage of being the first off the plane, both on departure and arrival. My flight touched down in Bristol ahead of schedule, and within ten minutes, I was on the A1 bus, making my way back into the heart of Bristol.

Anticipating DebConf 25 in Brittany

My trip to Toulouse for MiniDebConf was yet another fulfilling experience; the city was delightful, and the talks were insightful. While I frequently travel, these journeys are more about continuous learning and networking than escape. The food in Toulouse was particularly impressive, a highlight I've come to expect and relish on my trips to France. Looking ahead, I'm eagerly anticipating DebConf in Brest next year, especially the opportunity to indulge once more in the excellent French cuisine and beverages.

365 TomorrowsGood Taste

Author: Alastair Millar “C’mon, Jack, the fans will just eat this up! You know they will” “That’s kind of the issue, Morty.” “It’s totally ethical, don’t worry about it.” “But is it tasteful? There’s going to be people that hate it.” “Then they can choke on it! This is the future. I’ve wrangled you the […]

The post Good Taste appeared first on 365tomorrows.

,

365 TomorrowsStem Cell No. 173

Author: Yueyang Wang No.365: assigned as Breeder for Stem Cell No. 173. I saw my number on the screen. This was my first time breeding. The surgery started immediately. Mechanical arms extended a catheter into my body, warm fluid surged in, and then instruments moved back and forth within me. Minutes later, the seed had […]

The post Stem Cell No. 173 appeared first on 365tomorrows.

,

Planet DebianMatthew Palmer: Your Release Process Sucks

For the past decade-plus, every piece of software I write has had one of two release processes.

Software that gets deployed directly onto servers (websites, mostly, but also the infrastructure that runs Pwnedkeys, for example) is deployed with nothing more than git push prod main. I’ll talk more about that some other day.

Today is about the release process for everything else I maintain – Rust / Ruby libraries, standalone programs, and so forth. To release those, I use the following, extremely intricate process:

  1. Create an annotated git tag, where the name of the tag is the software version I’m releasing, and the annotation is the release notes for that version.

  2. Run git release in the repository.

  3. There is no step 3.

Yes, it absolutely is that simple. And if your release process is any more complicated than that, then you are suffering unnecessarily.

But don’t worry. I’m from the Internet, and I’m here to help.

Sidebar: “annotated what-now?!?”

Annotated tags are one of git’s best-kept secrets. They’ve been available in git for practically forever (I’ve been using them since at least 2014, which is “practically forever” in software development), yet almost everyone I mention them to has never heard of them.

A “tag”, in git parlance, is a repository-unique named label that points to a single commit (as identified by the commit’s SHA1 hash). Annotating a tag is simply associating a block of free-form text with that tag.

Creating an annotated tag is simple-sauce: git tag -a tagname will open up an editor window where you can enter your annotation, and git tag -a -m "some annotation" tagname will create the tag with the annotation “some annotation”. Retrieving the annotation for a tag is straightforward, too: git show tagname will display the annotation along with all the other tag-related information.
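To make that concrete, here’s the whole round trip (the tag name and annotation are illustrative):

git tag -a -m "Fix the frobnicator deadlock" v1.2.3   # create the annotated tag
git show v1.2.3                                       # read the annotation back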

Now that we know all about annotated tags, let’s talk about how to use them to make software releases freaking awesome.

Step 1: Create the Annotated Git Tag

As I just mentioned, creating an annotated git tag is pretty simple: just add a -a (or --annotate, if you enjoy typing) to your git tag command, and WHAM! annotation achieved.

Releases, though, typically have unique and ever-increasing version numbers, which we want to encode in the tag name. Rather than having to look at the existing tags and figure out the next version number ourselves, we can have software do the hard work for us.

Enter: git-version-bump. This straightforward program takes one mandatory argument: major, minor, or patch, and bumps the corresponding version number component in line with Semantic Versioning principles. If you pass it -n, it opens an editor for you to enter the release notes, and when you save out, the tag is automagically created with the appropriate name.

Because the program is called git-version-bump, you can call it as a git command: git version-bump. Also, because version-bump is long and unwieldy, I have it aliased to vb, with the following entry in my ~/.gitconfig:

[alias]
    vb = version-bump -n

Of course, you don’t have to use git-version-bump if you don’t want to (although why wouldn’t you?). The important thing is that the only step you take to go from “here is our current codebase in main” to “everything as of this commit is version X.Y.Z of this software”, is the creation of an annotated tag that records the version number being released, and the metadata that goes along with that release.
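Putting Step 1 end-to-end: with the alias above in place, cutting a release is a single command (a sketch; the bump level depends on what your changes warrant):

git vb minor    # computes the next minor version, opens an editor for
                # the release notes, and creates the annotated tag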

Step 2: Run git release

As I said earlier, I’ve been using this release process for over a decade now. So long, in fact, that when I started, GitHub Actions didn’t exist, and so a lot of the things you’d delegate to a CI runner these days had to be done locally, or in a more ad-hoc manner on a server somewhere.

This is why step 2 in the release process is “run git release”. It’s because historically, you can’t do everything in a CI run. Nowadays, most of my repositories have this in the .git/config:

[alias]
    release = push --tags

Older repositories which, for one reason or another, haven’t been updated to the new hawtness, have various other aliases defined, which run more specialised scripts (usually just rake release, for Ruby libraries), but they’re slowly dying out.

The reason why I still have this alias, though, is that it standardises the release process. Whether it’s a Ruby gem, a Rust crate, a bunch of protobuf definitions, or whatever else, I run the same command to trigger a release going out. It means I don’t have to think about how I do it for this project, because every project does it exactly the same way.

The Wiring Behind the Button

It wasn’t the button that was the problem. It was the miles of wiring, the hundreds of miles of cables, the circuits, the relays, the machinery. The engine was a massive, sprawling, complex, mind-bending nightmare of levers and dials and buttons and switches. You couldn’t just slap a button on the wall and expect it to work. But there should be a button. A big, fat button that you could press and everything would be fine again. Just press it, and everything would be back to normal.

  • Red Dwarf: Better Than Life

Once you’ve accepted that your release process should be as simple as creating an annotated tag and running one command, you do need to consider what happens afterwards. These days, with the near-universal availability of CI runners that can do anything you need in an isolated, reproducible environment, the work required to go from “annotated tag” to “release artifacts” can be scripted up and left to do its thing.

What that looks like, of course, will probably vary greatly depending on what you’re releasing. I can’t really give universally-applicable guidance, since I don’t know your situation. All I can do is provide some of my open source work as inspirational examples.

For starters, let’s look at a simple Rust crate I’ve written, called strong-box. It’s a straightforward crate that provides ergonomic and secure cryptographic functionality inspired by the likes of NaCl. As it’s just a crate, its release script is very straightforward. Most of the complexity is working around Cargo’s inelegant mandate that crate version numbers be specified in a TOML file. Apart from that, it’s just a matter of building and uploading the crate. Easy!
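I won’t reproduce the script here, but a minimal sketch of that shape (the sed expression and --allow-dirty are my assumptions for illustration, not necessarily what strong-box does) looks like:

version="$(git describe --tags --abbrev=0)"    # e.g. v1.2.3
version="${version#v}"                         # strip the leading v

# Cargo insists the version lives in Cargo.toml, so patch it in at release time.
sed -i -e "s/^version = .*/version = \"${version}\"/" Cargo.toml

# The working tree is now deliberately dirty, so tell Cargo that's expected.
cargo publish --allow-dirty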

Slightly more complicated is action-validator. This is a Rust CLI tool which validates GitHub Actions and Workflows (how very meta) against a published JSON schema, to make sure you haven’t got any syntax or structural errors. As not everyone has a Rust toolchain on their local box, the release process helpfully builds binaries for several common OSes and CPU architectures that people can download if they choose. The release process in this case is somewhat larger, but not particularly complicated. Almost half of it is actually scaffolding to build an experimental WASM/NPM build of the code, because someone seemed rather keen on that.

Moving away from Rust, and stepping up the meta another notch, we can take a look at the release process for git-version-bump itself, my Ruby library and associated CLI tool which started me down the “Just Tag It Already” rabbit hole many years ago. In this case, since gemspecs are very amenable to programmatic definition, the release process is practically trivial. Remove the boilerplate and workarounds for GitHub Actions bugs, and you’re left with about three lines of actual commands.

These approaches can certainly scale to larger, more complicated processes. I’ve recently implemented annotated-tag-based releases in a proprietary software product that produces Debian/Ubuntu, RedHat, and Windows packages, as well as Docker images, and it takes all of the information it needs from the annotated tag. I’m confident that this approach will successfully serve them as they expand out to build AMIs, GCP machine images, and whatever else they need in their release processes in the future.

Objection, Your Honour!

I can hear the howl of the “but, actuallys” coming over the horizon even as I type. People have a lot of Big Feelings about why this release process won’t work for them. Rather than overload this article with them, I’ve created a companion article that enumerates the objections I’ve come across, and answers them. I’m also available for consulting if you’d like a personalised, professional opinion on your specific circumstances.

DVD Bonus Feature: Pre-releases

Unless you’re addicted to surprises, it’s good to get early feedback about new features and bugfixes before they make it into an official, general-purpose release. For this, you can’t go past the pre-release.

The major blocker to widespread use of pre-releases is that cutting a release is usually a pain in the behind. If you’ve got to edit changelogs, and modify version numbers in a dozen places, then you’re entirely justified in thinking that cutting a pre-release for a customer to test that bugfix that only occurs in their environment is too much of a hassle.

The thing is, once you’ve got releases building from annotated tags, making pre-releases on every push to main becomes practically trivial. This is mostly due to another fantastic and underused Git command: git describe.

How git describe works is, basically, that it finds the most recent commit that has an associated annotated tag, and then generates a string that contains that tag’s name, plus the number of commits between that tag and the current commit, with the current commit’s hash included, as a bonus. That is, imagine that three commits ago, you created an annotated release tag named v4.2.0. If you run git describe now, it will print out v4.2.0-3-g04f5a6f (assuming that the current commit’s SHA starts with 04f5a6f).

You might be starting to see where this is going. With a bit of light massaging (essentially, removing the leading v and replacing the -s with .s), that string can be converted into a version number which, in most sane environments, is considered “newer” than the official 4.2.0 release, but will be superseded by the next actual release (say, 4.2.1 or 4.3.0). If you’re already injecting version numbers into the release build process, injecting a slightly different version number is no work at all.
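That massaging is only a couple of lines of shell (a sketch, assuming v-prefixed tags):

raw="$(git describe --tags)"     # e.g. v4.2.0-3-g04f5a6f
version="${raw#v}"               # drop the leading v:  4.2.0-3-g04f5a6f
version="${version//-/.}"        # swap -s for .s:      4.2.0.3.g04f5a6f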

Then, you can easily build release artifacts for every commit to main, and make them available somewhere they won’t get in the way of the “official” releases. For example, in the proprietary product I mentioned previously, this involves uploading the Debian packages to a separate component (prerelease instead of main), so that users that want to opt-in to the prerelease channel simply modify their sources.list to change main to prerelease. Management have been extremely pleased with the easy availability of pre-release packages; they’ve been gleefully installing them willy-nilly for testing purposes since I rolled them out.

In fact, even while I’ve been writing this article, I was asked to add some debug logging to help track down a particularly pernicious bug. I added the few lines of code, committed, pushed, and went back to writing. A few minutes later (next week’s job is to cut that in-process time by at least half), the person who asked for the extra logging ran apt update; apt upgrade, which installed the newly-built package, and was able to progress in their debugging adventure.

Continuous Delivery: It’s Not Just For Hipsters.

“+1, Informative”

Hopefully, this has spurred you to commit your immortal soul to the Church of the Annotated Tag. You may tithe by buying me a refreshing beverage. Alternately, if you’re really keen to adopt more streamlined release management processes, I’m available for consulting engagements.

Planet DebianMatthew Palmer: Invalid Excuses for Why Your Release Process Sucks

In my companion article, I made the bold claim that your release process should consist of no more than two steps:

  1. Create an annotated Git tag;

  2. Run a single command to trigger the release pipeline.

As I have been on the Internet for more than five minutes, I’m aware that a great many people will have a great many objections to this simple and straightforward idea. In the interests of saving them a lot of wear and tear on their keyboards, I present this list of common reasons why these objections are invalid.

If you have an objection I don’t cover here, the comment box is down the bottom of the article. If you think you’ve got a real stumper, I’m available for consulting engagements, and if you turn out to have a release process which cannot feasibly be reduced to the above two steps for legitimate technical reasons, I’ll waive my fees.

“But I automatically generate my release notes from commit messages!”

This one is really easy to solve: have the release note generation tool feed directly into the annotation. Boom! Headshot.
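In case the wiring isn’t obvious, here’s a sketch (git log stands in for whatever notes-generation tool you use; the tag name is illustrative):

prev="$(git describe --tags --abbrev=0)"             # last release tag
git log --format='* %s' "${prev}..HEAD" > notes.txt  # generate the notes
git tag -a -F notes.txt v1.2.3                       # notes become the annotation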

“But all these files need to be edited to make a release!”

No, they absolutely don’t. But I can see why you might think you do, given how inflexible some packaging environments can seem, and since “that’s how we’ve always done it”.

Language Packages

Most languages require you to encode the version of the library or binary in a file that you want to revision control. This is teh suck, but I’m yet to encounter a situation that can’t be worked around some way or another.

In Ruby, for instance, gemspec files are actually executable Ruby code, so I call code (that’s part of git-version-bump, as an aside) to calculate the version number from the git tags. The Rust build tool, Cargo, uses a TOML file, which isn’t as easy, but a small amount of release automation is used to take care of that.

Distribution Packages

If you’re building Linux distribution packages, you can easily apply similar automation faffery. For example, Debian packages take their metadata from the debian/changelog file in the build directory. Don’t keep that file in revision control, though: build it at release time. Everything you need to construct a Debian (or RPM) changelog is in the tag – version numbers, dates, times, authors, release notes. Use it for much good.
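As a sketch of what “build it at release time” can look like for a Debian package (the package name and distribution are illustrative; the --format strings use git’s for-each-ref field syntax):

tag="$(git describe --tags --abbrev=0)"
{
  echo "mypackage (${tag#v}) unstable; urgency=medium"
  echo
  git tag -l "${tag}" --format='%(contents)' | sed 's/^/  /'
  echo
  git tag -l "${tag}" --format=' -- %(taggername) %(taggeremail)  %(taggerdate:rfc)'
} > debian/changelog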

The Dreaded Changelog

Finally, there’s the CHANGELOG file. If it’s maintained during the development process, it typically has an archive of all the release notes, under version numbers, with an “Unreleased” heading at the top. It’s one more place to remember to have to edit when making that “preparing release X.Y.Z” commit, and it is a gift to the Demon of Spurious Merge Conflicts if you follow the policy of “every commit must add a changelog entry”.

My solution: just burn it to the ground. Add a line to the top with a link to wherever the contents of annotated tags get published (such as GitHub Releases, if that’s your bag) and never open it ever again.

“But I need to know other things about my release, too!”

For some reason, you might think you need some other metadata about your releases. You’re probably wrong – it’s amazing how much information you can obtain or derive from the humble tag – so think creatively about your situation before you start making unnecessary complexity for yourself.

But, on the off chance you’re in a situation that legitimately needs some extra release-related information, here’s the secret: structured annotation. The annotation on a tag can be literally any sequence of octets you like. How that data is interpreted is up to you.

So, require that annotations on release tags use some sort of structured data format (say YAML or TOML – or even XML if you hate your release manager), and mandate that it contain whatever information you need. You can make sure that the annotation has a valid structure and contains all the information you need with an update hook, which can reject the tag push if it doesn’t meet the requirements, and you’re sorted.
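A sketch of such an update hook, for YAML annotations (the tag pattern is illustrative, and PyYAML is assumed for the validation step):

#!/bin/sh
# update hooks are invoked as: update <refname> <old-sha> <new-sha>
refname="$1" newrev="$3"

case "$refname" in
refs/tags/v*)
    # An annotated tag's message is everything after the header block.
    if ! git cat-file tag "$newrev" | sed '1,/^$/d' |
         python3 -c 'import sys, yaml; yaml.safe_load(sys.stdin)'
    then
        echo "release tag annotation must be valid YAML" >&2
        exit 1
    fi
    ;;
esac
exit 0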

“But I have multiple packages in my repo, with different release cadences and versions!”

This one is common enough that I just refer to it as “the monorepo drama”. Personally, I’m not a huge fan of monorepos, but you do you, boo. Annotated tags can still handle it just fine.

The trick is to include the package name being released in the tag name. So rather than a release tag being named vX.Y.Z, you use foo/vX.Y.Z, bar/vX.Y.Z, and baz/vX.Y.Z. The release automation for each package just triggers on tags that match the pattern for that particular package, and limits itself to those tags when figuring out what the version number is.
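The release job can then recover both pieces from the tag name; a sketch:

tag="$(git describe --tags --match 'foo/v*' --abbrev=0)"   # e.g. foo/v1.2.3
package="${tag%%/*}"    # foo
version="${tag##*/v}"   # 1.2.3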

“But we don’t semver our releases!”

Oh, that’s easy. The tag pattern that marks a release doesn’t have to be vX.Y.Z. It can be anything you want.

Relatedly, there is a (rare, but existent) need for packages that don’t really have a conception of “releases” in the traditional sense. The example I’ve hit most often is automatically generated “bindings” packages, such as protobuf definitions. The source of truth for these is a bunch of .proto files, but to be useful, they need to be packaged into code for the various language(s) you’re using. But those packages need versions, and while someone could manually make releases, the best option is to build new per-language packages automatically every time any of those definitions change.

The versions of those packages, then, can be datestamps (I like something like YYYY.MM.DD.N, where N starts at 0 each day and increments if there are multiple releases in a single day).
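Computing the next datestamp version is trivial, too (a sketch, assuming tags of exactly that shape):

today="$(date -u +%Y.%m.%d)"
n="$(git tag -l "v${today}.*" | wc -l)"   # N starts at 0, so the count is the next N
echo "v${today}.${n}"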

This process allows all the code that needs the definitions to declare the minimum version of the definitions that it relies on, and everything is kept in sync and tracked almost like magic.

Th-th-th-th-that’s all, folks!

I hope you’ve enjoyed this bit of mild debunking. Show your gratitude by buying me a refreshing beverage, or purchase my professional expertise and I’ll answer all of your questions and write all your CI jobs.

Worse Than FailureError'd: Three Little Nyms

"Because 9.975 was just a *little* bit too small," explains our first anonymous helper.

Our second anonymous helper tells us "While looking up how to find my bank's branch using a blank check, I came across this site that seems to have used AI to write their posts. Didn't expect to learn about git while reading about checks. I included the navbar because it's just as bad."

Our third anonymous helper snickered "I guess I was just a bit over quota." Nicely done.

Our fourth anonymous helper isn't actually anonymous, alas. He signed off as the plausibly-named Vincent R, muttering "I dunno, it's all Greek to me. Or at least it *was* Greek until Firefox thoughtfully translated all the lambdas and mus and sigmas in these probability formulas..."

Finally for Friday, the fifth from Dan W. "On my way to the airport, I checked my route on the Trainline app. I think I'll have just enough time to make this connection in Wolverhampton." Walk, don't run.


365 TomorrowsThe Prisoner

Author: Srdjan Budisavljevic ..Finally, he opened his eyes. The feeling was strange, like the feeling of rebirth. Faint images of his former existence sporadically surfaced in his consciousness, but he was unable to recognize those memories as integral parts of his existential continuity. The first thing he felt was amazement, immense amazement, and then his […]

The post The Prisoner appeared first on 365tomorrows.

,

LongNowStephen Heintz & Kim Stanley Robinson

Stephen Heintz & Kim Stanley Robinson

Stephen Heintz and Kim Stanley Robinson will discuss our polycrisis, and the swift and holistic reform of global governance institutions that is needed to respond to these urgent transnational and planetary challenges we are facing.

We are living in an age of exceptional complexity and turbulence. What distinguishes this period in human history is the confluence of forces (political, geo-strategic, economic, social, technological, and environmental) as well as the interactions amongst them. These crises have no regard for borders and are not responsive to solutions devised and implemented by individual nation-states or the existing ecosystem of multilateral institutions.

Throughout history, we can find examples of hinge moments when human resilience, imagination, and cooperation spurred change once thought impossible. We are in a period of elasticity, a time when there is greater capacity for stretch in our conceptions of global relations and thinking about the international system. Humankind must strive to develop an international framework that can guide us toward a more peaceful, more humane, and more equitable global society, as well as a thriving planetary ecosystem.

Krebs on SecurityFeds Charge Five Men in ‘Scattered Spider’ Roundup

Federal prosecutors in Los Angeles this week unsealed criminal charges against five men alleged to be members of a hacking group responsible for dozens of cyber intrusions at major U.S. technology companies between 2021 and 2023, including LastPass, MailChimp, Okta, T-Mobile and Twilio.

A visual depiction of the attacks by the SMS phishing group known as Scattered Spider, and Oktapus. Image: Amitai Cohen twitter.com/amitaico.

The five men, aged 20 to 25, are allegedly members of a hacking conspiracy dubbed “Scattered Spider” and “Oktapus,” which specialized in SMS-based phishing attacks that tricked employees at tech firms into entering their credentials and one-time passcodes at phishing websites.

The targeted SMS scams asked employees to click a link and log in at a website that mimicked their employer’s Okta authentication page. Some SMS phishing messages told employees their VPN credentials were expiring and needed to be changed; other phishing messages advised employees about changes to their upcoming work schedule.

These attacks leveraged newly-registered domains that often included the name of the targeted company, such as twilio-help[.]com and ouryahoo-okta[.]com. The phishing websites were normally kept online for just one or two hours at a time, meaning they were often yanked offline before they could be flagged by anti-phishing and security services.

The phishing kits used for these campaigns featured a hidden Telegram instant message bot that forwarded any submitted credentials in real-time. The bot allowed the attackers to use the phished username, password and one-time code to log in as that employee at the real employer website.

In August 2022, multiple security firms gained access to the server that was receiving data from that Telegram bot, which on several occasions leaked the Telegram ID and handle of its developer, who used the nickname “Joeleoli.”

The Telegram username “Joeleoli” can be seen sandwiched between data submitted by people who knew it was a phish, and data phished from actual victims.

That Joeleoli moniker registered on the cybercrime forum OGusers in 2018 with the email address joelebruh@gmail.com, which also was used to register accounts at several websites for a Joel Evans from North Carolina. Indeed, prosecutors say Joeleoli’s real name is Joel Martin Evans, and he is a 25-year-old from Jacksonville, North Carolina.

One of Scattered Spider’s first big victims in its 2022 SMS phishing spree was Twilio, a company that provides services for making and receiving text messages and phone calls. The group then used their access to Twilio to attack at least 163 of its customers. According to prosecutors, the group mainly sought to steal cryptocurrency from victim companies and their employees.

“The defendants allegedly preyed on unsuspecting victims in this phishing scheme and used their personal information as a gateway to steal millions in their cryptocurrency accounts,” said Akil Davis, the assistant director in charge of the FBI’s Los Angeles field office.

Many of the hacking group’s phishing domains were registered through the registrar NameCheap, and FBI investigators said records obtained from NameCheap showed the person who managed those phishing websites did so from an Internet address in Scotland. The feds then obtained records from Virgin Media, which showed the address was leased for several months to Tyler Buchanan, a 22-year-old from Dundee, Scotland.

A Scattered Spider phishing lure sent to Twilio employees.

As first reported here in June, Buchanan was arrested in Spain as he tried to board a flight bound for Italy. The Spanish police told local media that Buchanan, who allegedly went by the alias “Tylerb,” at one time possessed Bitcoins worth $27 million.

The government says much of Tylerb’s cryptocurrency wealth was the result of successful SIM-swapping attacks, wherein crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls sent to the victim — including one-time passcodes for authentication, or password reset links sent via SMS.

According to several SIM-swapping channels on Telegram where Tylerb was known to frequent, rival SIM-swappers hired thugs to invade his home in February 2023. Those accounts state that the intruders assaulted Tylerb’s mother in the home invasion, and that they threatened to burn him with a blowtorch if he didn’t give up the keys to his cryptocurrency wallets. Tylerb was reputed to have fled the United Kingdom after that assault.

A still frame from a video released by the Spanish national police, showing Tyler Buchanan being taken into custody at the airport.

Prosecutors allege Tylerb worked closely on SIM-swapping attacks with Noah Michael Urban, another alleged Scattered Spider member from Palm Coast, Fla., who went by the handles “Sosa,” “Elijah,” and “Kingbob.”

Sosa was known to be a top member of the broader cybercriminal community online known as “The Com,” wherein hackers boast loudly about high-profile exploits and hacks that almost invariably begin with social engineering — tricking people over the phone, email or SMS into giving away credentials that allow remote access to corporate networks.

In January 2024, KrebsOnSecurity broke the news that Urban had been arrested in Florida in connection with multiple SIM-swapping attacks. That story noted that Sosa’s alter ego Kingbob routinely targeted people in the recording industry to steal and share “grails,” a slang term used to describe unreleased music recordings from popular artists.

FBI investigators identified a fourth alleged member of the conspiracy – Ahmed Hossam Eldin Elbadawy, 23, of College Station, Texas — after he used a portion of cryptocurrency funds stolen from a victim company to pay for an account used to register phishing domains.

The indictment unsealed Wednesday alleges Elbadawy controlled a number of cryptocurrency accounts used to receive stolen funds, along with another Texas man — Evans Onyeaka Osiebo, 20, of Dallas.

Members of Scattered Spider are reputed to have been involved in a September 2023 ransomware attack against the MGM Resorts hotel chain that quickly brought multiple MGM casinos to a standstill. In September 2024, KrebsOnSecurity reported that a 17-year-old from the United Kingdom was arrested last year by U.K. police as part of an FBI investigation into the MGM hack.

Evans, Elbadawy, Osiebo and Urban were all charged with one count of conspiracy to commit wire fraud, one count of conspiracy, and one count of aggravated identity theft. Buchanan, who is named as an indicted co-conspirator, was charged with conspiracy to commit wire fraud, conspiracy, wire fraud, and aggravated identity theft.

A Justice Department press release states that if convicted, each defendant would face a statutory maximum sentence of 20 years in federal prison for conspiracy to commit wire fraud, up to five years in federal prison for the conspiracy count, and a mandatory two-year consecutive prison sentence for aggravated identity theft. Buchanan would face up to 20 years in prison for the wire fraud count as well.

Further reading:

The redacted complaint against Buchanan (PDF)

Charges against Urban and the other defendants (PDF).

Cryptogram Friday Squid Blogging: Squid-Inspired Needle Technology

Interesting research:

Using jet propulsion inspired by squid, researchers demonstrate a microjet system that delivers medications directly into tissues, matching the effectiveness of traditional needles.

Blog moderation policy.

365 TomorrowsPlanet X

Author: Jas Howson Xero had been scouring the planet for scrap parts for half the day. When she and her partner crashed, their comms device, along with the rest of the important equipment on their ship – and their ship – had scattered across the planet. Frequent sandstorms prevent one from simply scanning the surface […]

The post Planet X appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Contact Us

Charles is supporting a PHP based application. One feature of the application is a standard "Contact Us" form. I'll let Charles take on the introduction:

While it looks fine on the outside, the code is a complete mess. The entire site is built with bad practices, redundant variables, poor validation, insecure cookie checks, and zero focus on maintainability or security. Even the core parts of the platform are a nightmare

We're going to take this one in chunks, because it's big and ugly.

try {
    if (isset($_POST)) {
        $name = $_POST['objName'];
        $lst_name = $_POST['objLstName'];
        $email = $_POST['objEmail'];
        $phone = $_POST['objGsm'];
        $message = $_POST['objMsg'];
        $verifycode = $_POST['objVerifyCode'];
        /******************************************************/
        $objCmpT = $_POST['objCmpT'];
        $objCmpS = $_POST['objCmpS'];
        $objCountry = $_POST['objCountry'];
        $objCity = $_POST['objCity'];
        $objName2 = $_POST['objName2'];
        $objLstName2 = $_POST['objLstName2'];
        $objQuality = $_POST['objQuality'];
        $objEmail = $_POST['objEmail'];
        $objMsg2 = $_POST['objMsg2'];
        $objVerifyCode2 = $_POST['objVerifyCode2'];

I don't love that there's no structure or class here, to organize these fields, but this isn't bad, per se. We have a bunch of form fields, and we jam them into a bunch of variables. I am going to, with no small degree of willpower, not comment on the hungarian notation present in the field names. Look at me not commenting on it. I'm definitely not commenting on it. Look at me not commenting that some, but not all, of the variables also get the same hungarian prefix.

What's the point of hungarian notation when everything just gets the same thing anyway; like hungarian is always bad, but this is just USELESS

Ahem.

Let's continue with the code.

        $ok = 0;
        $ok2 = 0;
        $sendTo = "example@example.com";
        $golableMSG = '
        -First Name & Last Name :' . $name . ' ' . $lst_name . '
        -email :' . $email . '
        -Phone Number : 0' . $phone . '
        -Message : ' . $message;
        $globaleMSG2 = '
        -First Name & Last Name :' . $objName2 . ' ' . $objLstName2 . '
        -Email :' . $objEmail . '
        -Type of company : ' . $objCmpT . '
        -Sector of activity : ' . $objCmpS . '
        -Country : ' . $objCountry . '
        -City : ' . $objCity . '
        -Your position within the company : ' . $objQuality . '
        -Message : ' . $objMsg2;

We munge all those form fields into strings. These are clearly going to be the bodies of our messages. Only now I'm noticing that the user had to supply two different names- $name and $objName2. Extra points here, as I believe they meant to name both of these message variables globaleMSG but misspelled the first one, golableMSG.

Well, let's continue.

        if (!$name) {
            $data['msg1'] = '*';
        } else {
            $ok++;
            $data['msg1'] = '';
        }
        if (!$lst_name) {
            $data['msg2'] = '*';
        } else {
            $ok++;
            $data['msg2'] = '';
        }
        if (!$email) {
            $data['msg3'] = '*';
        } else {
            $ok++;
            $data['msg3'] = '';
        }
        if ($phone <= 0) {
            $data['msg4'] = '*';
        } else {
            $ok++;
            $data['msg4'] = '';
        }
        if (!$message) {
            $data['msg5'] = '*';
        } else {
            $ok++;
            $data['msg5'] = '';
        }
        if (!$verifycode) {
            $data['msg6'] = '*';
        } else {
            $ok++;
            $data['msg6'] = '';
        }
        /*********************************************************************************/
        if (!$objCmpS) {
            $data['msg7'] = '*';
        } else {
            $ok2++;
            $data['msg7'] = '';
        }
        if (!$objCountry) {
            $data['msg8'] = '*';
        } else {
            $ok2++;
            $data['msg8'] = '';
        }
        if (!$objCity) {
            $data['msg9'] = '*';
        } else {
            $ok2++;
            $data['msg9'] = '';
        }
        if (!$objName2) {
            $data['msg10'] = '*';
        } else {
            $ok2++;
            $data['msg10'] = '';
        }
        if (!$objLstName2) {
            $data['msg11'] = '*';
        } else {
            $ok2++;
            $data['msg11'] = '';
        }
        if (!$objQuality) {
            $data['msg12'] = '*';
        } else {
            $ok2++;
            $data['msg12'] = '';
        }
        if (!$objMsg2) {
            $data['msg13'] = '*';
        } else {
            $ok2++;
            $data['msg13'] = '';
        }
        if (!$objVerifyCode2) {
            $data['msg14'] = '*';
        } else {
            $ok2++;
            $data['msg14'] = '';
        }

What… what are we doing here? I worry that what I'm looking at here is some sort of preamble to verification code. But why is it like this? Why?

        /********************************************************************************/
        if ($ok == 6) {
            if (preg_match("/^[ a-z,.+!:;()-]+$/", $name)) {
                $data['msg1_1'] = '';
                if (preg_match("/^[ a-z,.+!:;()-]+$/", $lst_name)) {
                    $data['msg2_2'] = '';
                    $subject = $name . " " . $lst_name;
                    if (filter_var($email, FILTER_VALIDATE_EMAIL)) {
                        $data['msg3_3'] = '';
                        $from = $email;
                        if (preg_match("/^[6-9][0-9]{8}$/", $phone)) {
                            $data['msg4_4'] = '';
                            if (intval($verifycode) == intval($_COOKIE['nmbr1']) + intval($_COOKIE['nmbr2'])) {
                                $data['msg6_6'] = '';
                                $headers = 'From: ' . $from . "\r\n" .
                                    'Reply-To: ' . $sendTo . "\r\n" .
                                    'X-Mailer: PHP/' . phpversion();
                                mail($sendTo, $subject, $golableMSG, $headers);
                                $data['msgfinal'] = 'Votre Messsage est bien envoyer';
                                /*$data = array('success' => 'Votre Messsage est bien envoyer', 'postData' => $_POST);*/
                            } else {
                                $data['msg6_6'] = 'votre resultat est incorrect';
                            }
                        } else {
                            $data['msg4_4'] = 'Votre Numéro est incorrect';
                        }
                    } else {
                        $data['msg3_3'] = 'Votre Email est incorrect';
                    }
                } else {
                    $data['msg2_2'] = 'Votre Prénom est Incorrect';
                }
            } else {
                $data['msg1_1'] = 'Votre Nom est Incorrect';
            }
        }

Oh look, it is validation code. Their verification code system, presumably to prevent spamming messages, is not particularly secure or useful. The real thing I see here, though, is the namespaced keys. Earlier, we set $data['msg1'], and now we're setting $data['msg1_1'] which is a code stench that could kill from a hundred yards.

And don't worry, we do the same thing for the other message we send:

        /**************************************************************/
        if ($ok2 == 8) {
            if (preg_match("/^[ a-z,.+!:;()-]+$/", $objName2)) {
                $data['msg10_10'] = '';
                if (preg_match("/^[ a-z,.+!:;()-]+$/", $objLstName2)) {
                    $data['msg11_11'] = '';
                    $subject2 = $objName2 . " " . $objLstName2;
                    if (intval($objVerifyCode2) == intval($_COOKIE['nmbr3']) + intval($_COOKIE['nmbr4'])) {
                        $from2 = $objEmail;
                        $data['msg14_14'] = '';
                        $headers2 = 'From: ' . $from2 . "\r\n" .
                            'Reply-To: ' . $sendTo . "\r\n" .
                            'X-Mailer: PHP/' . phpversion();
                        mail($sendTo, $subject2, $globaleMSG2, $headers2);
                        $data['msgfinal'] = 'Votre Messsage est bien envoyer';
                    } else {
                        $data['msg14_14'] = 'votre resultat est incorrect';
                    }
                } else {
                    $data['msg11_11'] = 'Votre Prénom est Incorrect';
                }
            } else {
                $data['msg10_10'] = 'Votre Nom est Incorrect';
            }
        }

Phew. Hey, remember way back at the top, when we checked to see if the $_POST variable were set? Well, we do have an else clause for that.

    } else {
        throw new \Exception($mot[86]);
    }

Who doesn't love throwing messages by hard-coded array indexes in your array of possible error messages? Couldn't be bothered with a constant, could we? Nope, message 86 it is.

But don't worry about that exception going uncaught. Remember, this whole thing was inside of a try:

} catch (\Exception $e) {
    $data['msgfinal'] = "Votre Messsage n'est pas bien envoyer";
    /*$data = array('danger' => 'Votre Messsage pas bien envoyer', 'postData' => $_POST);*/
}

Yeah, it didn't matter what message we picked, because we just catch the exception and hard-code out an error message.

Also, I don't speak French, but is "message" supposed to have an extra "s" in it?

Charles writes:

It’s crazy to see such sloppy work on a platform that seems okay at first glance. Honestly, this platform is the holy grail of messy code—it could have its own course on how not to code because of how bad and error-prone it is. There are also even worse scenarios of bad code, but it's too long to share, and honestly, they're too deep and fundamentally ingrained in the system to even begin explaining.

Oh, I'm sure we could explain it. The explanation may be "there was a severe and fatal lack of oxygen in the office, and this is what hypoxia looks like in code," but I'm certain there'd be an explanation.


David Brin 2The Ancient Ones, Chapter 1

A space comedy by David Brin

The illustrated online chapter version

Dedicated to You-Know-Who.

Seriously, you know who you are,

and what you did.

***

Part One

Chapter 1

So, you’ve decided to come down here, slumming. Almost finished with your Academy training and raring to go hit the old galaxy, squeezing it for adventure, right?

And you heard about a senior class tradition. Head down to a spaceport bar where retired characters hang out, with implausible stories to tell. Things never taught to human cadets. Not in formal class, that is.

Did they also tell you Academy administrators don’t approve? That you may get docked pay or pick up a demerit from Old Gasbag? Or that Professor-Admiral Bloodsucker might do something even more painful to your tender, human necks, if she catches you down here?

Don’t care? Well, well. Such a daring lot of eager lasses and lads. And you bought the first of several rounds. So…

All right then. You’re paying. I’ll drink n’ tell.

And if you snicker at my professorial tone, well I was a lecturer up there on the hill, for many years. Not that you shavetails are in any position to judge. So just shut up and listen while this old brain calls up those ancient tracks…

Only a few human beings qualify for this job. You students, the elite of our race – (may Yah-Tze pity us) – are being trained for a difficult and dangerous task vital to the survival of our world and many others.

For those finally chosen to serve, the demands will be heavy and unending.

First – above all other requirements – you have to like demmies.

I mean really like them.

Try to imagine spending a voyage of several years crammed in tight quarters with over two hundred of those brash, rambunctious, impulsive, affectionate, abrasive and maddening creatures, sharing constant peril while daily enduring their puckish, brilliant, idiotic, mercurial, and always astonishing natures. It would drive any normal man or woman to gibbering distraction.

Against such pressures, the Human Advisor aboard a demmie ship must always display the legendary Earthling traits of calmness, reason and restraint. Plus – heaven help us – genuine affection for the impossible creatures.

At times, this fondness may be your only anchor. Your sole hope.

Everyone knows that love and hate are cousins. And so, while I remain loyal – even now – to my demmie captain and crewmates, there were days when some infuriating antic left me frazzled to the bone. Times when I found that I could fathom the very different attitude chosen by our Spertin foes, who wish to roast every demmie slowly, over a neutron star.

When such moments come, you must take a deep breath, count to ten, and find reserves of patience deeper than a nebula. More often than not, it’s worth it.

Demmies love nicknames. They have one for the human race, calling us “the Ancient Ones.”

From their point of view, it’s obvious. Not only do we live much longer as individuals, with lifespans of a hundred or more Earth years, but from the demmie perspective, our people have been roaming the galaxy since time immemorial.

Well, after all, most member-races of the Federated Alliance learned starflight from us… as demmies did, when we contacted their world, fifty-eight years after our first ships departed the Solar System.

That’s how much longer we roamed the star lanes. Fifty-eight years. And for this they deferentially call us Ancient Ones.

Sure. Why not?

The first rule to remember, you youngsters – a rule even more important than the Choice Imperative – is to let demmies have it their way.

But you came down here to patronize an old man, pretending to learn from his experience. So. Keep my glass full. And don’t snicker when I slip into present-tense. The memories are that strong.

Let me tell you about the time our good ship – Clever Gamble – entered orbit above a planet of the system, Oxytoxin 41.

I was at my science station, performing routine scans, when Captain Ohm inquired about signs of intelligent life.

“There is a technic civilization,” I explained…

***

Chapter 2

“There is a technic civilization, Captain. Scanners reveal a sophisticated network of roads, moderate electromagnetic activity, indicative of…”

“Never mind the details, Doctor Montessori,” our commander interrupted, leaping out of his slouch-chair and bounding over to my station. At five and a half feet, he was tall for a demmie. Still, I made certain to stoop a little, giving him the best light.

“Are they over sixteen on the Turgenev Scale?” he asked urgently. “Can we make contact?”

“Contact. Hm.” I rubbed my chin, a human mannerism that our crew expected from their Earthling advisor. “I would say so, Captain, though to be precise…”

“Great! Let’s go on down then.”

I tried entreating. “What’s the hurry? Why not spend a day or two collecting data? It never hurts to know what we’re stepping into.”

The captain grinned, belying his humanoid likeness by exposing twin rows of brilliant, pointy teeth.

“That’s all right, Advisor, I’ve had slippery boots before. Never stopped me yet!”

The crude witticism triggered laughter from other demmies in the command center. They often find my expressions of caution amusing, even when I later prove to be right. Fortunately, they are also fair-minded, and never confuse caution with cowardice.

Remember students, around demmies feel free to act “prudently wise.” Go ahead and urge restraint, since this is true to the image they have of us.

But never display outright fear. They find it upsetting. And we don’t want them upset.

“Break out the hose!” Captain Ohm commanded, rubbing his hands. “Tell Guts and Nuts to meet us at the spigot. Come on, Doc. We’re going down!”

***

THE ANCIENT ONES continues online… in Part Two

Impatient to read the rest?  Order The Ancient Ones.

Comments welcome below.

================================================

© 2019, 2024 David Brin. Cover & interiors prompt-designed by Eric Storm. More interiors by Patrick Farley

,

Cryptogram NSO Group Spies on People on Behalf of Governments

The Israeli company NSO Group sells Pegasus spyware to countries around the world (including countries like Saudi Arabia, UAE, India, Mexico, Morocco and Rwanda). We assumed that those countries use the spyware themselves. Now we’ve learned that that’s not true: that NSO Group employees operate the spyware on behalf of their customers.

Legal documents released in ongoing US litigation between NSO Group and WhatsApp have revealed for the first time that the Israeli cyberweapons maker – and not its government customers – is the party that “installs and extracts” information from mobile phones targeted by the company’s hacking software.

Cryptogram What Graykey Can and Can’t Unlock

This is from 404 Media:

The Graykey, a phone unlocking and forensics tool that is used by law enforcement around the world, is only able to retrieve partial data from all modern iPhones that run iOS 18 or iOS 18.0.1, which are two recently released versions of Apple’s mobile operating system, according to documents describing the tool’s capabilities in granular detail obtained by 404 Media. The documents do not appear to contain information about what Graykey can access from the public release of iOS 18.1, which was released on October 28.

More information:

Meanwhile, Graykey’s performance with Android phones varies, largely due to the diversity of devices and manufacturers. On Google’s Pixel lineup, Graykey can only partially access data from the latest Pixel 9 when in an “After First Unlock” (AFU) state—where the phone has been unlocked at least once since being powered on.

Cryptogram Security Analysis of the MERGE Voting Protocol

Interesting analysis: An Internet Voting System Fatally Flawed in Creative New Ways.

Abstract: The recently published “MERGE” protocol is designed to be used in the prototype CAC-vote system. The voting kiosk and protocol transmit votes over the internet and then transmit voter-verifiable paper ballots through the mail. In the MERGE protocol, the votes transmitted over the internet are used to tabulate the results and determine the winners, but audits and recounts use the paper ballots that arrive in time. The enunciated motivation for the protocol is to allow (electronic) votes from overseas military voters to be included in preliminary results before a (paper) ballot is received from the voter. MERGE contains interesting ideas that are not inherently unsound; but to make the system trustworthy—to apply the MERGE protocol—would require major changes to the laws, practices, and technical and logistical abilities of U.S. election jurisdictions. The gap between theory and practice is large and unbridgeable for the foreseeable future. Promoters of this research project at DARPA, the agency that sponsored the research, should acknowledge that MERGE is internet voting (election results rely on votes transmitted over the internet except in the event of a full hand count) and refrain from claiming that it could be a component of trustworthy elections without sweeping changes to election law and election administration throughout the U.S.

Cryptogram The Scale of Geoblocking by Nation

Interesting analysis:

We introduce and explore a little-known threat to digital equality and freedom: websites geoblocking users in response to political risks from sanctions. U.S. policy prioritizes internet freedom and access to information in repressive regimes. Clarifying distinctions between free and paid websites, allowing trunk cables to repressive states, enforcing transparency in geoblocking, and removing ambiguity about sanctions compliance are concrete steps the U.S. can take to ensure it does not undermine its own aims.

The paper: “Digital Discrimination of Users in Sanctioned States: The Case of the Cuba Embargo”:

Abstract: We present one of the first in-depth and systematic end-user centered investigations into the effects of sanctions on geoblocking, specifically in the case of Cuba. We conduct network measurements on the Tranco Top 10K domains and complement our findings with a small-scale user study with a questionnaire. We identify 546 domains subject to geoblocking across all layers of the network stack, ranging from DNS failures to HTTP(S) response pages with a variety of status codes. Through this work, we discover a lack of user-facing transparency; we find 88% of geoblocked domains do not serve informative notice of why they are blocked. Further, we highlight a lack of measurement-level transparency, even among HTTP(S) blockpage responses. Notably, we identify 32 instances of blockpage responses served with 200 OK status codes, despite not returning the requested content. Finally, we note the inefficacy of current improvement strategies and make recommendations to both service providers and policymakers to reduce Internet fragmentation.

Cryptogram Secret Service Tracking People’s Locations without Warrant

This feels important:

The Secret Service has used a technology called Locate X which uses location data harvested from ordinary apps installed on phones. Because users agreed to an opaque terms of service page, the Secret Service believes it doesn’t need a warrant.

Cryptogram Steve Bellovin’s Retirement Talk

Steve Bellovin is retiring. Here’s his retirement talk, reflecting on his career and what the cybersecurity field needs next.

Planet DebianIan Jackson: The Rust Foundation's 2nd bad draft trademark policy

tl;dr: The Rust Foundation’s new trademark policy still forbids unapproved modifications: this would forbid both the Rust Community’s own development work(!) and normal Free Software distribution practices.

Background

In April 2023 I wrote about the Rust Foundation’s ham-fisted and misguided attempts to update the Rust trademark policy. This turned into drama.

The new draft

Recently, the Foundation published a new draft. It’s considerably less bad, but the most serious problem, which I identified last year, remains.

It prevents redistribution of modified versions of Rust, without pre-approval from the Rust Foundation. (Subject to some limited exceptions.) The people who wrote this evidently haven’t realised that distributing modified versions is how free software development works. Ie, the draft Rust trademark policy even forbids making a github branch for an MR to contribute to Rust!

It’s also very likely unacceptable to Debian. Rust is still on track to repeat the Firefox/Iceweasel debacle.

Below is a copy of my formal response to the consultation. The consultation closes at 07:59:00 UTC tomorrow (21st November), ie, at the end of today (Wednesday) US Pacific time, so if you want to reply, do so quickly.

My consultation response

Hi. My name is Ian Jackson. I write as a Rust contributor and as a Debian Developer with first-hand experience of Debian’s approach to trademarks. (But I am not a member of the Debian Rust Packaging Team.)

Your form invites me to state any blocking concerns. I’m afraid I have one:

PROBLEM

The policy on distributing modified versions of Rust (page 4, 8th bullet) is far too restrictive.

PROBLEM - ASPECT 1

On its face the policy forbids making a clone of the Rust repositories on a git forge, and pushing a modified branch there. That is publicly distributing a modified version of Rust.

I.e., the current policy forbids the Rust community’s own development workflow!

PROBLEM - ASPECT 2

The policy also does not meet the needs of Software-Freedom-respecting downstreams, including community Linux distributions such as Debian.

There are two scenarios (fuzzy, and overlapping) which provide a convenient framing to discuss this:

Firstly, in practical terms, Debian may need to backport bugfixes, or sometimes other changes. Sometimes Debian will want to pre-apply bugfixes or changes that have been contributed by users, and are intended eventually to go upstream, but are not included upstream in official Rust yet. This is a routine activity for a distribution. The policy, however, forbids it.

Secondly, Debian, as a point of principle, requires the ability to diverge from upstream if and when Debian decides that this is the right choice for Debian’s users. The freedom to modify is a key principle of Free Software. This includes making changes that the upstream project disapproves of. Some examples of this, where Debian has made changes, that upstream do not approve of, have included things like: removing user-tracking code, or disabling obsolescence “timebombs” that stop a particular version working after a certain date.

Overall, while alignment in values between Debian and Rust seems to be very good right now, modifiability is a matter of non-negotiable principle for Debian. The 8th bullet point on page 4 of the PDF does not give Debian (and Debian’s users) these freedoms.

POSSIBLE SOLUTIONS

Other formulations, or an additional permission, seem like they would be able to meet the needs of both Debian and Rust.

The first thing to recognise is that forbidding modified versions is probably not necessary to prevent language ecosystem fragmentation. Many other programming languages are distributed under fully Free Software licences without such restrictive trademark policies. (For example, Python; I’m sure a thorough survey would find many others.)

The scenario that would be most worrying for Rust would be “embrace - extend - extinguish”. In projects with a copyleft licence, this is not a concern, but Rust is permissively licenced. However, one way to address this would be to add an additional permission for modification that permits distribution of modified versions without permission, but if the modified source code is also provided, under the original Rust licence.

I suggest therefore adding the following 2nd sub-bullet point to the 8th bullet on page 4:

  • changes which are shared, in source code form, with all recipients of the modified software, and publicly licenced under the same licence as the official materials.

This means that downstreams who fear copyleft have the option of taking Rust’s permissive copyright licence at face value, but are limited in the modifications they may make, unless they rename. Conversely downstreams such as Debian who wish to operate as part of the Free Software ecosystem can freely make modifications.

It also, obviously, covers the Rust Community’s own development work.

NON-SOLUTIONS

Some upstreams, faced with this problem, have offered Debian a special permission: ie, said that it would be OK for Debian to make modifications that Debian wants to. But Debian will not accept any Debian-specific permissions.

Debian could of course rename their Rust compiler. Debian has chosen to rename in the past: infamously, a similar policy by Mozilla resulted in Debian distributing Firefox under the name Iceweasel for many years. This is a PR problem for everyone involved, and results in a good deal of technical inconvenience and makework.

“Debian could seek approval for changes, and the Rust Foundation would grant that approval quickly”. This is unworkable on a practical level - requests for permission do not fit into Debian’s workflow, and the resulting delays would be unacceptable. But, more fundamentally, Debian rightly insists that it must have the freedom to make changes that the Foundation do not approve of. (For example, if a future Rust shipped with telemetry features Debian objected to.)

“Debian and Rust could compromise”. However, Debian is an ideological as well as technological project. The principles I have set out are part of Debian’s Foundation Documents - they are core values for Debian. When Debian makes compromises, it does so very slowly and with great deliberation, using its slowest and most heavyweight constitutional governance processes. Debian is not likely to want to engage in such a process for the benefit of one programming language.

“Users will get Rust from upstream”. This is currently often the case. Right now, Rust is moving very quickly, and by Debian standards is very new. As Rust becomes more widely used, more stable, and more part of the infrastructure of the software world, it will need to become part of standard, stable, reliable, software distributions. That means Debian.

(The consultation was a Google Forms page with a single text field, so the formatting isn’t great. I have edited the formatting very lightly to avoid rendering bugs here on my blog.)




Worse Than FailureCodeSOD: Plugin Acrobatics

Once upon a time, web browsers weren't the one-stop-shop for all kinds of possible content that they are today. Aside from the most basic media types, your browser depended on content plugins to display different media types. Yes, there was an era where, if you wanted to watch a video in a web browser, you may need to have QuickTime or… (shudder) Real Player installed.

As a web developer, you'd need to write code to check which plugins were installed. If the user didn't have Adobe Acrobat Reader installed, there was no point in serving them up a PDF file; instead, you'd need to give them an install link.

Which brings us to Ido's submission. This code is intended to find the Acrobat Reader plugin version.

acrobatVersion: function GetAcrobatVersion() {
	// Check acrobat is Enabled or not and its version
	acrobatVersion = 0;
	if (navigator.plugins && navigator.plugins.length) {
		for (intLoop = 0; intLoop <= 15; intLoop++) {
			if (navigator.plugins[intLoop] != -1) {
				acrobatVersion = parseFloat(navigator.plugins[intLoop].version);
				isAcrobatInstalled = true;
				break;
			}
		}
	}
	else {...}
}

So, we start by checking for the navigator.plugins array. This is a wildly outdated thing to do, as the MDN is quite emphatic about, but I'm not going to get hung up on that- this code is likely old.

But what I do want to pay attention to is that they check navigator.plugins.length. Then they loop across the set of plugins using a for loop. And don't use the length! They bound the loop at 15, arbitrarily. Why? No idea- I suspect it's for the same reason they named the variable intLoop and not i like a normal human.

Then they check to ensure that the entry at plugins[intLoop] is not equal to -1. I'm not sure what the expected behavior was here- if you're accessing an array out of bounds in JavaScript, I'd expect it to return undefined. Perhaps some antique version of Internet Explorer did something differently? Sadly plausible.

Okay, we've found something we believe to be a plugin, because it's not -1, so we'll grab the version property off of it and… parseFloat. On a version number. Which ignores the fact that 1.1 and 1.10 are different versions. Version numbers, like phone numbers, are not actually numbers. We don't do arithmetic on them; treat them like text.

That done, we can say isAcrobatInstalled is true- despite the fact that we didn't check to see if this plugin was actually an Acrobat plugin. It could have been Flash. Or QuickTime.

Then we break out of the loop. A loop that, I strongly suspect, would only ever have one iteration, because undefined != -1.

So there we have it: code that doesn't do what it intends to, and even if it did, is doing it the absolute wrong way, and is also epically deprecated.
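
For contrast, here's a minimal sketch of what a saner check might have looked like. The function name and shape are my reconstruction, not a fix from the original codebase, and it still leans on the same long-deprecated navigator.plugins API (including the non-standard version property the original relied on):

function getAcrobatVersion() {
	// No plugins array at all? Then there's nothing to detect.
	if (!navigator.plugins || !navigator.plugins.length) {
		return null;
	}
	// Loop over the actual number of plugins, not an arbitrary 15.
	for (var i = 0; i < navigator.plugins.length; i++) {
		var plugin = navigator.plugins[i];
		// Confirm this plugin really is Acrobat before trusting it.
		if (plugin.name && plugin.name.indexOf("Adobe Acrobat") !== -1) {
			// Keep the version as a string: "1.1" and "1.10" differ.
			return plugin.version || null;
		}
	}
	return null;
}

Even that is only as good as the metadata the browser exposes; in a modern browser you'd skip the whole dance and check something like navigator.pdfViewerEnabled instead.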


365 TomorrowsBlack Hole Ethics

Author: David Barber Over the centuries it had become a tradition for the Immortal Emperor to wash away His guilt in ceremonies at the Schwarzschild radius. Since nothing escaped the event horizon of a black hole, awful secrets could be whispered there, cruelties, mistakes and bad karma consigned to the singularity, and history begun afresh. […]

The post Black Hole Ethics appeared first on 365tomorrows.

Planet DebianRussell Coker: Solving Spam and Phishing for Corporations

Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I’m currently working in the IT department of a medium sized company, and because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) and I am involved in fixing all Linux compatibility issues on it, I can fix most problems in a small fraction of the time it would take me on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it’s clear that a good deal of centralisation is appropriate.

Among people doing technical computer work such as programming, a large portion are computer hobbyists who like to fiddle with computers. But if the support system is run well, even they will appreciate having computers just work most of the time, and having someone immediately recognise a large portion of the failures, like the issues with NVidia drivers that I have documented, so that first line support can implement workarounds without the need for a lengthy investigation.

A big problem with email on the modern Internet is the prevalence of Phishing scams. The current corporate approach to this is to send out test Phishing email and then force computer security training on everyone who clicks on it. One problem with this is that attackers only need to fool one person on one occasion, and when you have hundreds of people occasionally doing something that’s not part of their core work, some of them will periodically get it wrong. When every test Phishing run finds several people who need extra training, it seems obvious to me that this isn’t a solution that’s working well. I will concede that the majority of people who click on the test Phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it’s still clear that this isn’t a great solution.

Let’s imagine for the sake of discussion that everyone in a company was 100% accurate at identifying Phishing email and other scam email, if that was the case would the problem be solved? I believe that even in that hypothetical case it would not be a solved problem due to the wasted time and concentration. People can spend minutes determining if a single email is legitimate. On many occasions I have had relatives and clients forward me email because they are unsure if it’s valid, it’s great that they seek expert advice when they are unsure about things but it would be better if they didn’t have to go to that effort. What we ideally want to do is centralise the anti-Phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or Phishing message is sent to 600 people in a company you don’t want 600 people to individually consider it, you want one person to recognise it and delete/block all 600. If 600 people each spend one minute considering the matter then that’s 10 work hours wasted!

The Rationale for Human Filtering

For personal email human filtering usually isn’t viable because people want privacy. But corporate email isn’t private, it’s expected that the company can read it under certain circumstances (in most jurisdictions) and having email open in public areas of the office where colleagues might see it is expected. You can visit gmail.com on your lunch break to read personal email but every company policy (and common sense) says to not have actually private correspondence on company systems.

The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company instead of 500 people each spending a couple of minutes working out whether it’s legit you have one person who’s good at recognising spam (because it’s their job) who clicks on a “remove mail from this sender from all mailboxes” button and 500 messages are deleted and the sender is blocked.

Delaying email would be a concern. It’s standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention. So human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company probably email to the senior people would never get noticeably delayed and while people like me would get their mail delayed on occasion people doing technical work generally don’t have notifications turned on for email because it’s a distraction and a fast response isn’t needed. There are a few senders where fast response is required, which is mostly corporations sending a “click this link within 10 minutes to confirm your password change” email. Setting up rules for all such senders that are relevant to work wouldn’t be difficult to do.

How to Solve This

Spam and Phishing became serious problems over 20 years ago, and we have had 20 years of evolution of email filtering which still hasn’t solved the problem. The vast majority of email addresses in use are run by major managed service providers, and they haven’t managed to filter out spam/phishing mail effectively, so I think we should assume that it’s not going to be solved by filtering. There is talk about what “AI” technology might do for filtering spam/phishing, but that same technology can produce better-crafted hostile email to avoid filters.

An additional complication for corporate email filtering is that some criteria that are used to filter personal email don’t apply to corporate mail. If someone sends email to me personally about millions of dollars then it’s obviously not legit. If someone sends email to a company then it could be legit. Companies routinely have people emailing potential clients about how their products can save millions of dollars and make purchases over a million dollars. This is not a problem that’s impossible to solve, it’s just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure their mail server to not accept direct external mail for any employee’s address. Then people could email files to colleagues etc without any restriction, but spam and phishing wouldn’t be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department), and you would have a team of people who would read those mailboxes and forward mail to the right people if it seemed legit. Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account, where it could be filtered by someone with a suitable security level without requiring any special configuration of the mail server. So the person who reads the it mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs).
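
As a toy illustration of that addressing scheme (my sketch, not part of any existing mail server or MUA), the routing logic is just a split on the local part of the address:

// Hypothetical routing for the dept+user scheme described above:
// the part before "+" selects the screening mailbox, and the part
// after it becomes the folder name within that mailbox.
function routeSubaddress(address) {
	var local = address.split("@")[0];
	var parts = local.split("+");
	if (parts.length === 2) {
		return { mailbox: parts[0], folder: parts[1] };
	}
	// No subaddress: deliver to the account's own inbox as usual.
	return { mailbox: local, folder: "INBOX" };
}

// routeSubaddress("it+russell.coker@example.com")
//   => { mailbox: "it", folder: "russell.coker" }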

For a FOSS implementation of such things the server side of it (including extracting account data from a directory to determine which department a user is in) would be about a day’s work and then an option would be to modify a webmail program to have extra functionality for approving senders and sending change requests to the server to automatically direct future mail from the same sender. As an aside I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email and adding certain automated messages to the allow-list.

The Change

One of the first things to do is to configure the system to add every recipient of an outbound message to the allow list for receiving a reply. Having a script go through the sent-mail folders of all accounts and add the recipients to the allow lists would be easy and would catch the common cases.
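
To give an idea of how little code such a script needs, here’s a rough Node.js sketch (the Maildir path and the deliberately naive header parsing are my assumptions; folded headers are ignored for brevity):

// Harvest recipient addresses from a user's sent-mail Maildir folder
// to seed an allow list. Maildir stores one message per file.
const fs = require("fs");
const path = require("path");

function recipientsOf(message) {
	// Headers end at the first blank line; pull addresses from To:/Cc:.
	const headers = message.split(/\r?\n\r?\n/)[0];
	const lines = headers.match(/^(To|Cc):.*$/gim) || [];
	return lines.join(" ").match(/[^\s<>,;"]+@[^\s<>,;"]+/g) || [];
}

function allowListFrom(sentDir) {
	const allow = new Set();
	for (const file of fs.readdirSync(sentDir)) {
		const msg = fs.readFileSync(path.join(sentDir, file), "utf8");
		for (const addr of recipientsOf(msg)) {
			allow.add(addr.toLowerCase());
		}
	}
	return allow;
}

// Example: print the harvested allow list for one user.
console.log([...allowListFrom("/home/example/Maildir/.Sent/cur")].join("\n"));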

But even with processing the sent-mail folders, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain-wide additions of all the sites that send password confirmation messages. You would need rules redirecting inbound mail sent to the old addresses to the new style, and then a huge amount of mail would need to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user then that’s 100 hours of work, about 12.5 work days. If you had everyone from the IT department, reception, and executive assistants working on it that would be viable. After about a week there wouldn’t be much work involved in maintaining it, and after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email then with 600 employees that’s 10 hours of wasted time per day. Effectively wasting one employee’s work! I’m sure that’s the low end of the range; 5 minutes average per day doesn’t seem unreasonable, especially when people are unsure about phishing email and send it to Slack so multiple employees spend time analysing it. So you could have the equivalent of 5 employees’ time wasted by hostile email, while avoiding that would take a few people a fraction of their time, adding up to less than an hour of total work per day.

Then there’s the training time for phishing mail. Instead of having every employee spend half an hour doing email security training every few months (that’s 300 hours or 7.5 working weeks every time you do it) you just train the few experts.

In addition to saving time there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.

Will They Do It?

They probably won’t do it any time soon. I don’t think it’s expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it’s probably regarded as too difficult to change anything and the costs aren’t obvious. For 30 years I have been unsuccessful in suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time.

Krebs on SecurityFintech Giant Finastra Investigating Data Breach

The financial technology firm Finastra is investigating the alleged large-scale theft of information from its internal file transfer platform, KrebsOnSecurity has learned. Finastra, which provides software and services to 45 of the world’s top 50 banks, notified customers of the security incident after a cybercriminal began selling more than 400 gigabytes of data purportedly stolen from the company.

London-based Finastra has offices in 42 countries and reported $1.9 billion in revenues last year. The company employs more than 7,000 people and serves approximately 8,100 financial institutions around the world. A major part of Finastra’s day-to-day business involves processing huge volumes of digital files containing instructions for wire and bank transfers on behalf of its clients.

On November 8, 2024, Finastra notified financial institution customers that on Nov. 7 its security team detected suspicious activity on Finastra’s internally hosted file transfer platform. Finastra also told customers that someone had begun selling large volumes of files allegedly stolen from its systems.

“On November 8, a threat actor communicated on the dark web claiming to have data exfiltrated from this platform,” reads Finastra’s disclosure, a copy of which was shared by a source at one of the customer firms.

“There is no direct impact on customer operations, our customers’ systems, or Finastra’s ability to serve our customers currently,” the notice continued. “We have implemented an alternative secure file sharing platform to ensure continuity, and investigations are ongoing.”

But its notice to customers does indicate the intruder managed to extract or “exfiltrate” an unspecified volume of customer data.

“The threat actor did not deploy malware or tamper with any customer files within the environment,” the notice reads. “Furthermore, no files other than the exfiltrated files were viewed or accessed. We remain focused on determining the scope and nature of the data contained within the exfiltrated files.”

In a written statement in response to questions about the incident, Finastra said it has been “actively and transparently responding to our customers’ questions and keeping them informed about what we do and do not yet know about the data that was posted.” The company also shared an updated communication to its clients, which said while it was still investigating the root cause, “initial evidence points to credentials that were compromised.”

“Additionally, we have been sharing Indicators of Compromise (IOCs) and our CISO has been speaking directly with our customers’ security teams to provide updates on the investigation and our eDiscovery process,” the statement continues. Here is the rest of what they shared:

“In terms of eDiscovery, we are analyzing the data to determine what specific customers were affected, while simultaneously assessing and communicating which of our products are not dependent on the specific version of the SFTP platform that was compromised. The impacted SFTP platform is not used by all customers and is not the default platform used by Finastra or its customers to exchange data files associated with a broad suite of our products, so we are working as quickly as possible to rule out affected customers. However, as you can imagine, this is a time-intensive process because we have many large customers that leverage different Finastra products in different parts of their business. We are prioritizing accuracy and transparency in our communications.

Importantly, for any customers who are deemed to be affected, we will be reaching out and working with them directly.”

On Nov. 8, a cybercriminal using the nickname “abyss0” posted on the English-language cybercrime community BreachForums that they’d stolen files belonging to some of Finastra’s largest banking clients. The data auction did not specify a starting or “buy it now” price, but said interested buyers should reach out to them on Telegram.

abyss0’s Nov. 7 sales thread on BreachForums included many screenshots showing the file directory listings for various Finastra customers. Image: Ke-la.com.

According to screenshots collected by the cyber intelligence platform Ke-la.com, abyss0 first attempted to sell the data allegedly stolen from Finastra on October 31, but that earlier sales thread did not name the victim company. However, it did reference many of the same banks called out as Finastra customers in the Nov. 8 post on BreachForums.

The original October 31 post from abyss0, where they advertise the sale of data from several large banks that are customers of a large financial software company. Image: Ke-la.com.

The October sales thread also included a starting price: $20,000. By Nov. 3, that price had been reduced to $10,000. A review of abyss0’s posts to BreachForums reveals this user has offered to sell databases stolen in several dozen other breaches advertised over the past six months.

The apparent timeline of this breach suggests abyss0 gained access to Finastra’s file sharing system at least a week before the company says it first detected suspicious activity, and that the Nov. 7 activity cited by Finastra may have been the intruder returning to exfiltrate more data.

Maybe abyss0 found a buyer who paid for their early retirement. We may never know, because this person has effectively vanished. The Telegram account that abyss0 listed in their sales thread appears to have been suspended or deleted. Likewise, abyss0’s account on BreachForums no longer exists, and all of their sales threads have since disappeared.

It seems improbable that both Telegram and BreachForums would have given this user the boot at the same time. The simplest explanation is that something spooked abyss0 enough for them to abandon a number of pending sales opportunities, in addition to a well-manicured cybercrime persona.

In March 2020, Finastra suffered a ransomware attack that sidelined a number of the company’s core businesses for days. According to reporting from Bloomberg, Finastra was able to recover from that incident without paying a ransom.

This is a developing story. Updates will be noted with timestamps. If you have any additional information about this incident, please reach out to krebsonsecurity @ gmail.com or at protonmail.com.

Planet DebianArnaud Rebillout: Installing an older Ansible version via pipx

Latest Ansible requires Python 3.8 on the remote hosts

... and therefore, hosts running Debian Buster are now unsupported.

Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:

$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:

ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]

Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.

How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.

Pipx to the rescue

TL;DR

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython    # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.

Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to setup Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.

First thing to know: pipx install ansible won't cut it, it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands.

The output should look something like this:

$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done! ✨ 🌟 ✨

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.

Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.

Now, let's open a new terminal and check if we're good:

$ which ansible
/home/me/.local/bin/ansible

$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.

What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.

Injecting Python dependencies needed by collections

Quickly enough, I realized something odd, apparently the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:

# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
  "msg": "An unhandled exception occurred while running the lookup plugin 'dig'.
  Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig
  lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:

$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done! ✨ 🌟 ✨

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.

Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there are more quirks and rough edges, drop me an email so that I can update this blog post.

Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/

,

Planet DebianAurelien Jarno: AI crawlers should be smarter

It would be fantastic if all those AI companies dedicated some time to making their web crawlers smarter (what about using AI?). Nowadays most of them still stupidly follow every link on a Git frontend.

Hint: Changing the display options does not provide more training data!

Planet DebianMelissa Wen: Display/KMS Meeting at XDC 2024: Detailed Report

XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.

Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.

Short on Time?

  1. Catch the lightning talk summarizing the meeting here (you can even speed it up 2x).
  2. For a quick written summary, scroll down to the TL;DR section.

TL;DR

This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting):

  • Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
  • Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling.
  • HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes.
  • Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
  • Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.

Bringing together Linux display developers in the XDC 2024

While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.

Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)

Link: https://indico.freedesktop.org/event/6/contributions/383/

Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference, mixing follow-ups from talks with new ideas and ongoing community efforts.

The final agenda covered five topics in the scheduled order:

  1. How to share drivers between V4L2 and DRM for bridge-like components (new topic);
  2. Real-time Scheduling (problems encountered after the Display Next hackfest);
  3. HDR/Color Management (ofc);
  4. Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
  5. (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).

Unpacking the Topics

Similar to the hackfest, the meeting agenda evolved over the conference. During the 3-hour meeting, I coordinated the room and the discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers to provide a detailed report of the many topics discussed.

From his notes, let’s dive into the key discussions!

How to share drivers between V4L2 and KMS for bridge-like components.

Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.

  • Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
  • Potential Solutions:
    1. Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
    2. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
    3. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.

Real-Time Scheduling Challenges

We discussed real-time scheduling during this year’s Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while progressing on this front.

  • Context: Non-blocking page-flips can, on rare occasions, take a long time and, for that reason, get a sigkill if the thread doing the atomic commit is real-time scheduled.
  • Action items:
    • Explore alternative backtraces during the busy wait (e.g., ftrace).
    • Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).

HDR/Color Management

This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in-person in conferences and meetings over the last years.

Here’s a breakdown of the key points raised at this meeting:

  • Talk: Color operations for Linux color pipeline on AMD devices: On the previous day, Alex Hung (AMD) presented the implementation of this API in the AMD display driver.
  • NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs.
  • Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now.
  • Upstream Patches: The relevant upstream patches can be found here. [As was humorously noted, this series is eagerly awaiting your “Acked-by” (approval).]
  • Compositor Side: The compositor developers have also made significant progress.
    • KDE has already implemented and validated the API through an experimental implementation in Kwin.
    • Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. AP: work on porting gamescope to KMS generic API
    • Weston has also begun exploring implementation, and we might see something from them by the end of the year.
  • Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s effort, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place.

Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!

Display Mux

During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.

  • Context:
  • Key Considerations:
    • HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here.
    • Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured?
    • Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
  • Action points:
    • Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
    • Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.

Towards Better Commit Failure Feedback

In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.

  • Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.

To address this issue, we discussed several potential improvements:

  • Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
  • Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
  • Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution.

By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.

A Big Thank You!

Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.

This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.

Stay tuned for future updates!

365 TomorrowsThe Minbar of Saladin

Author: Majoki “It was the most beautiful thing ever crafted.” “I’m sure it was, Akharini. But how can we steal it if it was destroyed almost seventy years ago?” Akharini stared at Nur. Though the hour was late and time was short, he wanted to tell him so much about the minbar of Saladin, of […]

The post The Minbar of Saladin appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Recursive Search

Sometimes, there's code so bad you simply know it's unused and never called. Bernard sends us one such method, in Java:

  /**
   * Finds a <code>GroupEntity</code> by group number.
   *
   * @param  group the group number.
   * @return the <code>GroupEntity</code> object.
   */
  public static GroupEntity find(String group) {
    return GroupEntity.find(group);
  }

This is a static method on the GroupEntity class called find, which calls a static method on the GroupEntity class called find, which calls a static method on the GroupEntity class called find and it goes on and on my friend.

Clearly, this is a mistake. Bernard didn't supply much more context, so perhaps the String was supposed to be turned into some other type, and there's an overload which would break the recursion. Regardless, there was an antediluvian ticket on the backlog requesting a feature to allow finding groups via a search input, which no one had yet worked on.

I'm sure they'll get around to it, once the first call finishes.


Cryptogram Why Italy Sells So Much Spyware

Interesting analysis:

Although much attention is given to sophisticated, zero-click spyware developed by companies like Israel’s NSO Group, the Italian spyware marketplace has been able to operate relatively under the radar by specializing in cheaper tools. According to an Italian Ministry of Justice document, as of December 2022 law enforcement in the country could rent spyware for €150 a day, regardless of which vendor they used, and without the large acquisition costs which would normally be prohibitive.

As a result, thousands of spyware operations have been carried out by Italian authorities in recent years, according to a report from Riccardo Coluccini, a respected Italian journalist who specializes in covering spyware and hacking.

Italian spyware is cheaper and easier to use, which makes it more widely used. And Italian companies have been in this market for a long time.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 14.2.0-1 on CRAN: New Upstream Minor

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1191 other packages on CRAN, downloaded 37.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 603 times according to Google Scholar.

Conrad released a minor version 14.2.0 a few days ago after we spent about two weeks with several runs of reverse-dependency checks covering corner cases. After a short delay at CRAN due to a false positive on a test, a package failing tests that had also failed under the previous version, and some concern over new deprecation warnings when using the headers directly (as e.g. the mlpack R package does), we are now on CRAN. I noticed a missing feature under the large ‘64bit word’ option (for large floating-point matrices) and added an exporter for icube going to double to support the 64-bit integer range (as we already did, of course, for vectors and matrices). Changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.0-1 (2024-11-16)

  • Upgraded to Armadillo release 14.2.0 (Smooth Caffeine)

    • Faster handling of symmetric matrices by inv() and rcond()

    • Faster handling of hermitian matrices by inv(), rcond(), cond(), pinv(), rank()

    • Added solve_opts::force_sym option to solve() to force the use of the symmetric solver

    • More efficient handling of compound expressions by solve()

  • Added exporter specialisation for icube for the ARMA_64BIT_WORD case

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David BrinSo, what lessons did we learn? And what does the future hold?

Amid the all the hand-wringing, or wailing jeremiads, or triumphant op-eds out there, I’ll offer in this election post-mortem some perspectives that you’ll not see elsewhere. 

      But, before that - some flash bits.

 

First: a few have suggested Biden resign to make Kamala president for a month or so. Other than shifting Trump v.2.0’s number from 47 to 48, it would only bias 2028 unnecessarily, by locking her in as heir. Nah.

 

Second. I reiterate, there is one thing that Joe Biden could do – right now – that would upset the DC apple cart, and (presumably) be very much not to the Trump-Putin party’s liking. Last week I laid out how Biden might still – even now - affect the USA and world. And human destiny.

Third flash bit … how about some prediction cred? That Donald Trump has learned to never again appoint professionals or adults to office. Nearly all grownups from Trump v.1 (over 250 of them) eventually denounced him. (That one fact alone should have decided the election.) Sure enough, his announced cabinet appointments are almost all unqualified maniacs. But there’s a triple purpose to that – which I’ll describe at the end.

 

But that’s not what you came here for, today. You’ve been wallowing in election post-mortems, agonizing over how it could have come to this. 

 

So, after that wallow in futility and clichés, would you like some fresh views that actually bear some relation to reality? Some may disturb you.

 

 

== So… um… W.T.H. just happened? ==

 

As VoteVets.org (Veterans dedicated to democracy and the rule of law) put it Wednesday: “Moving forward and regaining the initiative requires us to confront the results of this election with open eyes.”

 

Okay, for a start, it does no good to wail stuff like: “Americans chose a fascist dictatorship because trans kids are icky. And we hate the idea of a black woman being president.”

 

Um, not even close. Nor was Kamala Harris a ‘bad candidate’ (she was actually very good!) Nor was it because she ‘only had 107 days.’ Seriously? The campaign lasted forever!

 

Indeed, all over the globe (for the first time, ever), every governing party in a democracy lost vote share. So… maybe the Stupid-Ray Beamed By Aliens theory should be trotted back out? No, never mind that.

 

WTH actually happened?

 

Well, the demographics are clear. Major historically-Democratic groups simply did not show up, or else actively went to the GOP. While some Black men defected for reasons such as machismo, they were mostly loyal, in the end.


But Hispanics, far more crucially (and of both sexes), stayed home or switched sides. And will you drop the racist slander, claiming that they’re all sexist? The new president of Mexico is a woman, fer gosh sakes.

         

As for the trans thing, it was just one of so many hatred dog whistles. Useful to Fox and crappily/badly countered by the left. But it’s a side dish, compared to the Hispanic defection. 


Plus the fact that even white women split more evenly than expected.

 

Then what happened? 

 

TWO WORDS kind of sum it up! Two different one-word chants! Used repeatedly. One by each side.

 

For one side, that word was abortion.

 

Sure, the incredibly aggressive fascist putsch against Roe-v.-Wade and women’s Right-To-Choose was obscene and deserved to be deeply motivating. 

        Only then the Harris campaign transformed it from being a political gift by the Supreme Court into a liability. From being just one useful word into a million

Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion! And… Abortion!  (ad infinitum)

 

Dig it. All of the folks for whom that word was a deciding issue were already Kamala’s! Repeating it relentlessly and always - like an incantatory talisman - only made clear that hers would be a campaign exactly like Hillary Clinton’s -- led by and run by and aimed toward upper middle class white ladies. 

 

(And please? I voted for both Harris & Clinton and worked hard for them, in 2016, and again in 2024. We’re discussing tactics, here! And getting angry when failed tactics are criticized is a sure sign of a bad general.) 

 

Try asking outside that bell jar. After a while, each repetition (“abortion!!”) became tedious and hectoring to many. 


Especially to Hispanics, who -- may I remind you -- are mostly Catholics?

Who are capable of diverging from church doctrine… but did they need to be reminded of that cognitive dissonance hourly?

 

…Hispanic citizens who also proved very receptive to the other side’s talisman word. 

 

 ‘Immigration.’ 

 

This talisman worked to draw in support from fresh directions. Poll after poll shows Hispanics want tighter borders! Yet, urban liberals refused to listen. Pompously finger-wagging that both Jesus and Deuteronomy preach kindness (they do!) toward the flood of refugees who are fleeing Trump-allied elites in Honduras and Guatemala, they endlessly lectured and preached that the flood should ONLY be answered with kindness… and nothing but kindness.

 

… which is easy for you rich liberals to say. But Hispanic voters don’t want job competition. And your disapproval – calling them immoral when they shouted “No!” – helped to cement their shift.

 

 

== Immigration as a weapon vs. the West: It isn’t just America. ==

 

Did you ever wonder why right wing populism has surged in Europe?  Quasi-Nazism burgeoned there – and here – for one reason above all. Because Putin & pals have been driving refugees across western borders for 30 years, knowing that it’ll result – perfectly – in a rightward swerve of politics. 

 

You know this! It happened here. The tactic has now won Vladimir Putin the greatest victory of his life… that very likely saved his life! 

 

But you, yes you, have been unable to see it and draw two correct conclusions:

 

First: you can’t have everything you want, not all at once. Politics requires prioritization. And hence when Obama and Biden built more border walls than Trump ever did, they ought to have bragged about it! And you should have bragged, too.

        Again, you cannot do all the good things on your list without POWER! And now, sanctimoniously refusing to prioritize has given total power to…

 

Second: and here’s a potential sweet spot for you: Want to solve the immigration crisis in the best way, to benefit both us and the poor refugees? 

 

Go after the Honduran/Guatemalan/Nicaraguan/Cuban/Venezuelan elites who are persecuting their own citizens and driving them – coordinated by Moscow – to flee to the U.S.!

 

Um, duh? Joe could still do that in his remaining time. He could! 

     But a blind spot is a blind spot…

     … and even now, YOU probably could not parse or paraphrase what I just said. About the possible win-win sweet spot. Go ahead and try. Bet you can’t even come close.

 

How much simpler to dismiss Brin as racist. And thus illustrate my point.

 

 

== More lessons learned… or not learned? ==

 

Polls showed that ECONOMICS were far more on people’s minds than abortion. In fact, in almost every meaningful category, the USA has, right now, the best economy in the whole world and one of the very best since WWII.


Oh sure, saying that was difficult for Democrats. It comes across as lectury, pedantic and tone deaf to those working class folks who have no rising 401K, but do have high grocery bills. Or to young families staring at skyrocketing housing prices. Meanwhile, everyone is so accustomed to a great labor market that unemployment is a forgotten issue.

 

But does that mean to give up?

In fact, Kamala tried to get across this difficult perception gap by promising to end gouging at supermarkets and pharmacies and bragging about lowered insulin costs. But all of that seems to admit we failed, till now. So, maybe accompany all that with ads showing all the bridges and other infrastructure being rebuilt, at last, and asking “Do you know someone working at fixing America this way? Ask THEM about it!”

 

I found the rebirth of US manufacturing - the biggest boom since WWII – to be especially effective.  

 

As for housing costs, I never saw one attempt to blame it on real culprits - swarms of inheritance brats and their trusts who are snapping up houses right and left in cash purchases, free to ignore mortgage rates. I mean seriously?

 

Okay, I admit it’s hard to sell cynical working stiffs glued to Fox on the Good Economy. I won’t carp too much on that. Instead…

 

Of course, there’s so much anger around and someone is gonna receive it. So notice that the core Foxite campaign – pushed VASTLY more often than any message of racism or sexism – is to blame college folks, inciting hatred of them among non-college folks.

 

As I’ll say again, Kamala could have started changing this by pointing over there (as FDR did) at the real class enemies. The oligarchs who benefited from 40+ years of supply side and suck like lampreys from the hardworking U.S. middle class… both college and non-college.

 

 

 

== The insult that they deeply resent… repeated over and over again ==

 

Not one Democratic pol has ever pointed out that racism and sexism, while important ingredients in parts of the Red polity, are not their core agenda!  

 

Indeed, count how many of your friends and/or favorite pundits are ascribing this recent calamity to “embedded American racism and sexism!”

 

Sure, those despicable traits exist and matter a lot. And it’s easy for me to downgrade them when my life is in no danger because of a busted tail-light. 

 

Still, can you recognize an unhelpful mantra, when it is repeated way too much, crowding out all other thought?

 

As commentator Jamie Metzl put it: “There will be some people who awoke this morning telling themselves that the story of this election is primarily one of racism and misogyny. They are wrong. 

 

“Make no mistake, our country still harbors unacceptable levels of both, but that is not the story of this election. That is not who we are as a nation. We are the same nation that elected Barack Obama twice and would likely have elected Nikki Haley, had she been the Republican candidate. Very many women and minorities voted for Trump. We need to look deeper.”

 

Indeed, howling “You’re racist!” at our red neighbors was likely counterproductive. They turn and point at black faces on Fox… and at the now-boring normality of inter-racial marriages… and more recently at ‘normal gays’… and reply

 

“I don’t FEEL racist!  I LIKE the good ones!” 

 

 None of that means racism is over! But except for a nasty-Nazi wing, they have largely shifted on a lot of things. What it does mean is that a vast majority of Republicans feel insulated from their racism. 

 

It means that shrieking the R word over and over can be futile. It only makes your neighbors dig in and call you the race-obsessed oppressor.

 

 

==  The actual enemy ==

 

I mean, have you ever actually watched FOX and tallied their openly racist or even sexist crap… versus who it is they actually assail most often, and openly? Care for a side bet on this?

 

I’ve said it before and will pound this till you listen. While they downplay their own racism and sexism, what almost every MAGA Nuremberg Rally does feature is endless – and utterly open – attacks upon nerds.

 

The fact professions. From journalism to science to civil service to law to medicine to the FBI and the US military officer corps. 

        And NO Democrat has ever addressed that, head on! 

        Ever, ever, ever and ever.

        Instead of ever pointing this out, they assume that defending the fact professions will sound like smug bragging.

 

But they’re wrong. And there are reasons why the all-out assault on all fact-professions is the core agenda underlying everything Republican/MAGA/Putinist.

        And someday – as I mention below – this will at last be admitted openly. 

       Alas, too late, as beginning on day one of Trump v2 there will commence an all-out terror campaign against that officer corps, against the FBI and especially the United States Civil Service. And science.

       And when that happens, point proudly and tell your children: “I helped do that.”

 

 

== The giddy joy in Moscow ==

 

Oh, there are MAGAs who write to me – on social media etc. - taunting and gloating about all this. To which I reply: 

 

“Enjoy your victory. Your pleasure is as a firefly glow, next to the giddy ecstasy in the Kremlin.”

A few comments furthering that.

 

a.     Jill Stein deserves the Order of Lenin. She likely already has one in her secret Moscow luxury dacha.

 

b.    Recall the Four Scenarios I projected, last Sunday? As I predicted in scenario 4: if Trump wins convincingly, he will surround himself with loyalists and this time no ‘adults in the room.’ No Kelly, Mattis, Esper, Milley, or even hacks with a brain, like Barr or Tillerson or Pence.

  

c.     What no one else has mentioned - or will - is how this cuts all puppet strings to Trump. Nothing Putin has on him… no pee tape, or snuff film, or video with Epstein kids… will matter anymore. Nor will blackmail files on Cruz or many others. All – even Lindsey Graham – will have their “I could shoot someone on 5th Avenue” moment. And when that happens…



 

        …there is a very real chance that Trump will feel liberated to tell Vlad to fuck himself. Or even take revenge on Putin for decades of puppetting control threats. I have repeatedly asked folks to learn from the wisdom of Hollywood! From the last ten minutes of CABARET. From Angela Lansbury’s soliloquy in the last half hour of THE MANCHURIAN CANDIDATE. From the oligarchs’ last resort in NETWORK. 

 

But no. I am dreaming. Putin will retain control. Even if blackmail ceases to work, there’s flattery, which is far more important to DT than anything else in the world. And liberals insanely ignore that.

 

 

== How will America respond to this Confederate conquest? ==

 

 

One of you, in our lively comments section below, said: 

“Trump is who we are and we are not the great people we used to be.”



 

Malarkey. As I described here today, midway through this phase of the ever-recurring Civil War, it seems the Union side’s generals keep firing in mistaken directions. But do not look at this as “America is irredeemable.”

View it as “America has once again been conquered by the entirely separate Confederacy, the recurring romantic-feudalist cult that now has its Russian and other foreign allies. Actual America is now occupied by that other entity.”

But recall that actual America consists of almost all of the inhabitants of this land who know stuff. And if our mad cousins push too hard, mass resignations by the competent will only be the beginning.

And no – even though we are the ones who know everything cyber, bio, medical, nano, nuclear and all that – it won’t come to any of that.

Watch instead – if they go after the civil service and officer corps – for the words GENERAL STRIKE. And let’s see how they do without (among 10,000 other taken-for-granted things) their ten-day weather reports. Especially since the parts of North America hit hardest by climate Acts of God will be in the Southeast.

== Why are there zero 'adults' in the newest DT administration? ==

      I promised to explain why Trump's announced cabinet appointments are almost all unqualified maniacs. There’s a triple purpose:

1. This will maximize flattery (what Trump lives for), but this time he's only chosen appointees who are blackmailed and controllable – in some cases, Russian assets now appointed atop U.S. intel and defense agencies.

2. Unlike all the 'adults' in Trump v.1, this time every single person named will join in the coming all-out war against the FBI, military officer corps and U.S. Civil Service.

3. Unlike all the 'adults' in Trump v.1, none of these will ever denounce him in tell-all books.

== Side bits ==

Tariffs? Oh, dear oligarchs, try some wisdom from a surprising portion of Ferris Bueller’s Day Off!

John Cramer points out that Joe Biden, as part of the Great Big Infrastructure Rebuild, boosted access of poor and rural areas to high speed Internet... "There is evidence that better access to the many disinformation sites shifted many rural counties from pink to deep red."

Also Cramer: "Botched Trumpian responses made Covid far worse. (And the best way for you to begin using wager demands would be to demand cash bets over DEATH RATES for the vaccinated vs. un-vaccinated.) When COVID hit, Trump arranged to sign the big relief checks. Under Biden (who didn't sign the checks) this tapered too soon. Strapped voters remembered the 'good old days' when Trump sent checks and the grocery prices were lower." Hm, that seems a reach, but...

Above all I reiterate, there is one thing that Joe Biden could do – right now – that would upset the DC apple cart, and (presumably) be very much not to the Trump-Putin party’s liking. Last week I laid out how Biden might still – even now – affect the USA and world. And human destiny.

== So, what lessons did we learn? And what does the future hold? ==

Geez, you’re asking me? My predictive score is way above average, but I truly thought a critical mass of Americans would reject an orange-painted, makeup-slathered raving-loony carnival barker. I was wrong about that…

… but it only shows how stoopid so many millions of sincerely generous and college-educated Americans are, for assuming they know who the gone-treasonously-mad right is oppressing.

Wake up, educated gals & guys and gays and every other variant under the sun. It’s not your diversity they are coming after. Nor the client races and genders you defend, many of whom just said ‘fuck off!’ to being protected by you.

The oligarchs and their minions have one enemy they aim to destroy.

It’s you.

Planet DebianC.J. Collier: Managing HPE SAS Controllers

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.

hpacucli is the wrong command. Use ssacli instead.

$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
  882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
  57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
  476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}" 
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)

pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1

pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25 
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
  | sudo dd of=/etc/apt/sources.list.d/hpe.list status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature  (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST    HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238
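
Two more commands I reach for when swapping a failed drive – jotted from memory, so verify the syntax against your ssacli version's help before trusting it:

$ sudo ssacli ctrl slot=3 pd 2I:2:7 modify led=on     # blink the locate LED on that bay
$ sudo ssacli ctrl slot=3 pd 2I:2:7 modify led=off
$ sudo ssacli ctrl slot=3 ld 1 show                   # logical drive status, incl. rebuild progress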

Planet DebianPhilipp Kern: debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.

This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.

As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine or to a lot of machines at once that should be a nice compromise of usability vs. security.

The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.

SomeoneTM should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work an ecdsa key - and make a backup copy of the resulting on-disk key file. And copy that around to other devices (or OSes) that require access to the key.
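
For reference, the ssh-keygen invocations involved look roughly like this – a sketch, with file names, identities and validity periods chosen purely for illustration:

$ # Non-resident SK-backed key; ed25519-sk needs a FIDO2 token, ecdsa-sk also works on U2F-only keys
$ ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
$ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
$ # Resident key, stored on the token itself so it can be loaded on other devices
$ ssh-keygen -t ed25519-sk -O resident
$ # On the CA side, a short-lived certificate is just a signed public key
$ ssh-keygen -s ca_key -I philipp -n philipp -V +12h id_ed25519_sk.pub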

Cryptogram Most of 2023’s Top Exploited Vulnerabilities Were Zero-Days

Zero-day vulnerabilities are more commonly used, according to the Five Eyes:

Key Findings

In 2023, malicious cyber actors exploited more zero-day vulnerabilities to compromise enterprise networks compared to 2022, allowing them to conduct cyber operations against higher-priority targets. In 2023, the majority of the most frequently exploited vulnerabilities were initially exploited as a zero-day, which is an increase from 2022, when less than half of the top exploited vulnerabilities were exploited as a zero-day.

Malicious cyber actors continue to have the most success exploiting vulnerabilities within two years after public disclosure of the vulnerability. The utility of these vulnerabilities declines over time as more systems are patched or replaced. Malicious cyber actors find less utility from zero-day exploits when international cybersecurity efforts reduce the lifespan of zero-day vulnerabilities.

Worse Than FailureCodeSOD: Objectified

Simon recently found himself working alongside a "very senior" developer- who had a whopping 5 years of experience. This developer was also aggrieved that in recent years, Object Oriented programming had developed a bad reputation. "Functional this, functional that, people really just don't understand how clean and clear objects make your code."

For example, here are a few Java objects which they wrote to power a web scraping tool:

class UrlHolder {

    private String url;

    public UrlHolder(String url) {
        this.url = url;
    }
}

class UrlDownloader {

    private UrlHolder url;
    public String downloadPage;

    public UrlDownLoader(String url) {
        this.url = new UrlHolder(Url);
    }
}

class UrlLinkExtractor {

   private UrlDownloader url;

   public UrlLinkExtractor(UrlDownloader url) {
        this.url = url;
   }

   public String[] extract() {
       String page = Url.downloadPage;
       ...
   }
}

UrlHolder is just a wrapper around string, but also makes that string private and provides no accessors. Anything shoved into an instance of that may as well be thrown into oblivion.

UrlDownloader wraps a UrlHolder, again, as a private member with no accessors. It also has a random public string called downloadPage.

UrlLinkExtractor wraps a UrlDownloader, and at least UrlLinkExtractor has a function- which presumably downloads the page. It uses UrlDownloader#downloadPage- the public string property. It doesn't use the UrlHolder, because of course it couldn't. The entire goal of this code is to pass a string to the extract function.

I guess I don't understand object oriented programming. I thought I did, but after reading this code, I don't.
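
For contrast, here is a minimal sketch of what this code seems to be reaching for – the names and the elided parsing are invented for illustration:

class LinkExtractor {

    // Fetch the page at 'url' and return the links found in it.
    static String[] extractLinks(String url) {
        String page = downloadPage(url);
        // ... parse 'page' for links; returning none keeps the sketch compilable ...
        return new String[0];
    }

    // Placeholder for the actual HTTP fetch.
    private static String downloadPage(String url) {
        return "";
    }
}

One class, two methods, and not a single string swallowed by an accessor-free wrapper.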

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsOn the Slagpiles of Mars

Author: Julian Miles, Staff Writer The wind’s picking up, keening between the stacks. If it gets any stronger, we’ll have to retreat. “Alpha Seven, your favourite scout’s offline.” “How long?” “Nearly ten minutes.” Switching myself to wide-hail, I call out. “Team Seven, Scully’s dropped out. Who had last contact?” There’s rapid chatter back and forth. […]

The post On the Slagpiles of Mars appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Delilah Green Doesn't Care

Review: Delilah Green Doesn't Care, by Ashley Herring Blake

Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374

Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect.

Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes.

Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like.

Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her.

I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down.

It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into.

This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules.

This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate.

The encounter between Delilah and Claire in the bar near the start of the book is the cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other.

This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience.

I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth.

Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other.

Followed by Astrid Parker Doesn't Fail.

Rating: 9 out of 10

,

Planet DebianRuss Allbery: Review: Dark Deeds

Review: Dark Deeds, by Michelle Diener

Series: Class 5 #2
Publisher: Eclipse
Copyright: January 2016
ISBN: 0-6454658-4-4
Format: Kindle
Pages: 340

Dark Deeds is the second book of the self-published Class 5 science fiction romance series. It is a sequel to Dark Horse and will spoil the plot of that book, but it follows the romance series convention of switching to a new protagonist in the same universe and telling a loosely-connected story.

Fiona, like Rose in the previous book, was kidnapped by the Tecran in one of their Class 5 ships, although that's not entirely obvious at the start of the story. The book opens with her working as a slave on a Garmman trading ship while its captain works up the nerve to have her killed. She's spared this fate when the ship is raided by Krik pirates. Some brave fast-talking, and a touch of honor among thieves, lets her survive the raid and be rescued by a pursuing Grih battleship, with a useful electronic gadget as a bonus.

The author uses the nickname "Fee" for Fiona throughout this book and it was like nails on a chalkboard every time. I had to complain about that before getting into the review.

If you've read Dark Horse, you know the formula: lone kidnapped human woman, major violations of the laws against mistreatment of sentient beings that have the Grih furious on her behalf, hunky Grih starship captain who looks like a space elf, all the Grih are fascinated by her musical voice, she makes friends with a secret AI... Diener found a formula that worked well enough that she tried it again, and it would not surprise me if the formula repeated through the series. You should not go into this book expecting to be surprised.

That said, the formula did work the first time, and it largely does work again. I thoroughly enjoyed Dark Horse and wanted more, and this is more, delivered on cue. There are worse things, particularly if you're a Kindle Unlimited reader (I am not) and are therefore getting new installments for free. The Tecran fascination with kidnapping human women is explained sufficiently in Fiona's case, but I am mildly curious how Diener will keep justifying it through the rest of the series. (Maybe the formula will change, but I doubt it.)

To give Diener credit, this is not a straight repeat of the first book. Fiona is similar to Rose but not identical; Rose had an unshakable ethical calm, and Fiona is more of a scrapper. The Grih are not stupid and, given the amount of chaos Rose unleashed in the previous book, treat the sudden appearance of another human woman with a great deal more caution and suspicion. Unfortunately, this also means far less of my favorite plot element of the first book: the Grih being constantly scandalized and furious at behavior the protagonist finds sadly unsurprising.

Instead, this book has quite a bit more action. Dark Horse was mostly character interactions and tense negotiations, with most of the action saved for the end. Dark Deeds replaces a lot of the character work with political plots and infiltrating secret military bases and enemy ships. The AI (named Eazi this time) doesn't show up until well into the book and isn't as much of a presence as Sazo. Instead, there's a lot more of Fiona being drafted into other people's fights, which is entertaining enough while it's happening but which wasn't as delightful or memorable as Rose's story.

The writing continues to be serviceable but not great. It's a bit cliched and a bit awkward.

Also, Diener uses paragraph breaks for emphasis.

It's hard to stop noticing it once you see it.

Thankfully, once the story gets going and there's more dialogue, she tones that down, or perhaps I stopped noticing. It's that kind of book (and that kind of series): it's a bit rough to get started, but then there's always something happening, the characters involve a whole lot of wish-fulfillment but are still people I like reading about, and it's the sort of unapologetic "good guys win" type of light science fiction that is just the thing when one simply wants to be entertained. Once I get into the book, it's easy to overlook its shortcomings.

I spent Dark Horse knowing roughly what would happen but wondering about the details. I spent Dark Deeds fairly sure of the details and wondering when they would happen. This wasn't as fun of an experience, but the details were still enjoyable and I don't regret reading it. I am hoping that the next book will be more of a twist, or will have a character more like Rose (or at least a character with a better nickname). Sort of recommended if you liked Dark Horse and really want more of the same.

Followed by Dark Minds, which I have already purchased.

Rating: 6 out of 10

365 TomorrowsThe Spark

Author: Alastair Millar The Government Men arrived in the early morning, before Papa had even left for work. Mama, crying, sat in the kitchen listening to the voices in the living room; it was only her restraining hand that prevented her daughter Cassie, home for college vacation, from storming in to join the discussion. “You […]

The post The Spark appeared first on 365tomorrows.

,

365 TomorrowsTerminal Lucidity

Author: Don Nigroni Yesterday on Christmas Day, I was at my filthy rich, albeit eccentric, uncle’s house. And that’s when and where everything went awry. After dinner, he took me aside to his library to enjoy a cigar and a tawny port. “We know our current materialistic paradigm is pure garbage, yet we still cling […]

The post Terminal Lucidity appeared first on 365tomorrows.

,

365 TomorrowsBlood In The Water

Author: Nell Carlson The girl died. Normally, that would have been the end of it. Thousands of people died every day and millions had died in The Culling and nothing especially unusual happened afterwards. But the girl had died on the black river at the same time millions of people had been praying in remembrance […]

The post Blood In The Water appeared first on 365tomorrows.

Worse Than FailureError'd: Tangled Up In Blue

...Screens of Death. Photos of failures in kiosk-mode always strike me as akin to the wizard being exposed behind his curtain. Yeah, that shiny thing is after all just some Windows PC on a stick. Here are a few that aren't particularly recent, but they're real.

Jared S. augurs ill: "Seen in downtown Mountain View, CA: In Silicon Valley AI has taken over. There is no past, there is no future, and strangely, even the present is totally buggered. However, you're free to restore the present if you wish."


Windows crashed Maurizio De Cecco's party and he is vexé. "Some OS just doesn’t belong in the Parisian nightlife," he grumbled. But neither does pulled pork barbecue and yet there it is.


Máté cut Windows down cold. "Looks like the glaciers are not the only thing frozen at Matterhorn Glacier Paradise..."


Thomas found an installer trying to apply updates "in the Northwestern University's visitor welcome center, right smack in the middle of a nine-screen video display. I can only imagine why they might have iTunes or iCloud installed on their massive embedded display." I certainly can't.


Finally, Charles T. found a fast-food failure and was left entirely wordless. And hungry.


[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

Krebs on SecurityAn Interview With the Target & Home Depot Hacker

In December 2023, KrebsOnSecurity revealed the real-life identity of Rescator, the nickname used by a Russian cybercriminal who sold more than 100 million payment cards stolen from Target and Home Depot between 2013 and 2014. Moscow resident Mikhail Shefel, who confirmed using the Rescator identity in a recent interview, also admitted reaching out because he is broke and seeking publicity for several new money making schemes.

Mikhail “Mike” Shefel’s former Facebook profile. Shefel has since legally changed his last name to Lenin.

Mr. Shefel, who recently changed his legal surname to Lenin, was the star of last year’s story, Ten Years Later, New Clues in the Target Breach. That investigation detailed how the 38-year-old Shefel adopted the nickname Rescator while working as vice president of payments at ChronoPay, a Russian financial company that paid spammers to advertise fake antivirus scams, male enhancement drugs and knockoff pharmaceuticals.

Mr. Shefel did not respond to requests for comment in advance of that December 2023 profile. Nor did he respond to reporting here in January 2024 that he ran an IT company with a 34-year-old Russian man named Aleksandr Ermakov, who was sanctioned by authorities in Australia, the U.K. and U.S. for stealing data on nearly 10 million customers of the Australian health insurance giant Medibank.

But not long after KrebsOnSecurity reported in April that Shefel/Rescator also was behind the theft of Social Security and tax information from a majority of South Carolina residents in 2012, Mr. Shefel began contacting this author with the pretense of setting the record straight on his alleged criminal hacking activities.

In a series of live video chats and text messages, Mr. Shefel confirmed he indeed went by the Rescator identity for several years, and that he did operate a slew of websites between 2013 and 2015 that sold payment card data stolen from Target, Home Depot and a number of other nationwide retail chains.

Shefel claims the true mastermind behind the Target and other retail breaches was Dmitri Golubov, an infamous Ukrainian hacker known as the co-founder of Carderplanet, among the earliest Russian-language cybercrime forums focused on payment card fraud. Mr. Golubov could not be reached for comment, and Shefel says he no longer has the laptop containing evidence to support that claim.

Shefel asserts he and his team were responsible for developing the card-stealing malware that Golubov’s hackers installed on Target and Home Depot payment terminals, and that at the time he was technical director of a long-running Russian cybercrime community called Lampeduza.

“My nickname was MikeMike, and I worked with Dmitri Golubov and made technologies for him,” Shefel said. “I’m also godfather of his second son.”

Dmitri Golubov, circa 2005. Image: U.S. Postal Investigative Service.

A week after breaking the story about the 2013 data breach at Target, KrebsOnSecurity published Who’s Selling Cards from Target?, which identified a Ukrainian man who went by the nickname Helkern as Rescator’s original identity. But Shefel claims Helkern was subordinate to Golubov, and that he was responsible for introducing the two men more than a decade ago.

“Helkern was my friend, I [set up a] meeting with Golubov and him in 2013,” Shefel said. “That was in Odessa, Ukraine. I was often in that city, and [it’s where] I met my second wife.”

Shefel claims he made several hundred thousand dollars selling cards stolen by Golubov’s Ukraine-based hacking crew, but that not long after Russia annexed Crimea in 2014 Golubov cut him out of the business and replaced Shefel’s malware coding team with programmers in Ukraine.

Golubov was arrested in Ukraine in 2005 as part of a joint investigation with multiple U.S. federal law enforcement agencies, but his political connections in the country ensured his case went nowhere. Golubov later earned immunity from prosecution by becoming an elected politician and founding the Internet Party of Ukraine, which called for free internet for all, the creation of country-wide “hacker schools” and the “computerization of the entire economy.”

Mr. Shefel says he stopped selling stolen payment cards after being pushed out of the business, and invested his earnings in a now-defunct Russian search engine called tf[.]org. He also apparently ran a business called click2dad[.]net that paid people to click on ads for Russian government employment opportunities.

When those enterprises fizzled out, Shefel reverted to selling malware coding services for hire under the nickname “Getsend“; this claim checks out, as Getsend for many years advertised the same Telegram handle that Shefel used in our recent chats and video calls.

Shefel acknowledged that his outreach was motivated by a desire to publicize several new business ventures. None of those will be mentioned here because Shefel is already using my December 2023 profile of him to advertise what appears to be a pyramid scheme, and to remind others within the Russian hacker community of his skills and accomplishments.

Shefel says he is now flat broke, and that he currently has little to show for a storied hacking career. The Moscow native said he recently heard from his ex-wife, who had read last year’s story about him and was suddenly wondering where he’d hidden all of his earnings.

More urgently, Shefel needs money to stay out of prison. In February, he and Ermakov were arrested on charges of operating a short-lived ransomware affiliate program in 2021 called Sugar (a.k.a. Sugar Locker), which targeted single computers and end-users instead of corporations. Shefel is due to face those charges in a Moscow court on Friday, Nov. 15, 2024. Ermakov was recently found guilty and given two years probation.

Shefel claims his Sugar ransomware affiliate program was a bust, and never generated any profits. Russia is known for not prosecuting criminal hackers within its borders who scrupulously avoid attacking Russian businesses and consumers. When asked why he now faces prosecution over Sugar, Shefel said he’s certain the investigation was instigated by  Pyotr “Peter” Vrublevsky — the son of his former boss at ChronoPay.

ChronoPay founder and CEO Pavel Vrublevsky was the key subject of my 2014 book Spam Nation, which described his role as head of one of Russia’s most notorious criminal spam operations.

Vrublevsky Sr. recently declared bankruptcy, and is currently in prison on fraud charges. Russian authorities allege Vrublevsky operated several fraudulent SMS-based payment schemes. They also accused Vrublevsky of facilitating money laundering for Hydra, the largest Russian darknet market at the time. Hydra trafficked in illegal drugs and financial services, including cryptocurrency tumbling for money laundering, exchange services between cryptocurrency and Russian rubles, and the sale of falsified documents and hacking services.

However, in 2022 KrebsOnSecurity reported on a more likely reason for Vrublevsky’s latest criminal charges: He’d been extensively documenting the nicknames, real names and criminal exploits of Russian hackers who worked with the protection of corrupt officials in the Russian Federal Security Service (FSB), and operating a Telegram channel that threatened to expose alleged nefarious dealings by Russian financial executives.

Shefel believes Vrublevsky’s son Peter paid corrupt cops to levy criminal charges against him after reporting the youth to Moscow police, allegedly for walking around in public with a loaded firearm. Shefel says the Russian authorities told the younger Vrublevsky that he had lodged the firearms complaint.

In July 2024, the Russian news outlet Izvestia published a lengthy investigation into Peter Vrublevsky, alleging that the younger son took up his father’s mantle and was responsible for advertising Sprut, a Russian-language narcotics bazaar that sprang to life after the Hydra darknet market was shut down by international law enforcement agencies in 2022.

Izvestia reports that Peter Vrublevsky was the advertising mastermind behind this 3D ad campaign and others promoting the Russian online narcotics bazaar Sprut.

Izvestia reports that Peter Vrublevsky is currently living in Switzerland, where he reportedly fled in 2022 after being “arrested in absentia” in Russia on charges of running a violent group that could be hired via Telegram to conduct a range of physical attacks in real life, including firebombings and muggings.

Shefel claims his former partner Golubov was involved in the development and dissemination of early ransomware strains, including Cryptolocker, and that Golubov remains active in the cybercrime community.

Meanwhile, Mr. Shefel portrays himself as someone who is barely scraping by with the few odd coding jobs that come his way each month. Incredibly, the day after our initial interview via Telegram, Shefel proposed going into business together.

By way of example, he suggested maybe a company centered around recovering lost passwords for cryptocurrency accounts, or perhaps a series of online retail stores that sold cheap Chinese goods at a steep markup in the United States.

“Hi, how are you?” he inquired. “Maybe we can open business?”

,

Sociological ImagesWho’s Not Cool With AC?

This past summer was hot, hotter than it used to be, and this is causing a lot of new challenges for work, infrastructure, our social lives, and our health. Air conditioning was back in style and even a new public policy, with more cities working to require that landlords provide it as a basic part of a habitable apartment.

Of course the stakes are much higher than just a new AC unit. Sociologists have long known that unequal heat exposure is a serious challenge to our collective health and social wellbeing. Eric Klinenberg’s famous study of the 1995 Chicago heatwave, for example, found that social isolation was a key factor in explaining why people were vulnerable to heat sickness and even death, because they didn’t have places to go or people to check in on them to stay cool. Recent work has linked excessive heat to deaths among people who are incarcerated and learning loss in schools. Heat risks are unevenly distributed in our society, and so addressing the risks of a warmer planet is going to require expanded access to building cooling and air conditioning.

The challenge is that the status of air conditioning is changing. Heat has long been considered a necessity for safe, healthy living – often part of the basic, legal requirements for habitable homes on the rental market across the country. But states are much more inconsistent about whether they require air conditioning, which is often marketed to the general public as a “luxury good.” Look at any vintage ad for AC and you’ll find wealthy, well-dressed homeowners splurging on a new system that lets you wear a suit inside.

Do people today actually support aid to help others access cooling? In a new study recently published in Socius, I investigated this with an original survey experiment. In a sample of 1200 respondents drawn from Prolific, I asked about support for government utility assistance programs for people with lower incomes. The questions had a key difference: some respondents got a question about utility assistance in general, some got a question specifically about home heating, and some got a question specifically about home air conditioning.

Support for the heating question was the strongest on average, in line with the theory that we see heating as a necessity. Air conditioning received the lowest support, however, significantly different from both heat and general utility assistance in the sample. To make sure these results held, I went back to Prolific and sampled more Black and Hispanic respondents to repeat the experiment. The strongest results in these tests came from white respondents.

Why might this be the case? We have long known that attitudes about social welfare programs of all kinds are tied up with race. Research finds these differences because of stereotypical thinking – some people are deeply concerned that others who receive aid need to “deserve” it by working hard and only using aid on necessities, not luxuries. We also know that these beliefs are often linked to racial stereotypes. Previous work on food stamps, disaster relief, guaranteed income, and other social aid programs often finds these social forces at work.

These results show that stereotypical thinking about who “deserves” help may be an important public policy hurdle as we work on adapting to climate change. As policymakers face an increasing need for adequate cooling to address public health issues, they will need to account for the fact that the public may still be thinking of air conditioning as a luxury or comfort good. Making policy to survive climate change requires updating our thinking about the status of goods necessary to weather the crisis.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow his work at his website, or on BlueSky.

(View original at https://thesocietypages.org/socimages)

Cryptogram Subverting LLM Coders

Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection“:

Abstract: Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To address this critical security challenge, we introduce CODEBREAKER, a pioneering LLM-assisted backdoor attack framework on code completion models. Unlike recent attacks that embed malicious payloads in detectable or irrelevant sections of the code (e.g., comments), CODEBREAKER leverages LLMs (e.g., GPT-4) for sophisticated payload transformation (without affecting functionalities), ensuring that both the poisoned data for fine-tuning and generated code can evade strong vulnerability detection. CODEBREAKER stands out with its comprehensive coverage of vulnerabilities, making it the first to provide such an extensive set for evaluation. Our extensive experimental evaluations and user studies underline the strong attack performance of CODEBREAKER across various settings, validating its superiority over existing approaches. By integrating malicious payloads directly into the source code with minimal transformation, CODEBREAKER challenges current security measures, underscoring the critical need for more robust defenses for code completion.

Clever attack, and yet another illustration of why trusted AI is essential.

Cryptogram AIs Discovering Vulnerabilities

I’ve been writing about the possibility of AIs automatically discovering code vulnerabilities since at least 2018. This is an ongoing area of research: AIs doing source code scanning, AIs finding zero-days in the wild, and everything in between. The AIs aren’t very good at it yet, but they’re getting better.

Here’s some anecdotal data from this summer:

Since July 2024, ZeroPath is taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing (SAST) tools were ill-equipped to find. This post provides a technical deep-dive into our research methodology and a living summary of the bugs found in popular open-source tools.

Expect lots of developments in this area over the next few years.

This is what I said in a recent interview:

Let’s stick with software. Imagine that we have an AI that finds software vulnerabilities. Yes, the attackers can use those AIs to break into systems. But the defenders can use the same AIs to find software vulnerabilities and then patch them. This capability, once it exists, will probably be built into the standard suite of software development tools. We can imagine a future where all the easily findable vulnerabilities (not all the vulnerabilities; there are lots of theoretical results about that) are removed in software before shipping.

When that day comes, all legacy code would be vulnerable. But all new code would be secure. And, eventually, those software vulnerabilities will be a thing of the past. In my head, some future programmer shakes their head and says, “Remember the early decades of this century when software was full of vulnerabilities? That’s before the AIs found them all. Wow, that was a crazy time.” We’re not there yet. We’re not even remotely there yet. But it’s a reasonable extrapolation.

EDITED TO ADD: And Google’s LLM just discovered an exploitable zero-day.

Cryptogram Good Essay on the History of Bad Password Policies

Stuart Schechter makes some good points on the history of bad password policies:

Morris and Thompson’s work brought much-needed data to highlight a problem that lots of people suspected was bad, but that had not been studied scientifically. Their work was a big step forward, if not for two mistakes that would impede future progress in improving passwords for decades.

First was Morris and Thompson’s confidence that their solution, a password policy, would fix the underlying problem of weak passwords. They incorrectly assumed that if they prevented the specific categories of weakness that they had noted, the result would be something strong. After implementing a requirement that passwords have multiple character sets or more total characters, they wrote:

These improvements make it exceedingly difficult to find any individual password. The user is warned of the risks and if he cooperates, he is very safe indeed.

As should be obvious now, a user who chooses “p@ssword” to comply with policies such as those proposed by Morris and Thompson is not very safe indeed. Morris and Thompson assumed their intervention would be effective without testing its efficacy, considering its unintended consequences, or even defining a metric of success to test against. Not only did their hunch turn out to be wrong, but their second mistake prevented anyone from proving them wrong.

That second mistake was convincing sysadmins to hash passwords, so there was no way to evaluate how secure anyone’s password actually was. And it wasn’t until hackers started stealing and publishing large troves of actual passwords that we got the data: people are terrible at generating secure passwords, even with rules.

Planet DebianReproducible Builds: Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announces it has lost its founding member.

Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. Many of our earliest status reports were written by him and many of our key tools in use today are based on his design.

Lunar was a resolute opponent of surveillance and censorship, and he possessed an unwavering energy that fueled his work on Reproducible Builds and Tor. Without Lunar’s far-sightedness, drive and commitment to enabling teams around him, Reproducible Builds and free software security would not be in the position it is in today. His contributions will not be forgotten, and his high standards and drive will continue to serve as an inspiration to us as well as for the other high-impact projects he was involved in.

Lunar’s creativity, insight and kindness were often noted. He will be greatly missed.


Other tributes:

Planet DebianStefano Zacchiroli: In memory of Lunar

In memory of Lunar

I've had the incredible fortune to share the geek path of Lunar through life on multiple occasions. First, in Debian, beginning some 15+ years ago, where we were fellow developers and participated in many DebConf editions together.

Then, on the deontology committee of Nos Oignons, a non-profit organization initiated by Lunar to operate Tor relays in France. This was with the goal of diversifying relay operators and increasing access to censorship-resistance technology for everyone in the world. It was something truly innovative and unheard of at the time in France.

Later, as a member of the steering committee of Reproducible Builds, a project that Lunar brought to widespread geek popularity with a seminal "Birds of a Feather" session at DebConf13 (and then many other talks with fellow members of the project in the years to come). A decade later, Reproducible Builds is having a major impact throughout the software industry, primarily due to growing fears about the security of the software supply chain.

Finally, we had the opportunity to recruit Lunar a couple of years ago at Software Heritage, where he insisted on working until he was able to, as part of a team he loved, and that loved him back. In addition to his numerous technical contributions to the initiative, he also facilitated our first ever multi-day team seminar. The event was so successful that it has been confirmed as a long-awaited yearly recurrence by all team members.

I fondly remember one of the last conversations I had with Lunar, a few months ago, when he told me how proud he was not only of having started Nos Oignons and contributed to the ignition of Reproducible Builds, but specifically about the fact that both initiatives were now thriving without being dependent on him. He was likely thinking about a future world without him, but also realizing how impactful his activism had been on the past and present world.

Lunar changed the world for the better and left behind a trail of love and fond memories.

Che la terra ti sia lieve, compagno. (May the earth lie light upon you, comrade.)

--- Zack

365 TomorrowsDown Under

Author: Beck Dacus Each time the floor shuddered, all our chains rang like windchimes. The shackles around my ankles were linked to the wrists of the “inmate” behind me, on and on in a long line of us marching forward. As I stumbled I pulled on that man’s wrists, nearly bringing him down as well. […]

The post Down Under appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Secondary Waits

ArSo works at a small company. It's the kind of place that has one software developer, and ArSo isn't it. But ArSo is curious about programming, and has enough of a technical background that small tasks should be achievable. After some conversations with management, an arrangement was made: Kurt, their developer, would identify a few tasks that were suitable for a beginner, and would then take some time to mentor ArSo through completing them.

It sounded great, especially because Kurt was going to provide sample code which would give ArSo a head start on getting things done. What better way to learn than by watching a professional at work?

DateTime datTmp;

File.Copy(strFileOld, strFileNew);
// 2 seconds delay
datTmp = DateTime.Now;
while (datTmp.Second == DateTime.Now.Second);
datTmp = DateTime.Now;
while (datTmp.Second == DateTime.Now.Second);
File.Delete(strFileOld);

This code copies a file from an old path to a new path, and then deletes the old path after a two second delay. Why is there a delay? I don't know. Why is the delay written like this? I can't possibly explain that.

Check the time at the start of the loop. When the second part of that time stops matching the second part of the current time, we assume one second has passed. This is, of course, inaccurate- if I check the time at 0:00:00.9999 a lot less than a second will pass. This delay is at most one second.
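
If a two-second pause is genuinely required, the idiomatic form is a one-liner. A sketch keeping the original variable names:

using System.IO;
using System.Threading;

File.Copy(strFileOld, strFileNew);
Thread.Sleep(2000); // actually waits two seconds, without spinning a core
File.Delete(strFileOld);

And if the delay is pure cargo cult, the copy-then-delete dance collapses to File.Move(strFileOld, strFileNew).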

In any case, ArSo has some serious questions about Kurt's mentorship, and writes:

Now I don't know if I should ask for more coding tasks.

Honestly, I think you should ask for more. Like, I think you should just take Kurt's job. You may be a beginner, but honestly, you're likely going to do a better job than this.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

,

Cryptogram New iOS Security Feature Makes It Harder for Police to Unlock Seized Phones

Everybody is reporting about a new security iPhone security feature with iOS 18: if the phone hasn’t been used for a few days, it automatically goes into its “Before First Unlock” state and has to be rebooted.

This is a really good security feature. But various police departments don’t like it, because it makes it harder for them to unlock suspects’ phones.

Planet DebianRussell Coker: Modern Sleep

Julius wrote an insightful blog post about the “modern sleep” issue with Windows [1]. Basically Microsoft decided that the right way to run laptops is to never entirely sleep, which uses more battery but gives better options for waking up and doing things. I agree with Microsoft in concept and this is something that is a problem that can be solved. A phone can run for 24+ hours without ever fully sleeping, a laptop has a more power hungry CPU and peripherals but also has a much larger battery so it should be able to do the same. Some of the reviews for Snapdragon Windows laptops claim up to 22 hours of actual work without charging! So having suspend not really stop the system should be fine.

The ability of a phone to never fully sleep is a change in quality of the usage experience, it means that you can access it and immediately have it respond and it means that all manner of services can be checked for new updates which may require a notification to the user. The XMPP protocol (AKA Jabber) was invented in 1999 which was before laptops were common and Instant Message systems were common long before then. But using Jabber or another IM system on a desktop was a very different experience to using it on a laptop and using it on a phone is different again. The “modern sleep” allows laptops to act like phones in regard to such messaging services. Currently I have Matrix IM clients running on my Android phone and Linux laptop, if I get a notification that takes much typing for a response then I get out my laptop to respond. If I had an ARM based laptop that never fully shut down I would have much less need for Matrix on a phone.

Making “modern sleep” popular will lead to more development of OS software to work with it. For Linux this will hopefully mean that regular Linux distributions (as opposed to Android which while running a Linux kernel is very different to Debian etc) get better support for such things and therefore become more usable on phones. Debian on a Librem 5 or PinePhonePro isn’t very usable due to battery life issues.

A laptop with an LTE card can be used for full mobile phone functionality. With “modern sleep” this is a viable option. I am tempted to make a laptop with LTE card and bluetooth headset a replacement for my phone. Some people will say “what if someone tries to call you when it’s not convenient to have your laptop with you”, my response is “what if people learn to not expect me to answer the phone at any time as they managed that in the 90s”. Seriously SMS or Matrix me if you want an instant response and if you want a long chat schedule it via SMS or Matrix.

Dell has some useful advice about how to use their laptops (and probably most laptops from recent times) in this regard [2]. You can’t close the lid before unplugging the power cable: you have to unplug first and then close. You shouldn’t put a laptop in a sealed bag for travel either. This is a terrible situation: you can put a tablet in a bag without taking any special precautions when unplugging it, and laptops should work the same way. The end result of what Microsoft, Dell, Intel, and others are doing will be good, but they are making some silly design choices along the way! I blame Intel mostly, for selling laptop CPUs with TDPs >40W!

For an amusing take on this, Linus Tech Tips has a video about being forced to use MacBooks by Microsoft’s implementation of Modern Sleep [3].

I’ll try out some ARM laptops in the near future and blog about how well they work on Debian.

365 TomorrowsThe Trail

Author: Mark Renney The changeover hasn’t ever been subtle, but long ago, centuries ago, it wasn’t so difficult, so intense and all consuming. I think it’s fair to say that, back then, I rode roughshod, moving quickly from host to host. I would like to say I selected indiscriminately, but it wouldn’t be true. I […]

The post The Trail appeared first on 365tomorrows.

Worse Than FailureCodeSOD: The First 10,000

Alicia recently inherited a whole suite of home-grown enterprise applications. Like a lot of these kinds of systems, it needs to do batch processing. She went tracking down a mysterious IllegalStateException only to find this query causing the problem:

select * from data_import where id > 10000

The query itself is fine, but the code calling it checks to see if this query returned any rows- if it did, the code throws the IllegalStateException.

First, of course, this should be a COUNT(*) query- no need to actually return rows here. But also… what? Why do we fail if there are any transactions with an ID greater than 10000? Why on Earth would we care?

Well, the next query it runs is this:

update data_import set id=id+10000

Oh. Oh no. Oh nooooo. Are they… are they using the ID to also represent some state information about the status of the record? It sure seems like it!

The program then starts INSERTing data, using a counter which starts at 1. Once all the new data is added, the program then does:

delete from data_import where id > 10000

All this is done within a single method, with no transactions and no error handling. And yes, this is by design. You see, if anything goes wrong during the inserts, then the old records don't get deleted, so we can see that processing failed and correct it. And since the IDs are sequential and always start at 1, we can easily find which row caused the problem. Who needs logging or any sort of exception handling- just check your IDs.

The underlying reason why this started failing was that the inbound data started trying to add more than 10,000 rows, which meant the INSERTs started failing (the new IDs climbed past 10,000 and collided with the shifted IDs of the old rows). Alicia wanted to fix this and clean up the process, but too many things depended on it working in this broken fashion. Instead, her boss implemented a quick and easy fix: they changed "10000" to "100000".
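
For contrast, here is a minimal sketch of how the same batch swap could be done atomically, assuming the database supports transactions and using a hypothetical status column instead of overloaded IDs (the payload column and its values are placeholders, not from Alicia's actual schema):

-- Sketch only: a 'status' column replaces the id+10000 trick entirely.
BEGIN;
INSERT INTO data_import (payload, status) VALUES ('row 1', 'pending');
-- ...one INSERT per incoming row...
DELETE FROM data_import WHERE status = 'current';
UPDATE data_import SET status = 'current' WHERE status = 'pending';
COMMIT;
-- On any error before COMMIT, issue ROLLBACK and the old batch survives intact.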



Planet DebianLouis-Philippe Véronneau: Montreal's Debian & Stuff - November 2024

Our Debian User Group met on November 2nd after a somewhat longer summer hiatus than normal. It was lovely to see a bunch of people again and to be able to dedicate a whole day to hacking :)

Here is what we did:

lavamind:

  • reproduced puppetdb FTBFS #1084038 and reported the issue upstream
  • uploaded a new upstream version for pgpainless (1.6.8-1)
  • uploaded a new revision for ruby-moneta (1.6.0-3)
  • sent an inquiry to the backports team about #1081696

pollo:

  • reviewed & merged many lintian merge requests, clearing out most of the queue
  • uploaded a new lintian release (1.120.0)
  • worked on unblocking the revival of lintian.debian.org (many thanks to anarcat and pkern)
  • apparently (kindly) told people to rtfm at least 4 times :)

anarcat:

LeLutin:

  • opened an RFS on the ruby team mailing list for the new upstream version of ruby-necromancer
  • worked on packaging the new upstream version of ruby-pathspec

tvaz:

  • did AM (Application Manager) work

tassia:

  • explored the Debian Jr. project (website, wiki, mailing list, salsa repositories)
  • played a few games for Nico's entertainment :-)
  • built and tested a Debian Jr. live image

Pictures

This time around, we went back to Foulab. Thanks for hosting us!

As always, the hacklab was full of interesting stuff and I took a few (bad) pictures for this blog post:

  • Two old video cameras and a 'My First Sony' tape recorder
  • An ALP HT-286 machine with a very large 'turbo' button
  • A New Hampshire 'IPROUTE' vanity license plate

Krebs on SecurityMicrosoft Patch Tuesday, November 2024 Edition

Microsoft today released updates to plug at least 89 security holes in its Windows operating systems and other software. November’s patch batch includes fixes for two zero-day vulnerabilities that are already being exploited by attackers, as well as two other flaws that were publicly disclosed prior to today.

The zero-day flaw tracked as CVE-2024-49039 is a bug in the Windows Task Scheduler that allows an attacker to increase their privileges on a Windows machine. Microsoft credits Google’s Threat Analysis Group with reporting the flaw.

The second bug fixed this month that is already seeing in-the-wild exploitation is CVE-2024-43451, a spoofing flaw that could reveal Net-NTLMv2 hashes, which are used for authentication in Windows environments.

Satnam Narang, senior staff research engineer at Tenable, says the danger with stolen NTLM hashes is that they enable so-called “pass-the-hash” attacks, which let an attacker masquerade as a legitimate user without ever having to log in or know the user’s password. Narang notes that CVE-2024-43451 is the third NTLM zero-day so far this year.

“Attackers continue to be adamant about discovering and exploiting zero-day vulnerabilities that can disclose NTLMv2 hashes, as they can be used to authenticate to systems and potentially move laterally within a network to access other systems,” Narang said.

The two other publicly disclosed weaknesses Microsoft patched this month are CVE-2024-49019, an elevation of privilege flaw in Active Directory Certificate Services (AD CS); and CVE-2024-49040, a spoofing vulnerability in Microsoft Exchange Server.

Ben McCarthy, lead cybersecurity engineer at Immersive Labs, called special attention to CVE-2024-43639, a remote code execution vulnerability in Windows Kerberos, the authentication protocol that is heavily used in Windows domain networks.

“This is one of the most threatening CVEs from this patch release,” McCarthy said. “Windows domains are used in the majority of enterprise networks, and by taking advantage of a cryptographic protocol vulnerability, an attacker can perform privileged acts on a remote machine within the network, potentially giving them eventual access to the domain controller, which is the goal for many attackers when attacking a domain.”

McCarthy also pointed to CVE-2024-43498, a remote code execution flaw in .NET and Visual Studio that could be used to install malware. This bug has earned a CVSS severity rating of 9.8 (10 is the worst).

Finally, at least 29 of the updates released today tackle memory-related security issues involving SQL server, each of which earned a threat score of 8.8. Any one of these bugs could be used to install malware if an authenticated user connects to a malicious or hacked SQL database server.

For a more detailed breakdown of today’s patches from Microsoft, check out the SANS Internet Storm Center’s list. For administrators in charge of managing larger Windows environments, it pays to keep an eye on Askwoody.com, which frequently points out when specific Microsoft updates are creating problems for a number of users.

As always, if you experience any problems applying any of these updates, consider dropping a note about it in the comments; chances are excellent that someone else reading here has experienced the same issue, and maybe even has found a solution.

Planet DebianPaul Tagliamonte: Complex for Whom?

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures need to be redone because they’re too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose either to solve it yourself or to leave it for those downstream of you to solve on their own.

That being said, while I believe there is a lower bound of complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of the solutions possible. It is always possible, and in fact very likely, that teams create problems for themselves while trying to solve a problem. The rest of this post speaks to that lower bound. While getting feedback on an early draft of this blog post, I was informed that Fred Brooks coined a term for what I call “lower bound complexity” – “essential complexity”, in the paper “No Silver Bullet—Essence and Accident in Software Engineering” – which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get sign-off to solve some problem (be it that someone needs a credible promotion packet, a new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, devolve into a discussion of how this is going to introduce a bunch of complexity, is going to be hard to maintain, and why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity it is that’s being discussed, and who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and maintenance, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only things that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond engineering. Decisions regarding “what tools are we able to use” – be they existing contracts with cloud providers, CIO-mandated SaaS products, a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest-price-technically-acceptable artisanal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t, and you’ll make it everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. With a large enough organization (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal, since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency – as they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower bound, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else – maybe outside your organization, or in a non-engineering function – it must grow back. Sometimes the opposite is the case, such as when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking: what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of I don’t understand the problems being solved, or does it mean something along the lines of this problem should be solved elsewhere? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we should accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision made elsewhere? Who’s taking this complexity on? More to the point, is failing to address the complexity required by the problem simply leaving it to others? Does it impact others? How specifically? What are you not seeing?

What can change?

What should change?

Cryptogram Mapping License Plate Scanners in the US

DeFlock is a crowd-sourced project to map license plate scanners.

It only records the fixed scanners, of course. The mobile scanners on cars are not mapped.

Planet DebianSven Hoexter: fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root from which they add more flux kustomizations to their kubernetes cluster. Here all of that lives in a monorepo, and since we're all human, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.

Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you'll have to put additional work into the validation and pipe things through flux envsubst.

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we've a cluster folder with subfolders per cluster, we just run a for loop over all of them:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    popd
done

Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml

Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition which lacked some other referenced objects, but was still happily included in the root kustomization by kustomize create and flux, which of course did not work.

Thus we started to catch that as well in our growing for loop:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f|wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done

    popd
done

Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub action workflows. Hope that's enough to convey the idea of what to check for.

Planet DebianJames Bromberger: My own little server

In 2004, I was living in London, and decided it was time I had my own little virtual private server somewhere online. As a Debian developer since the start of 2000, it had to be Debian, and it still is… This was before “cloud” as we know it today. Virtual Private Servers (VPS) was a … Continue reading "My own little server"

Worse Than FailureRepresentative Line: How is an Array like a Banana?

Some time ago, poor Keith found himself working on an antique Classic ASP codebase. Classic ASP uses VBScript, which is like VisualBasic 6.0, but worse in most ways. That's not to say that VBScript code is automatically bad, but the language certainly doesn't help you write clean code.

In any case, the previous developer needed to make an 8 element array to store some data. Traditionally, in VBScript, you might declare it like so:

Dim params(7)

That's the easy, obvious way a normal developer might do it. (VBScript array declarations take an upper bound rather than a length, so params(7) yields indices 0 through 7: eight elements.)

Keith's co-worker did this instead:

Dim params : params = Split(",,,,,,,", ",")

Yes, this creates an array using the Split function on a string of only commas. 7, to be exact. Which, when split, creates 8 empty substrings.

We make fun of stringly typed data a lot here, but this is an entirely new level of stringly typed initialization.

We can only hope that this code has finally been retired, but given that it was still in use well past the end-of-life for Classic ASP, it may continue to lurk out there, waiting for another hapless developer to stumble into its grasp.


365 TomorrowsCrossover

Author: Majoki Most folks can pretty easily picture an amount doubling, and even envisioning something ten or a hundred times its current size or intensity. But our imaginations often fail miserably when faced with exponential growth. Unfortunately, this inability (or unwillingness) to comprehend (or confront) rapid proportional change threatens our long-term viability as a species. […]

The post Crossover appeared first on 365tomorrows.

Cryptogram Criminals Exploiting FBI Emergency Data Requests

I’ve been writing about the problem with lawful-access backdoors in encryption for decades now: that as soon as you create a mechanism for law enforcement to bypass encryption, the bad guys will use it too.

Turns out the same thing is true for non-technical backdoors:

The advisory said that the cybercriminals were successful in masquerading as law enforcement by using compromised police accounts to send emails to companies requesting user data. In some cases, the requests cited false threats, like claims of human trafficking and, in one case, that an individual would “suffer greatly or die” unless the company in question returns the requested information.

The FBI said the compromised access to law enforcement accounts allowed the hackers to generate legitimate-looking subpoenas that resulted in companies turning over usernames, emails, phone numbers, and other private information about their users.

LongNowSara Imari Walker

Sara Imari Walker leads one of the largest international theory groups in origins of life and astrobiology. Walker and her team's key areas of research are in developing new approaches to the problem of understanding universal features of life: those that might allow a general theory for solving the matter-to-life transition, detecting alien life and designing synthetic life. Applying assembly theory, a physics framework based on molecular complexity that Walker and her team have expanded, opens a new path to identifying the threshold at which life arises from non-life, and to detecting and understanding the evolution of life on our planet and in the universe.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, October 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In October, 20 contributors have been paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 6.0h (out of 7.0h assigned and 7.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 15.0h (out of 87.0h assigned and 13.0h from previous period), thus carrying over 85.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 10.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 4.0h (out of 0.0h assigned and 4.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 29.0h (out of 26.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 60.0h (out of 23.5h assigned and 36.5h from previous period).
  • Guilhem Moulin did 7.5h (out of 19.75h assigned and 0.25h from previous period), thus carrying over 12.5h to the next month.
  • Lee Garrett did 15.25h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 44.75h to the next month.
  • Lucas Kanashiro did 10.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 9.75h (out of 24.0h assigned), thus carrying over 14.25h to the next month.
  • Santiago Ruano Rincón did 23.5h (out of 25.0h assigned), thus carrying over 1.5h to the next month.
  • Sean Whitton did 6.25h (out of 1.0h assigned and 5.25h from previous period).
  • Stefano Rivera did 1.0h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.0h to the next month.
  • Sylvain Beucler did 9.5h (out of 16.0h assigned and 44.0h from previous period), thus carrying over 50.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 10.5h (out of 12.0h assigned), thus carrying over 1.5h to the next month.

Evolution of the situation

In October, we have released 35 DLAs.

Some notable updates prepared in October include denial of service vulnerability fixes in nss, regression fixes in apache2, multiple fixes in php7.4, and new upstream releases of firefox-esr, openjdk-17, and openjdk-11.

Additional contributions were made for the stable Debian 12 bookworm release by several LTS contributors. Arturo Borrero Gonzalez prepared a parallel update of nss, Bastien Roucariès prepared a parallel update of apache2, and Santiago Ruano Rincón prepared updates of activemq for both LTS and Debian stable.

LTS contributor Bastien Roucariès undertook a code audit of the cacti package and in the process discovered three new issues in node-dompurify, which were reported upstream and resulted in the assignment of three new CVEs.

As always, the LTS team continues to work towards improving the overall sustainability of the free software base upon which Debian LTS is built. We thank our many committed sponsors for their ongoing support.

Thanks to our sponsors

Sponsors that joined recently are in bold.


Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows use of std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double-entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)

  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log-level documentation entry was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianGunnar Wolf: Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews for Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand that, with varying intensities – sometimes they demand research to be published in an OA venue, sometimes a mandate will only “prefer” it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate when various universities adopt such mandates.

Now, what happens after a university or funder body adopts such a mandate? Many individual academics cheer, as it is the “right thing to do.” However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or “feet dragging” of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren’t more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores an hypothesis based on Karl Marx’s productive worker theory and Pierre Bourdieu’s ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given that it’s not only the academics’ sharing ethos, but private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, in a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

Planet DebianVincent Bernat: Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';

  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code,3
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}
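
With the overlay wired in like this, building is then the standard flake invocation (assuming flakes are enabled), and the resulting ./result/bin/caddy includes the powerdns plugin:

nix build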

Update (2024-11)

This flake won’t work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.


  1. This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules. ↩︎

  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different and I have proposed it in another pull request. ↩︎

  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail. ↩︎

Worse Than FailureCodeSOD: Pay for this Later

Ross needed to write software to integrate with a credit card payment gateway. The one his company chose was relatively small, and only served a handful of countries- but it covered the markets they cared about and the transaction fees were cheap. They used XML for data interchange, and while they had no published schema document, they did have some handy-dandy sample code which let you parse their XML messages.

$response = curl_exec($ch);
$authecode = fetch_data($response, '<authCode>', '</authCode>');
$responsecode = fetch_data($response, '<responsecode>', '</responsecode>');
$retrunamount = fetch_data($response, '<returnamount>', '</returnamount>');
$trxnnumber = fetch_data($response, '<trxnnumber>', '</trxnnumber>');
$trxnstatus = fetch_data($response, '<trxnstatus>', '</trxnstatus>');
$trxnresponsemessage = fetch_data($response, '<trxnresponsemessage>', '</trxnresponsemessage>');

Well, this looks… worrying. At first glance, I wonder if we're going to have to kneel before Z̸̭͖͔͂̀ā̸̡͖͕͊l̴̜͕͋͌̕g̸͉̳͂͊ȯ̷͙͂̐. What exactly does fetch_data actually do?

function fetch_data($string, $start_tag, $end_tag)
{

  $position = stripos($string, $start_tag);
  $str = substr($string, $position);
  $str_second = substr($str, strlen($start_tag));
  $second_positon = stripos($str_second, $end_tag);
  $str_third = substr($str_second, 0, $second_positon);
  $fetch_data = trim($str_third);
  return $fetch_data;
}

Phew, no regular expressions, just… lots of substrings. This parses the XML document with no sense of the document's structure- it literally just searches for specific tags, grabs whatever is between them, and calls it done. Nested tags? Attributes? Self-closing tags? Forget about it. Since it doesn't enforce that your opening and closing tags match, it also lets you grab arbitrary (and invalid) document fragments- fetch_data($response, "<fooTag>", "<barTag>"), for example.

And it's not like this needs to be implemented from scratch- PHP has built-in XML parsing classes. We could argue that by limiting ourselves to a subset of XML (which I can only hope this document does) and doing basic string parsing, we've built a much simpler approach, but I suspect that after doing a big pile of linear searches through the document, we're not really going to see any performance benefits from this version- and maintenance is going to be a nightmare, as it's so fragile and won't work for many perfectly valid XML documents.
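
For the record, a sketch of the same extraction using PHP's built-in SimpleXML might look like the following. It assumes the fields are direct children of the root element, which the gateway's undocumented format may or may not match:

$xml = simplexml_load_string($response);
if ($xml === false) {
    // Unlike fetch_data, a malformed response fails loudly here.
    throw new RuntimeException('Invalid XML response from gateway');
}
// Element access is by name, so mismatched or nested tags can't be confused.
$authcode     = trim((string) $xml->authCode);
$responsecode = trim((string) $xml->responsecode);
$trxnnumber   = trim((string) $xml->trxnnumber);
$trxnstatus   = trim((string) $xml->trxnstatus);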

It's always amazing when TRWTF is neither PHP nor XML but… whatever this is.


David BrinJoe & Mark do these now! My own post-mortem can wait.

Here I offer two time-critical suggestions, below.

So skip past my blowhard prelude!


Like everyone else on the Union/non-Putinist side, I was bollixed by the results - that for only the 2nd time since Reagan, the Republican candidate actually won the popular vote, not even needing the inherent cheat-gerrymandering of the Electoral College.

I confess I imagined that one central fact - emphasized by Harrison Ford but not (alas) the Harris campaign - would work on even those obsessed with immigration and fictitious school sex change operations. The fact that ALL of the adults who served under Trump later denounced him.*

Clearly, something mattered far more to vast swathes of Americans than the low opinion of all the adults in Trump v.1.0 toward their jibbering boss. And no, it WAS NOT racism/misogyny. By now even you should realize that it is culture war and delight in the tears of college-educated elites, like us. Like those 250+ adults from Trump v1.0.

Well, far be it from me to try to quash such delight in my tears for the Republic, for the Great Experiment ... and for Ukraine. Here they are guys. Drink up. But save some of the tears to bottle and send to Vlad.


                     * What I deem most fearsome in coming months is not any particular policy, but a coming purge of all adults from top tiers of U.S. government.


Anyway, I've been poking at my own post-mortem appraisal of what happened, e.g. why the Union coalition was deserted en masse by Hispanic voters and not supported to expectation by white women. I'll soon get to that posting, or several. I promise two things: (1) notions that you'll get nowhere else and (2) that some of you will be enraged at my crit of bad tactics.

But that can wait. Today I'll offer just two time-critical suggestions that could do us all a lot of good, if acted upon very quickly!  

They won't be, of course. Still, maybe some AIs somewhere/sometime will note that I offered these. And maybe they will model "that coulda worked."

It's likely the best I can hope for. And yet... here goes...


== Joe, at long last and right now, offer the clemency for truth deal! ==


Item #1: I've long asked for it. But now would be perfect. 

Joe Biden could offer amnesty/clemency and even pardons, in exchange for revelations vital to the Republic.  


"If you are a person of influence in the USA, and you've been under the thumb of foreign or domestic blackmailers, this is your one chance. **

"Step up and tell all! I promise I'll do everything in my power to lessen your legal penalties, in exchange for truths that could greatly benefit your country. Perhaps even shattering a cabal whose tentacles - some claim - have pervaded this town and the nation.

"I can't prevent pain and public disdain over whatever originally got you into your mess, or things done to please your blackmailers. But I can promise three things: some legal safety, plus privately-funded and bonded security, if requested...

"...plus also public praise for being among the first to step up and show some guts! For the sake of the nation... and your own redemption."


Sure, this would raise howls! Even though there's precedent in Nelson Mandela's Truth & Reconciliation process and similar programs in Argentina and Chile.

 Moreover, several Congress members have already attested publicly that such blackmail rings exist, pervading their own party!


"Why haven't I done this sooner? Because inevitably, it'd be seen as a political stunt. In our tense era, I'd be accused of election meddling.  Only now, admitting that the nation has decisively chosen Donald Trump and his party to govern, I can do this outside of politics, in order to give him a truly clean slate! 

"Let him - let us all - start fresh in January, knowing that the nation had this one chance to flush out the worst illness... aided by those who are eager to breathe free of threats and guilt, at long last....

"... remembering that all 'Heaven rejoices when a sinner repents and comes to the light.'"


Whatever your beliefs, I defy you to name any drawbacks. And let's be clear. Joe could do this. He could do it tomorrow. And the worst thing that he risks would be that nothing happens.

Even in that case, amid some mockery, he would still have raised a vitally needed topic. And at-best?

At best, there could be a whole lot of disinfection. At a time when it is exactly what's badly needed.


== What some billionaire could do ==

Another proposal I have made before, in Polemical Judo. This one seems worth doing, even in the present case, when Donald Trump has 26 more electoral votes than he needs - and hence has nothing to fear from defections, before the "Electoral College" votes, next month.

Why did I say "Electoral College" in quotes? Because in fact it has never, ever been a 'college' of elected delegates, originally meant to deliberate solemnly and choose the president of the United States after thorough discussion.  But that might change!

As I've said before - one zillionaire could change this.  Rent a mountaintop luxury hotel. Hire retired Secret Service agents for security and some highly-vetted chefs-de-cuisine. Maybe a string quartet of non-English speakers. For two weeks, the only others who may walk through the doors and stroll the grounds are registered electors.

They can come - or not - if they want. Dine and stroll and no one has any obligation to speak or listen. Or else - completely up to them - they might decide to convene the first actual Electoral College in the history of the Republic. Is there any -- and I mean any -- reason why this would not be legally and morally completely kosher?

Yes, I know. It will guarantee that the following election will see the parties vet their elector slates even more carefully for utter-utter loyalty. As if that isn't already true. 

So? In any case, the cost would be chickenfeed and the sheer coolness factor could be... well... diverting from our troubles.


== Other suggestions? ==

You know I got a million of em. And (alas!) so many were already in Polemical Judo

And already ignored. Because  the ideas are unconventional and cross many party clichés. Whatever. Poor Brin.

But these are two that will either be acted upon NEXT WEEK or else (99.999% odds) not at all.

So, next posting I'll dive into that post-mortem of the election. And yes, there will be perspectives you never heard or saw anywhere else. (Care to bet on that?) And some may make you go huh. And some may make you angry.

Good. Like Captain Kirk... you need your pain.


=====

=====


** I made my case about blackmail years ago here: Political Blackmail: The Hidden Danger to Public Servants.  And despite Madison Cawthorn and several other high Republicans testifying openly that it is utterly true - honey pots and 'orgies' and sophisticated KGB kompromat - apparently nothing has been done. Nor - apparently - will it be.

Still, there is a third thing I was gonna recommend here...

...that Biden promise sanctuary and a big financial prize for any KGB officer who defects, bringing over the blackmail files! Just making the offer, publicly, might make many people on this planet very, very nervous... and likely result in some orchestrated performances of group window diving in Moscow.

Well-well. One can fantasize it as a coming episode of the Mission Impossible series, at least. Call my agent


*** Several of you spoke of the threat to personal physical safety for the first few to step forward... until the wave of revelation turns the tables and sends blackmailers fleeing for their lives. While it's true that Joe B will no longer be in a position to offer US Government guarantees, allied governments can! Plus new identities etc. Anyway, isn't this fundamentally about heroism? Asking it - in exchange for redemption - from those who might leap at a chance for a path out of treason-hell?





365 TomorrowsThe End

Author: Julian Miles, Staff Writer Sources always emphasised the utility of wind-up devices after any sort of catastrophe. I used to be sceptical, but having now spent a couple of years surviving in the ruined urban wonderlands of southern England, I admit I was mostly wrong. When I hooked up with this group last year, […]

The post The End appeared first on 365tomorrows.


Planet DebianDirk Eddelbuettel: inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.
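
As a minimal illustration of what that change means at the C level (example code, not taken from the package): where the R headers used to remap the short names onto the prefixed API, only the prefixed form now compiles by default.

#include <R.h>
#include <Rinternals.h>

void check_positive(double x)
{
    /* Previously the headers remapped error(...) to Rf_error(...);
       with R_NO_REMAP as the default, only the prefixed call is declared. */
    if (x <= 0)
        Rf_error("x must be positive");
}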

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)

  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianReproducible Builds: Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds?

Jens Dietrich and Tim White, of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report, which she reported to the Android issue tracker, has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has passed a significant milestone: over 25% of the ~1,200 Android apps provided by their repository (official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results. []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to fix the name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

365 TomorrowsSynthetic Predicament

Author: M D Smith IV The synthetics of New World Robotics had reached a level of perfection so far past the clunky years that they cost the average middle-class family the equivalent of six years’ salary. Those who could afford one bought them on time, like a house after a down payment, and touted them […]

The post Synthetic Predicament appeared first on 365tomorrows.

Planet DebianThorsten Alteholz: My Debian Activities in October 2024

FTP master

This month I accepted 398 and rejected 22 packages. The overall number of packages that got accepted was 441.

In case your RM bug is not closed within a month, you can assume that either the conversion of the subject of the bug email to the corresponding dak command did not work or you still need to take care of reverse dependencies. The dak command related to your removal bug can be found here.

Unfortunately the behavior of some project members caused a decline in the motivation of team members to work on these bugs. When I look at these bugs, I just copy and paste the above-mentioned dak commands. If they don’t work, I don’t have the time to debug what is going wrong. So please read the docs and take care of it yourself. Please also keep in mind that you need to close the bug or set a moreinfo tag if you don’t want anybody to act on your removal bug.

Debian LTS

This was my one-hundred-twenty-fourth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3925-1] asterisk security update to fix two CVEs related to privilege escalation and DoS
  • [DLA 3940-1] xorg-server update to fix one CVE related to privilege escalation

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1198-1] cups security update for one CVE in Buster to fix the IPP attribute related CVEs.
  • [ELA-1199-1] cups security update for two CVEs in Stretch to fix the IPP attribute related CVEs.
  • [ELA-1216-1] graphicsmagick security update for one CVE in Jessie.
  • [ELA-1217-1] asterisk security update for two CVEs in Buster related to privilege escalation.
  • [ELA-1218-1] asterisk security update for two CVEs in Stretch related to privilege escalation and DoS.
  • [ELA-1223-1] xorg-server security update for one CVE in Jessie, Stretch and Buster related to privilege escalation.

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Unfortunately I didn’t find any time to work on this topic.

Debian Astro

Unfortunately I didn’t find any time to work on this topic.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

  • pywws (yes, again this month)

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:


Planet DebianJonathan Dowland: Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog.

It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, an htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page.

I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:

  1. You load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.

  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains an HTML form for you to write your comment.

  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.

  4. The server's response to the request in (3) is an HTTP 302 redirect, instructing the browser to go back to the page in Step 1.

First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:

hx-get="<CGI ENDPOINT GOES HERE>"
suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page.

issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form

hx-select=".editcomment form"
extract the edit-comment form from within that document
hx-swap=beforeend and hx-target=".addcomment"
append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment)

Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!

Second step: handling previews

The old Preview Comment page

In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later.

Rather than load new pages, I wanted "Preview" to splice a rendering of the comment-in-progress into the current page's list of comments, marked up to indicate that it's a preview.

IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment.

The rest of this was achieved with htmx attributes on the "Preview" button, similar to in the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it.

Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment.

On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones, a status message "Your comment will be posted after moderator review", without the original page content or comments.

I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

handling moderated comments

Moderation message upon submitting a comment

One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead.

I now had to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews.) You can read the full function here: jon.js.
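The shape of that function is roughly the following sketch; the selector is invented for illustration, and jon.js (linked above) is the real thing:

document.body.addEventListener('htmx:beforeSwap', function (evt) {
    // only intercept the "held for moderation" responses (HTTP 202)
    if (evt.detail.xhr.status !== 202) {
        return;
    }
    // don't splice the bare status page into the document
    evt.detail.shouldSwap = false;
    // re-use the Preview machinery to render the comment in place...
    document.querySelector('input[name=preview]').click();
    // ...then mark the resulting preview up as awaiting moderation
});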

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience.

You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is.

Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

Krebs on SecurityFBI: Spike in Hacked Police Emails, Fake Subpoenas

The Federal Bureau of Investigation (FBI) is urging police departments and governments worldwide to beef up security around their email systems, citing a recent increase in cybercriminal services that use hacked police email accounts to send unauthorized subpoenas and customer data requests to U.S.-based technology companies.

In an alert (PDF) published this week, the FBI said it has seen an uptick in postings on criminal forums regarding the process of emergency data requests (EDRs) and the sale of email credentials stolen from police departments and government agencies.

“Cybercriminals are likely gaining access to compromised US and foreign government email addresses and using them to conduct fraudulent emergency data requests to US based companies, exposing the personal information of customers to further use for criminal purposes,” the FBI warned.

In the United States, when federal, state or local law enforcement agencies wish to obtain information about an account at a technology provider — such as the account’s email address, or what Internet addresses a specific cell phone account has used in the past — they must submit an official court-ordered warrant or subpoena.

Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted (eventually, and at least in part) as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name.

In some cases, a cybercriminal will offer to forge a court-approved subpoena and send that through a hacked police or government email account. But increasingly, thieves are relying on fake EDRs, which allow investigators to attest that people will be bodily harmed or killed unless a request for account data is granted expeditiously.

The trouble is, these EDRs largely bypass any official review and do not require the requester to supply any court-approved documents. Also, it is difficult for a company that receives one of these EDRs to immediately determine whether it is legitimate.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR — and potentially having someone’s blood on their hands — or possibly leaking a customer record to the wrong person.

Perhaps unsurprisingly, compliance with such requests tends to be extremely high. For example, in its most recent transparency report (PDF) Verizon said it received more than 127,000 law enforcement demands for customer data in the second half of 2023 — including more than 36,000 EDRs — and that the company provided records in response to approximately 90 percent of requests.

One English-speaking cybercriminal who goes by the nicknames “Pwnstar” and “Pwnipotent” has been selling fake EDR services on both Russian-language and English cybercrime forums. Their prices range from $1,000 to $3,000 per successful request, and they claim to control “gov emails from over 25 countries,” including Argentina, Bangladesh, Brazil, Bolivia, Dominican Republic, Hungary, India, Kenya, Jordan, Lebanon, Laos, Malaysia, Mexico, Morocco, Nigeria, Oman, Pakistan, Panama, Paraguay, Peru, Philippines, Tunisia, Turkey, United Arab Emirates (UAE), and Vietnam.

“I cannot 100% guarantee every order will go through,” Pwnstar explained. “This is social engineering at the highest level and there will be failed attempts at times. Don’t be discouraged. You can use escrow and I give full refund back if EDR doesn’t go through and you don’t receive your information.”

An ad from Pwnstar for fake EDR services.

A review of EDR vendors across many cybercrime forums shows that some fake EDR vendors sell the ability to send phony police requests to specific social media platforms, including forged court-approved documents. Others simply sell access to hacked government or police email accounts, and leave it up to the buyer to forge any needed documents.

“When you get account, it’s yours, your account, your liability,” reads an ad in October on BreachForums. “Unlimited Emergency Data Requests. Once Paid, the Logins are completely Yours. Reset as you please. You would need to Forge Documents to Successfully Emergency Data Request.”

Still other fake EDR service vendors claim to sell hacked or fraudulently created accounts on Kodex, a startup that aims to help tech companies do a better job screening out phony law enforcement data requests. Kodex is trying to tackle the problem of fake EDRs by working directly with the data providers to pool information about police or government officials submitting these requests, with an eye toward making it easier for everyone to spot an unauthorized EDR.

If police or government officials wish to request records regarding Coinbase customers, for example, they must first register an account on Kodexglobal.com. Kodex’s systems then assign that requestor a score or credit rating, wherein officials who have a long history of sending valid legal requests will have a higher rating than someone sending an EDR for the first time.

It is not uncommon to see fake EDR vendors claim the ability to send data requests through Kodex, with some even sharing redacted screenshots of police accounts at Kodex.

Matt Donahue is the former FBI agent who founded Kodex in 2021. Donahue said just because someone can use a legitimate police department or government email to create a Kodex account doesn’t mean that user will be able to send anything. Donahue said even if one customer gets a fake request, Kodex is able to prevent the same thing from happening to another.

Kodex told KrebsOnSecurity that over the past 12 months it has processed a total of 1,597 EDRs, and that 485 of those requests (~30 percent) failed a second-level verification. Kodex reports it has suspended nearly 4,000 law enforcement users in the past year, including:

-1,521 from the Asia-Pacific region;
-1,290 requests from Europe, the Middle East and Asia;
-460 from police departments and agencies in the United States;
-385 from entities in Latin America, and;
-285 from Brazil.

Donahue said 60 technology companies are now routing all law enforcement data requests through Kodex, including an increasing number of financial institutions and cryptocurrency platforms. He said one concern shared by recent prospective customers is that crooks are seeking to use phony law enforcement requests to freeze and in some cases seize funds in specific accounts.

“What’s being conflated [with EDRs] is anything that doesn’t involve a formal judge’s signature or legal process,” Donahue said. “That can include control over data, like an account freeze or preservation request.”

In a hypothetical example, a scammer uses a hacked government email account to request that a service provider place a hold on a specific bank or crypto account that is allegedly subject to a garnishment order, or party to crime that is globally sanctioned, such as terrorist financing or child exploitation.

A few days or weeks later, the same impersonator returns with a request to seize funds in the account, or to divert the funds to a custodial wallet supposedly controlled by government investigators.

“In terms of overall social engineering attacks, the more you have a relationship with someone the more they’re going to trust you,” Donahue said. “If you send them a freeze order, that’s a way to establish trust, because [the first time] they’re not asking for information. They’re just saying, ‘Hey can you do me a favor?’ And that makes the [recipient] feel valued.”

Echoing the FBI’s warning, Donahue said far too many police departments in the United States and other countries have poor account security hygiene, and often do not enforce basic account security precautions — such as requiring phishing-resistant multifactor authentication.

How are cybercriminals typically gaining access to police and government email accounts? Donahue said it’s still mostly email-based phishing, and credentials that are stolen by opportunistic malware infections and sold on the dark web. But as bad as things are internationally, he said, many law enforcement entities in the United States still have much room for improvement in account security.

“Unfortunately, a lot of this is phishing or malware campaigns,” Donahue said. “A lot of global police agencies don’t have stringent cybersecurity hygiene, but even U.S. dot-gov emails get hacked. Over the last nine months, I’ve reached out to CISA (the Cybersecurity and Infrastructure Security Agency) over a dozen times about .gov email addresses that were compromised and that CISA was unaware of.”

365 TomorrowsReality Check

Author: Melissa Kobrin Claire looked nervously at the coffin-shaped vat of green goo in front of her and tried to remember that this was one of the best days of her life. Her bachelorette party was going to be beyond her wildest dreams. And fantasies. The only reason Caleb was okay with it was because […]

The post Reality Check appeared first on 365tomorrows.


Planet DebianThomas Lange: Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques.

I'm a happy NIS (formerly YP) user since 30+ years. I started using it with SunOS 4.0, later using it with Solaris and with Linux since 1999.

In the past, a colleague wasn't happy using NIS+ when he couldn't log in as root after a short time because of some well-known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024.

I know the security implications of NIS but I can live with them, because I manage all computers in the network that have access to the NIS maps. And NIS on Linux offers to use shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.

Unfortunately NIS support for the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) is lacking NIS support in PAM, but otherwise it is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use other NIS maps.

But yppasswd is deprecated and you should not use it! If you use yppasswd it may generate a new password hash using the old DES crypt algorithm, which is very weak and only uses the first 8 characters of your password. Do not use yppasswd any more! yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it only creates weak DES hashes after a password change. yescrypt hashes, which are the default in Debian 12, are not supported at all. The solution is to use the plain passwd program.

On the NIS master, you should setup your NIS configuration to use /etc/shadow and /etc/passwd even if your other NIS maps are in /var/yp/src or similar. Make sure to have these lines in your /var/yp/Makefile:

PASSWD      = /etc/passwd
SHADOW      = /etc/shadow

Call make once, and it will generate the shadow and passwd maps. You may want to set the variable MINUID, which defines which entries are not put into the NIS maps; an example is shown below.
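For example (on a default Debian install the first regular user account gets UID 1000, so a setting like this keeps system accounts out of the maps; the exact value is only a suggestion):

MINUID = 1000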

On all NIS clients you still need the entries in /etc/nsswitch.conf (for passwd, shadow, group, ...) that point to the nis service. E.g.:

passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis

You can remove all occurrences of "nis" in your /etc/pam.d/common-password file.

Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map.

Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:

#! /bin/sh

PATH=/usr/sbin:/usr/bin

# only watch the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi

cd /var/yp || exit 3
sleep 2
make

Then install the package incron.

# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e

Add this line:

/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%

It's not possible to use IN_MODIFY or watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file does not work after the file was removed.

You can see the logs from incrond by using:

# journalctl _COMM=incrond
e.g.

Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)

I've disabled the execution of yppasswd using dpkg-divert

# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
# chmod a-rwx /usr/bin/yppasswd-disable

Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets.

I've also discovered the package pamtester, which is a nice package for testing your pam configs.

Worse Than FailureError'd: Relatively Speaking

Amateur physicist B.J. is going on vacation, but he likes to plan things right down to the zeptosecond. "Assume the flight accelerates at a constant speed for the first half of the flight, and decelerates at the same rate for the second half. 1) What speed does the plane need to reach to have that level of time dilation? 2) What is the distance between the airports?"


Contrarily, Eddie R. was tired of vacation so got a new job, but right away he's having second thoughts. "Doing my onboarding, but they seem to have trouble with the idea of optional."


"Forget UTF-8! Have you heard about the new, hot encoding standard for 2024?!" exclaimed Daniel , kvetching "Well, if you haven't then Gravity Forms co. is going to change your mind: URLEncode everything now! Specially if you need to display some diacritics on your website. Throw away the old, forgotten UTF-8. Be a cool guy, just use that urlencode!"


Immediately afterward, Daniel also sent us another good example, this time from Hetzner. He complains "Hetzner says the value is invalid. Of course they won't say what is or isn't allowed. It wasn't the slash character, it was... a character with diacritics! Hetzner is clearly using US-ASCII created in 1960's."


Finally this week, we pulled something out of the archive from Boule de Berlin who wrote "Telekom, the biggest German ISP, shows email address validation is hard. They use a regex that limits the TLD part of an email address to 4 chars." Old but timeless.


[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianFreexian Collaborators: Debian Contributions: October’s report (by Anupa Ann Joseph)

Debian Contributions: 2024-10

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

rebootstrap, by Helmut Grohne

After significant changes earlier this year, the state of architecture cross bootstrap is normalizing again. More and more architectures manage to complete rebootstrap testing successfully again. Here are two examples of what kind of issues the bootstrap testing identifies.

At some point, libpng1.6 would fail to cross build on musl architectures, failing to locate zlib, whereas it would succeed on other ones. Adding --debug-find to the cmake invocation eventually revealed that it would fail to search in /usr/lib/<triplet>, which is the default library path. This turned out to be a bug in cmake assuming that all linux systems use glibc. libpng1.6 also gained a baseline violation for powerpc and ppc64 by enabling the use of AltiVec there.

The newt package would fail to cross build for many 32-bit architectures whereas it would succeed for armel and armhf due to -Wincompatible-pointer-types. It turns out that this flag was turned into -Werror and it was compiling with a warning earlier. The actual problem is a difference in signedness between wchar_t and FriBidiChar (aka uint32_t) and actually affects native building on i386.

Miscellaneous contributions

  • Helmut sent 35 patches for cross build failures.
  • Stefano Rivera uploaded the Python 3.13.0 final release.
  • Stefano continued to rebuild Python packages with C extensions using Python 3.13, to catch compatibility issues before the 3.13-add transition starts.
  • Stefano uploaded new versions of a handful of Python packages, including: dh-python, objgraph, python-mitogen, python-truststore, and python-virtualenv.
  • Stefano packaged a new release of mkdocs-macros-plugin, which required packaging a new Python package for Debian, python-super-collections (now in NEW review).
  • Stefano helped the mini-DebConf Online Brazil get video infrastructure up and running for the event. Unfortunately, Debian’s online-DebConf setup has bitrotted over the last couple of years, and it eventually required new temporary Jitsi and Jibri instances.
  • Colin Watson fixed a number of autopkgtest failures to get ansible back into testing.
  • Colin fixed an ssh client failure in certain cases when using GSS-API key exchange, and added an integration test to ensure this doesn’t regress in future.
  • Colin worked on the Python 3.13 transition, fixing problems related to it in 15 packages. This included upstream work in a number of packages (postgresfixture, python-asyncssh, python-wadllib).
  • Colin upgraded 41 Python packages to new upstream versions.
  • Carles improved po-debconf-manager: now it can create merge requests to Salsa automatically (created 17, new batch coming this month), imported almost all the packages with debconf translation templates whose VCS is Salsa (currently 449 imported), added statistics per package and language, improved command line interface options. Performed user support fixing different issues. Also prepared an abstract for the talk at MiniDebConf Toulouse.
  • Santiago Ruano Rincón continued the organization work for the DebConf 25 conference, to be held in Brest, France. Part of the work relates to the initial edits of the sponsoring brochure. Thanks to Benjamin Somers who finalized the French and English versions.
  • Raphaël forwarded a couple of zim and hamster bugs to the upstream developers, and tried to diagnose a delayed startup of gdm on his laptop (cf #1085633).
  • On behalf of the Debian Publicity Team, Anupa interviewed 7 women from the Debian community, old and new contributors. The interview was published in Bits from Debian.

Planet DebianReproducible Builds (diffoscope): diffoscope 283 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 283. This version includes the following changes:

[ Martin Abente Lahaye ]
* Fix crash when objdump is missing when checking .EFI files.

You find out more by visiting the project homepage.


Cryptogram AI Industry is Trying to Subvert the Definition of “Open Source AI”

The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it’s how the model gets programmed—the definition makes no sense.

And it’s confusing; most “open source” AI models—like LLAMA—are open source in name only. But the OSI seems to have been co-opted by industry players that want both corporate secrecy and the “open source” label. (Here’s one rebuttal to the definition.)

This is worth fighting for. We need a public AI option, and open source—real open source—is a necessary component of that.

But while open source should mean open source, there are some partially open models that need some sort of definition. There is a big research field of privacy-preserving, federated methods of ML model training and I think that is a good thing. And OSI has a point here:

Why do you allow the exclusion of some training data?

Because we want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information – like decisions about their health. Similarly, much of the world’s Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

How about we call this “open weights” and not open source?

Cryptogram Friday Squid Blogging: Squid-A-Rama in Des Moines

Squid-A-Rama will be in Des Moines at the end of the month.

Visitors will be able to dissect squid, explore fascinating facts about the species, and witness a live squid release conducted by local divers.

How are they doing a live squid release? Simple: this is Des Moines, Washington; not Des Moines, Iowa.

Blog moderation policy.

Cryptogram Prompt Injection Defenses Against LLM Cyberattacks

Interesting research: “Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks“:

Large language models (LLMs) are increasingly being harnessed to automate cyberattacks, making sophisticated exploits more accessible and scalable. In response, we propose a new defense strategy tailored to counter LLM-driven cyberattacks. We introduce Mantis, a defensive framework that exploits LLMs’ susceptibility to adversarial inputs to undermine malicious operations. Upon detecting an automated cyberattack, Mantis plants carefully crafted inputs into system responses, leading the attacker’s LLM to disrupt their own operations (passive defense) or even compromise the attacker’s machine (active defense). By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker’s LLM, Mantis can autonomously hack back the attacker. In our experiments, Mantis consistently achieved over 95% effectiveness against automated LLM-driven attacks. To foster further research and collaboration, Mantis is available as an open-source tool: this https URL.

This isn’t the solution, of course. But this sort of thing could be part of a solution.

LongNowHow Our Economic Stories Shape the World

💡 The piece was included in the 02024 edition of Pace Layers, Long Now's Annual Journal of the best of long-term thinking.

What is the economy? It’s actually quite a difficult question. One thing for certain, though, is that the economy is not a neutral thing. The language we use to describe it could lead us to believe that it’s a machine, one we can kick start with enough stimulus money from nation-states or central banks. Or that it exists in the S&P 500, the global commodities market, or the amount of money circulating in the economy.

But the economy is better thought of as an emergent phenomenon based on our adopted stories and the values they contain, and how these values become formalized through technology, law, trade, and regulation.

One of the places where the values of an economic era manifest themselves is in the skyline of our cities. Take London, for example, where I lived for a few years. The skyline of London used to be dominated by religious buildings because the church represented one of the most powerful economic institutions of the time. Later, as the role of the state grew as provisioner of public services and security, buildings like the Elizabeth Tower (Big Ben) and large judicial and legislative buildings emerged with grandeur. And now of course the skyline of London is dominated by banks, financial institutions, and luxury condos — which is also the case with many large cities globally today.

These stories, these paradigms, and notions of who we are as a species, quite literally scaffold the world around us. They embody and serve as the great monuments to our economic paradigms and cultural meta-memes of different eras. These stories become embodied in the physical substrate of our world — they become institutionalized: through technologies, administrative processes and bureaucracy, case law, regulatory and policy documents and approaches. They, then, ossify through feedback loops inherent to our systems.

Institutions with narrative hegemony — like the church, or the state, or financial markets — can orient the entire legal and financial apparatus around their entrenchment. They impose their values through encoding the law around their own protection. 

As Katharina Pistor points out in her groundbreaking book The Code of Capital: How the Law Creates Wealth and Inequality, what we call “capital” is coded from a few legal modules that have long existed: contract law, property rights, collateral law, trust, corporate and bankruptcy law. 

Two predominant legal systems — English common law and New York State laws — dominate global capital laws. London and New York house all of the top 100 law firms, as well as many of the largest global financial institutions. 

She says: “This is where most capital is coded today, especially financial capital, the intangible capital that exists only in law. The historical precedent for global rule by one or several powers is empire. Law’s empire has less need for troops; it relies instead on the normative authority of the law, and its most powerful battle cry is ‘but it is legal.’”

What is normative authority in law? Some version of collective agreement about what is morally or socially acceptable. We first create social and moral judgments about how to ascribe value in the economy, and we codify it in law — we allocate the law around the protection of those judgments.

For example, stock market “value” is a measure of expectations of future performance. And what are expectations? They are collective agreements containing baseline assumptions about the future, and its projections. 

They are stories.

A question to ask ourselves: What values do we want to manifest themselves in the skylines of our great cities in the future?

This transcript of Denise Hearn’s Long Now Talk has been lightly edited for clarity. 

Planet DebianJonathan Dowland: John Carpenter's "The Fog"

'The Fog' 7 inch vinyl record

A gift from my brother. Coincidentally I’ve had John Carpenter’s “Halloween” echoing around my head for weeks: I’ve been deconstructing it and trying to learn to play it.

Worse Than FailureRepresentative Line: One More Parameter, Bro

Matt needed to add a new field to a form. This simple task was made complicated by the method used to save changes back to the database. Let's see if you can spot what the challenge was:

public int saveQualif(String docClass, String transcomId, String cptyCod, String tradeId, String originalDealId, String codeEvent, String multiDeal,
            String foNumber, String codeInstrfamily, String terminationDate, String premiumAmount, String premiumCurrency, String notionalAmount,
            String codeCurrency, String notionalAmount2, String codeCurrency2, String fixedRate, String payout, String maType, String maDate,
            String isdaZoneCode, String tradeDate, String externalReference, String entityCode, String investigationFileReference,
            String investigationFileStartDate, String productType, String effectiveDate, String expiryDate, String paymentDate, String settInstrucTyp,
            String opDirection, String pdfPassword, String extlSysCod, String extlDeaId, String agrDt) throws TechnicalException, DfException

That's 36 parameters right there. This function, internally, creates a data access object which takes just as many parameters in its constructor, and then does a check: if a field is non-null, it updates that field in the database, otherwise it doesn't.

Of course, every single one of those parameters is stringly typed, which makes it super fun. Tracking premiumAmount and terminationDate as strings is certainly never going to lead to problems. I especially like the pdfPassword being stored, which is clearly just the low-security password meant to be used for encrypting a transaction statement or similar: "the last 4 digits of your SSN" or whatever. So I guess it's okay that it's being stored in the clear in the database, but also I still hate it. Do better!

In any case, this function was called twice. Once from the form that Matt was editing, where every parameter was filled in. The second time, it was called like this:

int nbUpdates = incoming.saveQualif(docClass, null, null, null, null, null, multiDeal, null,
                null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null,
                null, null, null, null, null, null, null, null, null, null, null, null);

As tempted as Matt was to fix this method and break it up into multiple calls or change the parameters to a set of classes or anything better, he was too concerned about breaking something and spending a lot of time on something which was meant to be a small, fast task. So like everyone who'd come before him, he just slapped in another parameter, tested it, and called it a day.

Refactoring is a problem for tomorrow's developer.
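Should that tomorrow ever arrive, a first step might be a parameter object, sketched below with invented names and only a few of the 36 fields shown; null still means "leave unchanged", but dates and amounts get real types:

import java.math.BigDecimal;
import java.time.LocalDate;

public final class QualifUpdate {
    private final String docClass;       // required key
    private String multiDeal;            // optional; null means "leave unchanged"
    private BigDecimal premiumAmount;    // typed instead of stringly
    private LocalDate terminationDate;
    // ... one field and setter per remaining column ...

    public QualifUpdate(String docClass) {
        this.docClass = docClass;
    }

    public QualifUpdate multiDeal(String value) {
        this.multiDeal = value;
        return this;
    }

    public QualifUpdate premiumAmount(BigDecimal value) {
        this.premiumAmount = value;
        return this;
    }

    public QualifUpdate terminationDate(LocalDate value) {
        this.terminationDate = value;
        return this;
    }
}

The 36-null call would then collapse to something like saveQualif(new QualifUpdate(docClass).multiDeal(multiDeal)).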

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsI Need A New Human

Author: Lynne M Curry I’m your Chatbot Partner. Do you know I exist? Don’t get upset—I know you’re not oblivious. But you never say anything. Not once, not even a passing, “Oh, I hadn’t thought of that” or “thanks.” Do you know I picked you? Maybe you think you just clicked “open AI Chatbot” and, […]

The post I Need A New Human appeared first on 365tomorrows.


Rondam RamblingsThe Bright Side of the Election Results

I'm writing this at 9AM Pacific standard time on November 6, the morning after the election.  Not all the dust has quite settled yet, but two things are clear: Donald Trump has won, and the Republicans have taken control of the Senate.  The House is still a toss-up, and it's still unclear whether Trump will win the popular vote, but the last time I looked at the numbers he had a pretty

365 TomorrowsThe Fall of Man

Author: Alastair Millar Prosperina Station’s marketing slogan, “No sun means more fun!”, didn’t do it justice: circling the wandering gas giant PSO J318.5-22, better known as Dis, it was the ultimate in literally non-stop nightlife, seasoned with a flexible approach to Terran laws. Newly graduated robot designer Max Wayne knew she was a decade or […]

The post The Fall of Man appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Uniquely Validated

There's the potential for endless installments of "programmers not understanding how UUIDs work." Frankly, I think the fact that we represent them as human-readable strings is part of the problem; sure, it's readable, but it conceals the fact that it's just a large integer.

Which brings us to this snippet, from Capybara James.

    if (!StringUtils.hasLength(uuid) || uuid.length() != 36) {
        throw new RequestParameterNotFoundException(ErrorCodeCostants.UUID_MANDATORY_OR_FORMAT);
    }

StringUtils.hasLength comes from the Spring library, and it's a simple "is not null or empty" check. So, we're testing to see if a string is null or empty, or isn't exactly 36 characters long. That tells us the input is bad, so we throw a RequestParameterNotFoundException, along with an error code.

So, as already pointed out, a UUID is just a large integer that we render as a 36 character string, and there are better ways to validate a UUID. But this also will accept any 36 character string- as long as you've got 36 characters, we'll call it a UUID. "This is valid, really valid, dumbass" is now a valid UUID.
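One of those better ways, as a sketch using only the standard library: let java.util.UUID do the parsing. Note that UUID.fromString is lenient about short groups (it accepts strings like "1-1-1-1-1"), so the length check still earns its keep here:

import java.util.UUID;

public static boolean isValidUuid(String input) {
    // reject null and anything that isn't the canonical 36-character form
    if (input == null || input.length() != 36) {
        return false;
    }
    try {
        UUID.fromString(input);  // validates the structure, not just the length
        return true;
    } catch (IllegalArgumentException e) {
        return false;
    }
}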

With that in mind, I also like the bonus of it not distinguishing between whether or not the input was missing or invalid, because that'll make it real easy for users to understand why their input is getting rejected.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Cryptogram IoT Devices in Password-Spraying Botnet

Microsoft is warning Azure cloud users that a Chinese-controlled botnet is engaging in “highly evasive” password spraying. Not sure about the “highly evasive” part; the techniques seem basically what you get in a distributed password-guessing attack:

“Any threat actor using the CovertNetwork-1658 infrastructure could conduct password spraying campaigns at a larger scale and greatly increase the likelihood of successful credential compromise and initial access to multiple organizations in a short amount of time,” Microsoft officials wrote. “This scale, combined with quick operational turnover of compromised credentials between CovertNetwork-1658 and Chinese threat actors, allows for the potential of account compromises across multiple sectors and geographic regions.”

Some of the characteristics that make detection difficult are:

  • The use of compromised SOHO IP addresses
  • The use of a rotating set of IP addresses at any given time. The threat actors had thousands of available IP addresses at their disposal. The average uptime for a CovertNetwork-1658 node is approximately 90 days.
  • The low-volume password spray process; for example, monitoring for multiple failed sign-in attempts from one IP address or to one account will not detect this activity.


Krebs on SecurityCanadian Man Arrested in Snowflake Data Extortions

A 25-year-old man in Ontario, Canada has been arrested for allegedly stealing data from and extorting more than 160 companies that used the cloud data service Snowflake.

Image: https://www.pomerium.com/blog/the-real-lessons-from-the-snowflake-breach

On October 30, Canadian authorities arrested Alexander Moucka, a.k.a. Connor Riley Moucka of Kitchener, Ontario, on a provisional arrest warrant from the United States. Bloomberg first reported Moucka’s alleged ties to the Snowflake hacks on Monday.

At the end of 2023, malicious hackers learned that many large companies had uploaded huge volumes of sensitive customer data to Snowflake accounts that were protected with little more than a username and password (no multi-factor authentication required). After scouring darknet markets for stolen Snowflake account credentials, the hackers began raiding the data storage repositories used by some of the world’s largest corporations.

Among those was AT&T, which disclosed in July that cybercriminals had stolen personal information and phone and text message records for roughly 110 million people — nearly all of its customers. Wired.com reported in July that AT&T paid a hacker $370,000 to delete stolen phone records.

A report on the extortion attacks from the incident response firm Mandiant notes that Snowflake victim companies were privately approached by the hackers, who demanded a ransom in exchange for a promise not to sell or leak the stolen data. All told, more than 160 Snowflake customers were relieved of data, including TicketMaster, Lending Tree, Advance Auto Parts and Neiman Marcus.

Moucka is alleged to have used the hacker handles Judische and Waifu, among many others. These monikers correspond to a prolific cybercriminal whose exploits were the subject of a recent story published here about the overlap between Western, English-speaking cybercriminals and extremist groups that harass and extort minors into harming themselves or others.

On May 2, 2024, Judische claimed on the fraud-focused Telegram channel Star Chat that they had hacked Santander Bank, one of the first known Snowflake victims. Judische would repeat that claim in Star Chat on May 13 — the day before Santander publicly disclosed a data breach — and would periodically blurt out the names of other Snowflake victims before their data even went up for sale on the cybercrime forums.

404 Media reports that at a court hearing in Ontario this morning, Moucka called in from a prison phone and said he was seeking legal aid to hire an attorney.

TELECOM DOMINOES

Mandiant has attributed the Snowflake compromises to a group it calls “UNC5537,” with members based in North America and Turkey. Sources close to the investigation tell KrebsOnSecurity the UNC5537 member in Turkey is John Erin Binns, an elusive American man indicted by the U.S. Department of Justice (DOJ) for a 2021 breach at T-Mobile that exposed the personal information of at least 76.6 million customers.

Update: The Justice Department has unsealed an indictment (PDF) against Moucka and Binns, charging them with one count of conspiracy; 10 counts of wire fraud; four counts of computer fraud and abuse; two counts of extortion in relation to computer fraud; and two counts of aggravated identity theft.

In a statement on Moucka’s arrest, Mandiant said “UNC5537 aka Alexander ‘Connor’ Moucka has proven to be one of the most consequential threat actors of 2024.”

“In April 2024, UNC5537 launched a campaign, systematically compromising misconfigured SaaS instances across over a hundred organizations,” wrote Austin Larsen, Mandiant’s senior threat analyst. “The operation, which left organizations reeling from significant data loss and extortion attempts, highlighted the alarming scale of harm an individual can cause using off-the-shelf tools.”

Sources involved in the investigation said UNC5537 has focused on hacking into telecommunications companies around the world. Those sources told KrebsOnSecurity that Binns and Judische are suspected of stealing data from India’s largest state-run telecommunications firm Bharat Sanchar Nigam Ltd (BSNL), and that the duo even bragged about being able to intercept or divert phone calls and text messages for a large portion of the population of India.

Judische appears to have outsourced the sale of databases from victim companies who refuse to pay, delegating some of that work to a cybercriminal who uses the nickname Kiberphant0m on multiple forums. In late May 2024, Kiberphant0m began advertising the sale of hundreds of gigabytes of data stolen from BSNL.

“Information is worth several million dollars but I’m selling for pretty cheap,” Kiberphant0m wrote of the BSNL data in a post on the English-language cybercrime community Breach Forums. “Negotiate a deal in Telegram.”

Also in May 2024, Kiberphant0m took to the Russian-language hacking forum XSS to sell more than 250 gigabytes of data stolen from an unnamed mobile telecom provider in Asia, including a database of all active customers and software allowing the sending of text messages to all customers.

On September 3, 2024, Kiberphant0m posted a sales thread on XSS titled “Selling American Telecom Access (100B+ Revenue).” Kiberphant0m’s asking price of $200,000 was apparently too high because they reposted the sales thread on Breach Forums a month later, with a headline that more clearly explained the data was stolen from Verizon‘s “push-to-talk” (PTT) customers — primarily U.S. government agencies and first responders.

404Media reported recently that the breach does not appear to impact the main consumer Verizon network. Rather, the hackers broke into a third party provider and stole data on Verizon’s PTT systems, which are a separate product marketed towards public sector agencies, enterprises, and small businesses to communicate internally.

INTERVIEW WITH JUDISCHE

Investigators say Moucka shared a home in Kitchener with other tenants, but not his family. His mother was born in Chechnya, and he speaks Russian in addition to French and English. Moucka’s father died of a drug overdose at age 26, when the defendant was roughly five years old.

A person claiming to be Judische began communicating with this author more than three months ago on Signal after KrebsOnSecurity started asking around about hacker nicknames previously used by Judische over the years.

Judische admitted to stealing and ransoming data from Snowflake customers, but he said he’s not interested in selling the information, and that others have done this with some of the data sets he stole.

“I’m not really someone that sells data unless it’s crypto [databases] or credit cards because they’re the only thing I can find buyers for that actually have money for the data,” Judische told KrebsOnSecurity. “The rest is just ransom.”

Judische has sent this reporter dozens of unsolicited and often profane messages from several different Signal accounts, all of which claimed to be an anonymous tipster sharing different identifying details for Judische. This appears to have been an elaborate effort by Judische to “detrace” his movements online and muddy the waters about his identity.

Judische frequently claimed he had unparalleled “opsec” or operational security, a term that refers to the ability to compartmentalize and obfuscate one’s tracks online. In an effort to show he was one step ahead of investigators, Judische shared information indicating someone had given him a Mandiant researcher’s assessment of who and where they thought he was. Mandiant says those were discussion points shared with select reporters in advance of the researcher’s recent talk at the LabsCon security conference.

But in a conversation with KrebsOnSecurity on October 26, Judische acknowledged it was likely that the authorities were closing in on him, and said he would seriously answer certain questions about his personal life.

“They’re coming after me for sure,” he said.

In several previous conversations, Judische referenced suffering from an unspecified personality disorder, and when pressed said he has a condition called “schizotypal personality disorder” (STPD).

According to the Cleveland Clinic, schizotypal personality disorder is marked by a consistent pattern of intense discomfort with relationships and social interactions: “People with STPD have unusual thoughts, speech and behaviors, which usually hinder their ability to form and maintain relationships.”

Judische said he was prescribed medication for his psychological issues, but that he doesn’t take his meds. Which might explain why he never leaves his home.

“I never go outside,” Judische allowed. “I’ve never had a friend or true relationship not online nor in person. I see people as vehicles to achieve my ends no matter how friendly I may seem on the surface, which you can see by how fast I discard people who are loyal or [that] I’ve known a long time.”

Judische later admitted he doesn’t have an official STPD diagnosis from a physician, but said he knows that he exhibits all the signs of someone with this condition.

“I can’t actually get diagnosed with that either,” Judische shared. “Most countries put you on lists and restrict you from certain things if you have it.”

Asked whether he has always lived at his current residence, Judische replied that he had to leave his hometown for his own safety.

“I can’t live safely where I’m from without getting robbed or arrested,” he said, without offering more details.

A source familiar with the investigation said Moucka previously lived in Quebec, which he allegedly fled after being charged with harassing others on the social network Discord.

Judische claims to have made at least $4 million in his Snowflake extortions. Judische said he and others frequently targeted business process outsourcing (BPO) companies, staffing firms that handle customer service for a wide range of organizations. They also went after managed service providers (MSPs) that oversee IT support and security for multiple companies, he claimed.

“Snowflake isn’t even the biggest BPO/MSP multi-company dataset on our networks, but what’s been exfiltrated from them is well over 100TB,” Judische bragged. “Only ones that don’t pay get disclosed (unless they disclose it themselves). A lot of them don’t even do their SEC filing and just pay us to fuck off.”

INTEL SECRETS

The other half of UNC5537 — 24-year-old John Erin Binns — was arrested in Turkey in late May 2024, and currently resides in a Turkish prison. However, it is unclear whether Binns faces any immediate threat of extradition to the United States, where he is wanted on criminal hacking charges tied to the 2021 breach at T-Mobile.

A person familiar with the investigation said Binns’s application for Turkish citizenship was inexplicably approved after his incarceration, leading to speculation that Binns may have bought his way out of a sticky legal situation.

Under the Turkish constitution, a Turkish citizen cannot be extradited to a foreign state. Turkey has been criticized for its “golden passport” program, which provides citizenship and sanctuary for anyone willing to pay several hundred thousand dollars.

This is an image of a passport that Binns shared in one of many unsolicited emails to KrebsOnSecurity since 2021. Binns never explained why he sent this in Feb. 2023.

Binns’s alleged hacker alter egos — “IRDev” and “IntelSecrets” — were at once feared and revered on several cybercrime-focused Telegram communities, because he was known to possess a powerful weapon: A massive botnet. From reviewing the Telegram channels Binns frequented, we can see that others in those communities — including Judische — heavily relied on Binns and his botnet for a variety of cybercriminal purposes.

The IntelSecrets nickname corresponds to an individual who has claimed responsibility for modifying the source code for the Mirai “Internet of Things” botnet to create a variant known as “Satori,” and supplying it to others who used it for criminal gain and were later caught and prosecuted.

Since 2020, Binns has filed a flood of lawsuits naming various federal law enforcement officers and agencies — including the FBI, the CIA, and the U.S. Special Operations Command (PDF), demanding that the government turn over information collected about him and seeking restitution for his alleged kidnapping at the hands of the CIA.

Binns claims he was kidnapped in Turkey and subjected to various forms of psychological and physical torture. According to Binns, the U.S. Central Intelligence Agency (CIA) falsely told their counterparts in Turkey that he was a supporter or member of the Islamic State (ISIS), a claim he says led to his detention and torture by the Turkish authorities.

However, in a 2020 lawsuit he filed against the CIA, Binns himself acknowledged having visited a previously ISIS-controlled area of Syria prior to moving to Turkey in 2017.

A segment of a lawsuit Binns filed in 2020 against the CIA, in which he alleges the U.S. put him on a terror watch list after he traveled to Syria in 2017.

Sources familiar with the investigation told KrebsOnSecurity that Binns was so paranoid about possible surveillance on him by American and Turkish intelligence agencies that his erratic behavior and online communications actually brought about the very government snooping that he feared.

In several online chats in late 2023 on Discord, IRDev lamented being lured into a law enforcement sting operation after trying to buy a rocket launcher online. A person close to the investigation confirmed that at the beginning of 2023, IRDev began making earnest inquiries about how to purchase a Stinger, an American-made portable weapon that operates as an infrared surface-to-air missile.

Sources told KrebsOnSecurity that Binns's repeated efforts to purchase the projectile earned him multiple visits from the Turkish authorities, who were justifiably curious why he kept seeking to acquire such a powerful weapon.

WAIFU

A careful study of Judische’s postings on Telegram and Discord since 2019 shows this user is more widely known under the nickname “Waifu,” a moniker that corresponds to one of the more accomplished “SIM swappers” in the English-language cybercrime community over the years.

SIM swapping involves phishing, tricking or bribing mobile phone company employees for credentials needed to redirect a target’s mobile phone number to a device the attackers control — allowing thieves to intercept incoming text messages and phone calls.

Several SIM-swapping channels on Telegram maintain a frequently updated leaderboard of the 100 richest SIM-swappers, as well as the hacker handles associated with specific cybercrime groups (Waifu is ranked #24). That list has long included Waifu on a roster of hackers for a group that called itself “Beige.”

The term “Beige Group” came up in reporting on two stories published here in 2020. The first was in an August 2020 piece called Voice Phishers Targeting Corporate VPNs, which warned that the COVID-19 epidemic had brought a wave of targeted voice phishing attacks that tried to trick work-at-home employees into providing access to their employers’ networks.

The second time Beige Group was mentioned by sources was in reporting on a breach at the domain registrar GoDaddy. In November 2020, intruders thought to be associated with the Beige Group tricked a GoDaddy employee into installing malicious software, and with that access they were able to redirect the web and email traffic for multiple cryptocurrency trading platforms. Other frequent targets of the Beige group included employees at numerous top U.S. banks, ISPs, and mobile phone providers.

Judische’s various Telegram identities have long claimed involvement in the 2020 GoDaddy breach, and he didn’t deny his alleged role when asked directly. Judische said he prefers voice phishing or “vishing” attacks that result in the target installing data-stealing malware, as opposed to tricking the user into entering their username, password and one-time code.

“Most of my ops involve malware [because] credential access burns too fast,” Judische explained.

CRACKDOWN ON HARM GROUPS?

The Telegram channels that the Judische/Waifu accounts frequented over the years show this user divided their time between posting in channels dedicated to financial cybercrime, and harassing and stalking others in harm communities like Leak Society and Court.

Both of these Telegram communities are known for victimizing children through coordinated online campaigns of extortion, doxing, swatting and harassment. People affiliated with harm groups like Court and Leak Society will often recruit new members by lurking on gaming platforms, social media sites and mobile applications that are popular with young people, including Discord, Minecraft, Roblox, Steam, Telegram, and Twitch.

“This type of offence usually starts with a direct message through gaming platforms and can move to more private chatrooms on other virtual platforms, typically one with video enabled features, where the conversation quickly becomes sexualized or violent,” warns a recent alert from the Royal Canadian Mounted Police (RCMP) about the rise of sextortion groups on social media channels.

“One of the tactics being used by these actors is sextortion, however, they are not using it to extract money or for sexual gratification,” the RCMP continued. “Instead they use it to further manipulate and control victims to produce more harmful and violent content as part of their ideological objectives and radicalization pathway.”

Some of the largest such known groups include those that go by the names 764, CVLT, Kaskar, 7997, 8884, 2992, 6996, 555, Slit Town, 545, 404, NMK, 303, and H3ll.

On the various cybercrime-oriented channels Judische frequented, he often lied about his or others’ involvement in various breaches. But Judische also at times shared nuggets of truth about his past, particularly when discussing the early history and membership of specific Telegram- and Discord-based cybercrime and harm groups.

Judische claimed in multiple chats, including on Leak Society and Court, that he was an early member of the Atomwaffen Division (AWD), a white supremacy group whose members are suspected of having committed multiple murders in the U.S. since 2017.

In 2019, KrebsOnSecurity exposed how a loose-knit group of neo-Nazis, some of whom were affiliated with AWD, had doxed and/or swatted nearly three dozen journalists at a range of media publications. Swatting involves communicating a false police report of a bomb threat or hostage situation and tricking authorities into sending a heavily armed police response to a targeted address.

Judische also told a fellow denizen of Court that years ago he was active in an older harm community called “RapeLash,” a truly vile Discord server known for attracting Atomwaffen members. A 2018 retrospective on RapeLash posted to the now-defunct neo-Nazi forum Fascist Forge explains that RapeLash was awash in gory, violent images and child pornography.

A Fascist Forge member named “Huddy” recalled that RapeLash was the third incarnation of an extremist community also known as “FashWave,” short for Fascist Wave.

“I have no real knowledge of what happened with the intermediary phase known as ‘FashWave 2.0,’ but FashWave 3.0 houses multiple known Satanists and other degenerates connected with AWD, one of which got arrested on possession of child pornography charges, last I heard,” Huddy shared.

In June 2024, a Mandiant employee told Bloomberg that UNC5537 members have made death threats against cybersecurity experts investigating the hackers, and that in one case the group used artificial intelligence to create fake nude photos of a researcher to harass them.

Allison Nixon is chief research officer with the New York-based cybersecurity firm Unit 221B. Nixon is among several researchers who have faced harassment and specific threats of physical violence from Judische.

Nixon said Judische is likely to argue in court that his self-described psychological disorder(s) should somehow excuse his long career in cybercrime and in harming others.

“They ran a misinformation campaign in a sloppy attempt to cover up the hacking campaign,” Nixon said of Judische. “Coverups are an acknowledgment of guilt, which will undermine a mental illness defense in court. We expect that violent hackers from the [cybercrime community] will experience increasingly harsh sentences as the crackdown continues.”

5:34 p.m. ET: Updated story to include a clarification from Mandiant. Corrected Moucka’s age.

Nov. 21, 2024: Included link to a criminal indictment against Moucka and Binns.

365 Tomorrows: Bifurcation

Author: Majoki

Her fingers stinging, Salda felt the chill and vastness of the late spring runoff as she sat upon a large stone in the middle of the river. High above her in the mountains, that same frigid water was a torrent muscling rock and soil relentlessly to carve deep channels. Channels that converged, then […]

The post Bifurcation appeared first on 365tomorrows.

Worse Than Failure: CodeSOD: Counting it All

Since it's election day in the US, many people are thinking about counting today. We frequently discuss counting here, and how to do it wrong, so let's look at some code from RK.

This code may not be counting votes, but whatever it's counting, we're not going to enjoy it:

case LogMode.Row_limit: // row limit excel = 65536 rows
    if (File.Exists(personalFolder + @"\" + fileName + ".CSV"))
    {
        using (StreamReader reader = new StreamReader(personalFolder + @"\" + fileName + ".CSV"))
        {
            countRows = reader.ReadToEnd().Split(new char[] { '\n' }).Length;
        }
    }

Now, this code is from a rather old application, originally released in 2007. So the comment about Excel's row limit really puts us in a moment in time: Excel 2007 raised the row limit to just over a million rows (1,048,576, to be precise), but older versions of Excel did cap out at 65,536. And it wasn't the case that everyone just up and switched to Excel 2007 when it came out; the transition to the new Office file formats took years.

But we're not even reading an Excel file; we're reading a CSV.

I enjoy that we construct the file path twice, because that's useful. But the real magic of this one is how we count the rows. While Excel could handle 65,536 rows at the time, I don't think this program is going to do a great job of it: we read the entire file into memory with ReadToEnd, then Split on newlines, then count the length of the resulting array.

As you can imagine, in practice, this performed terribly on large files, of which there were many.
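For the record, the fix really is short. Here's a minimal sketch of the streaming approach, reusing the snippet's personalFolder, fileName, and countRows names and sticking to the same StreamReader API the application already uses (assumes System.IO is imported, as the original code implies):

// Build the path once instead of twice.
string path = personalFolder + @"\" + fileName + ".CSV";
int countRows = 0;
if (File.Exists(path))
{
    using (StreamReader reader = new StreamReader(path))
    {
        // ReadLine pulls one line at a time, so memory use stays flat
        // no matter how large the CSV grows.
        while (reader.ReadLine() != null)
        {
            countRows++;
        }
    }
}

As a bonus, this sidesteps the off-by-one that the Split approach introduces whenever the file ends with a trailing newline (the split produces an empty final element, inflating the count).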

Unfortunately for RK, there's one rule about old, legacy code: don't touch it. So despite fixing this being a rather easy task, nobody is working on fixing it, because nobody wants to be the one who touched it last. Instead, management is promising to launch a greenfield replacement project any day now…


,

David Brin: Balanced perspectives for our time - JUST in time?

Just before the consequential US election (I am optimistic we can prevail over Putinism), my previous posting offered a compiled packet of jpegs and quick bullets to use if you still have a marginally approachable, residually sane neighbor or relative who is 'sanity curious.' A truly comprehensive compendium! From the under-appreciated superb economy to proved dangers of pollution. From Ukraine to proof of Trump's religion-fakery. From saving science to ...

... the biggest single sentence of them all... "Almost every single honest adult who served under Trump now denounces him." Now numbering hundreds. 

And Harrison Ford emphasizing that point with eloquence.

Anyone able to ignore that central fact... that grownups who get to know Trump all despise him... truly is already a Kremlin boy.


== More sober reflections == 

Fareed Zakaria is by far the best pundit of our time - sharp, incisive, with a well-balanced, big-picture perspective. And yet, even he is myopic about what's going on.

On this occasion, he starts with The Economist's cover story that the U.S. economy is the "Envy of the World." 

Booming manufacturing and wages, record-low unemployment, the lowest inflation among industrial nations (now down to 2%), with democratic policies finally transferring money to the middle class, after 40 years of Supply Side ripoffs for the rich. 

The Wall Street Journal - of all capitalist and traditionally Republican outfits - calls the present economy 'superb at all levels' and 'remarkable,' with real growth in middle class wages and prosperity.

 And yet, many in the working classes now despise the Rooseveltean coalition that gave them everything, and even many black & hispanic males flock to Trump's macho ravings.

Zakaria is spot-on saying it's no longer about economics - not when good times can be taken for granted. Rather, it's social and cultural, propelled by visceral loathing of urban, college educated 'elites' by those who remain blue-collar, rural and macho. 

One result - amplified in media-masturbatory echo chambers and online Nuremberg Rallies - has been all-out war against all fact-using professions, from science and teaching, medicine and law and civil service, to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on Terror.

Where Fareed gets it all wrong is in claiming this is something new!  

Elsewhere I point out that the same cultural divide has erupted across all EIGHT different phases of the American civil/cultural war since 1778. Moreover, farmers and blue-collar workers have been traumatized for a century in one crucial way: their brightest sons and daughters rushed off from high school graduation to city/university lights...

... and then came back (if they ever came back at all) changed.
It's been going on for 140 years. And the GI Bill after WWII accelerated it prodigiously.

I won't apologize for that... but I admit it's gotta hurt.

While sympathy is called for, we need to recall that the recurring confederate fever is always puppeted by aristocrats - by King George, by slaver plantation lords, by gilded-age moguls, by inheritance brats and today's murder sheiks & Kremlin "ex"-commissars... and whenever the confederacy wins (as in the 1830s, 1870s and 1920s in the United States, and 1933 Germany) the results are stagnation and horror. And every "Union" victory (as in the 1770s, 1860s, 1940s, 1960s) is followed by both moral and palpable progress.

See also Fareed Zakaria's perspectives in his recently released book, Age of Revolutions: Progress and Backlash from 1600 to the Present.


== For this last week ==

Trump has learned a lesson from his time in office. Never trust any adults or women and men of accomplishment and stature. He has said clearly he will never have another Kelly, Mattis, Mullen, Milley... or even partisan hacks with some pride, like Barr, Pence, etc... allowed anywhere near the Oval Office. 

In fact, he wants many people in his potential administration who have criminal records and cannot get security clearances under present rules. He wants to have a private firm do background checks instead of the government and military security clearance process. 

This would give a bunch of corrupt or blackmail-vulnerable criminals access to and control over our most critical and sensitive secrets.

And can anyone doubt any longer that he is a Kremlin agent?


== A final note of wisdom ==

Only one method has ever been found that can often (not always) discover, interrogate and refute lies and liars or hallucinators.**

That method has been accountability via free-speech-empowered adversarial rivalry.  Almost all of our enlightenment institutions and accomplishments and freedoms rely upon it... Pericles and Adam Smith spoke of it and the U.S. Founders enshrined it...

...and the method is almost never even remotely discussed in regard to today's tsunamis of lies.

And even if things go super well in the Tuesday election, this basic truth must also shine light into the whole new problem/opportunity of Artificial Intelligence. (And I go into that elsewhere.) 

 It must... or we're still screwed.

---
** I openly invite adversarial refutation of this assertion.

------------------------------------------
------------------------------------------

Okay okay. You want prediction? I'll offer four scenarios:

1.     Harris and dems win big. They must, for the “steal” yammer-lies to fade to nothing, except for maybe a few McVeigh eruptions. (God bless the FBI undercover guys!) In this scenario, everyone but Putin soon realizes things are already pretty good in the US and West and getting better... and many of our Republican neighbors – waking up from this insane trance – shake off confederatism and get back to loyally standing up for both America and enterprise.


And perhaps the GOP will also shake away the heavily blackmail-compromised portion of their upper castes and return to the pre-Hastert mission of negotiating sane conservative needs into a growing consensus.


2.     Harris squeaks in. We face 6 months of frantic Trumpian shrieks and Project 2025 ploys and desperate Kremlin plots and a tsunami of McVeighs.  (Again: God bless the FBI undercover guys!)  In this case, I will have a dozen ideas to present next week, to staunch the vile schemes of the Project 2025ers.


    In this case there will be confederate cries of "Secession!" over nothing real, as they had no real cause in 1861. We must answer "Okay, fine, this time. Off you go! Only we keep all military bases, and especially we keep all of your blue cities (linking them with high-speed rail), cities that get to secede from YOU! Sell us beef and oil, till we make both obsolete and you beg to be let back in. Meanwhile, your brighter sons and daughters will still come over - with scholarships. So go in peace and God bless."


3.     Trump squeaks in and begins his reign of terror. We brace ourselves for the purge of all fact-using professions, from science and teaching, medicine and law and civil service, to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on Terror. And within 6 months you will hear two words that I am speaking here for the 1st time:


                    GENERAL STRIKE. 


    A legal and mammoth job action by those who actually know stuff and how to do stuff. At which point, watch how redders realize how much they daily rely on our competence. And how quickly the oligarchs act to remove Trump, whether through accelerating senility, bribed retirement or... the Howard Beale scenario. At which point Peter Thiel (briefly) owns America. It's Putin's dream outcome as the USA betrays Ukraine and Europe and the future... and tears itself apart. But no matter how painful, remember: we've recovered before. And we'll remember that you did this, Vlad and Peter. And we'll remember those who empowered you.


    Oh, yes and this. Idiot believers in THE FOURTH TURNING will get their transformative 'crisis' that never had to happen and that they artificially caused (and we'll remember.) Above all, the Gen-Z 'hero generation' will know this. And you cultists will not like them, when they're mad.


4.     Trump landslide. Ain’t gonna happen. For one thing because Putin knows he won’t benefit if Trump is so empowered that he's freed from all puppet strings and blackmail threats. At which point Putin will suddenly realize he’s lost control - the way the German Junkers caste lords lost control in 1933, as portrayed at the end of CABARET.

Still confused why Putin wouldn't want this? Watch Angela Lansbury’s chilling soliloquy near the end of THE MANCHURIAN CANDIDATE. This outcome is the one Putin should most fear. 

By comparison, Kamala would likely let Vlad live. But a fully empowered Trump will erase Putin, along with every other oligarch who ever commanded or extorted or humiliated him - like those depicted below. And the grease stains will smolder.


Again... here's your compiled compendium of final ammo. To help us veer this back to victory for America, the planet, and the Union side in our recurring civil war...

...followed by malice toward none and charity for all and a return to fraternal joy in being a light unto the world.