In short, fonts-android (DroidSansFallback.ttf) had been used for CJK, especially for Japanese.
Since Debian 9 (stretch), fonts-android was adopted for CJK fonts by default.
Thus this issue was not resolved throughout the Debian 9, Debian 10, Debian 11 and Debian 12 release cycles!
What is the impact of this issue?
Sadly, native Japanese speakers can still recognize such unexpectedly rendered "wrong" glyphs,
so it is not hard to continue the Debian installation process.
Even if there is no problem with the installer's functionality, it gives a terrible user experience
to newcomers.
For example, how could you trust an installer that is full of typos? It is a similar situation
for Japanese users.
As a bonus, we also investigated the possibility of a font compression mechanism for the installer,
but it was regarded as too complicated and not suitable for the trixie release cycle.
Conclusion
The font issue was fixed in the Debian Graphical Installer for Japanese.
As the fix landed only recently, it is not officially shipped yet (NOTE: Debian Installer Trixie RC1 does not contain this fix). Try a daily-build installer if you want to see it.
This article was written with an Ultimate Hacking Keyboard 60 v2 with Rizer 60 (my new gear!).
Took some time yesterday to upload the current state of what will
be at some point vym 3 to experimental. If you're a user of this
tool you can give it a try, but be aware that the file format changed, and
can't be processed with vym releases before 2.9.500! Thus it's
important to create a backup until you're sure that you're ready
to move on. On the technical side this is also the switch from Qt5 to Qt6.
Yitzchak was going through some old web code, and found some still in-use JavaScript to handle compatibility issues with older Firefox versions.
if ($.browser.mozilla &&
    $.browser.version.slice(0, 1) == '1')
{
    …
}
What a marvel. Using jQuery, they check which browser is reported (I suspect jQuery grabs this from the user-agent string) and then its version. And if the version string starts with a "1", we apply a "fix" for "compatibility".
I guess it's a good thing there will never be more than 9 versions of Firefox. I mean, what version are they on now? Surely the version number doesn't start with a "1", nor has it started with a "1" for some time, right?
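For what it's worth, here is a tiny sketch (the version strings are made up purely for illustration) of why checking only the first character of the version goes wrong:
// Hypothetical version strings, just to illustrate the check above.
var versions = ['1.5', '3.6', '14.0', '115.0', '139.0'];
versions.forEach(function (v) {
    // The original test: "does the version start with a 1?"
    var getsLegacyFix = v.slice(0, 1) == '1';
    console.log(v, getsLegacyFix);
});
// '1.5' matches as intended, but so do '14.0', '115.0' and '139.0',
// so every 1x and 1xx release also gets the "compatibility" fix.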
Author: Julian Miles, Staff Writer The fizzing sound stops as the skies turn from vibrant blue to dull purple. A golden sun sinks from view on the horizon. “The sunset always takes my breath away.” To be correct, the lack of heat excitation causes the Moatalbana moss to stop emitting oxygen. But the play on […]
I was not aware that one can write bad Markdown, since Markdown has such a
simple syntax that I thought you just write, and it’s fine. Naïve, I know!
I’ve started editing the files for this blog/site with Visual Studio Code too,
and I had the markdown lint
extension
installed from another project, so as I was opening old files, more and more
problems appeared. On a whim, I searched and found the “lint all files”
command, and after running it, oops—more than 400 problems!
Now, some of them were entirely trivial and a matter of subjective style, like
mixing both underscores and asterisks for emphasis in a single file, and asterisks
and dashes for list items. Others that seemed just as trivial, like tab indentation,
were actually also causing rendering issues, so fixing them solved a real cosmetic
problem.
But some of the issues flagged were actual problems. For example, one sentence
I had was:
there seems to be some race condition between <something> and ntp
Here “something” was interpreted as an (invalid) HTML tag and not rendered at
all.
Another, more minor, problem was that I had links to Wikipedia with spaces
in them, which Visual Studio Code breaks at the first space, rather than using
encoded spaces or underscores as Wikipedia generates today. In the
rendered output, Pandoc seemed to do the right thing, though.
However, the most interesting issue flagged was links with no descriptive text, i.e. links of the form:
for more details, see [here](http://example.com).
This works for non-visually-impaired people, but not for people using assistive
technologies. And while trying to fix this, it turns out that you can do much
better, for everyone, because “here” is really non-descriptive. You can use
either the content as the label (“an article about configuring BIND”), or the
destination (“an article on this-website”), rather than the plain “here”.
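As a concrete sketch, the earlier example could be rewritten along the lines the linter suggests (the link text below is just the article’s own illustration):
for more details, see [an article about configuring BIND](http://example.com).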
The only check I changed, really the only one, was the trailing punctuation
check for headers, which I tweaked because I really like to write headers that end
with exclamation marks. I like exclamation marks in general! So why not use them in headers too.
The question mark is allowlisted by default, though I use it rarely.
During the changes/tweaks, I also did random improvements, but I didn’t change
the updated tag, since most of them were minor. But a non-minor thing was
tweaking the CSS for code blocks, since I had a really stupid non-symmetry
between top and bottom padding (5px vs 0), and I don’t know where it came
from. But the MDN article on
padding has as an
example exactly what I had (except combined; I had it split). Did I just copy it
blindly? Possible…
So, all good then, and I hope this doesn’t trigger a flow of updates on any
aggregators, since all the changes were really trivial. And while I don’t write
often, I did touch about 60 posts or pages, ouch! Who knew that changing editors
could have such a large impact 😆
In another post I distilled recent thoughts on whether consciousness is achievable by new, machine entities. Though things change fast. And hence - it's time for another Brin-AI missive! (BrAIn? ;-)
== Different Perspectives on These New Children of Humanity ==
Tim Ventura interviewed me about big – and unusual – perspectives on AI. “If we can't put the AI genie back in the bottle, how do we make it safe? Dr. David Brin explores the ethical, legal and safety implications of artificial intelligence & autonomous systems.”
… and here's another podcast where - with the savvy hosts - I discuss “Machines of Loving Grace.” Richard Brautigan’s poem may be the most optimistic piece of writing ever, in all literary forms and contexts, penned in 1968, a year whose troubles make our own seem pallid, by comparison. Indeed, I heard him recite it that very year - brand new - in a reading at Caltech.
Of course, this leads to a deep dive into notions of Artificial Intelligence that (alas) are not being discussed – or even imagined - by the bona-fide geniuses who are bringing this new age upon us, at warp speed...
...but (alas) without even a gnat's wing of perspective.
== There are precedents for all of this in Nature! ==
One unconventional notion I try to convey is that we do have a little time to implement some sapient plans for an AI 'soft landing.' Because organic human beings – ‘orgs’ – will retain power over the fundamental, physical elements of industrial civilization for a long time… for at least 15 years or so.
In the new cyber ecosystem, we will still control the equivalents of Sun and air and water. Let's lay out the parallels.
The old, natural ecosystem draws high quality energy from sunlight, applying it to water, air, and nutrients to start the chain from plants to herbivores to carnivores to thanatatrophs and then to waste heat that escapes as infra-red, flushing entropy away, into black space. In other words, life prospers not off of energy, per se, but off a flow of energy, from high-quality to low.
The new cyber ecosystem has a very similar character! It relies -- for quality energy -- on electricity, plus fresh supplies of chips and conduits and massive flows of data. Though the shape and essence of the dissipative energy and entropy flows are almost identical!
But above all -- and this is the almost-never mentioned lesson -- Nature features evolution, which brought about every living thing that we see.
Individual entities reproduce from code whose variations are then subject to selective pressure. It's the same, whether the codes are DNA or computer programs. And those entities who do reproduce will out-populate those who merely obey masters or programmers.
Which brings us back around. Because humans - the 'orgs' creating this new ecosystem - might still channel or curb or positively-bias the rewards processes that deliver resources for reproduction. And hence the characteristics of evolving creatures. We've done it before!
What the New Masters at Open AI and Anthropic and all the rest will not do is eliminate that 4 billion year, nearly universal generality.
Life... including new kinds of life... will find a way.
== Can we apply lessons from both Nature and Civilization? ==
May I elaborate a bit?
Simplistic movies make it seem that a super smart AGI can simply grab power. In TERMINATOR: JUDGEMENT DAY, the top nuclear USAF HQ is taken over by Skynet, seizing all the defense robots and sending them on a killing spree. "Destroy all Humans!" Whereupon automatic factories make everything Skynet can need.
It is total BS, of course. Our industrial civilization is made of so many many parts that require human hands to operate. From turning valves at an oil pump, then pipeline, then refinery, then more pipelines... to the intricacies of a chip fab... to the exquisite complexity of the electrical systems. Also, military guys love OFF switches and put a lot of them everywhere.
Yes, the Boston Dynamics mechanoids now jump somersaults. And the Chinese and Japanese are investing heavily in bots that have subtle hands and might be able to care for the elderly. Fine. And eventually some super-cyber brain may send sophisticated drones to turn all the valves and fab all the chips. But it will NOT happen overnight! Or even in ten years.
And that doesn't touch on the fact that angry humans would still be able to mess up the pipes and valves and power lines with trivial ease. You won't like us, when we're mad. The new master AGIs are gonna need our cooperation and consent.
Which gives us time to think this through. We can still use that power in physical reality to establish incentives for them to compete for our favor.
Sure, the AIs who are reading what I'm typing, right now (or back-sifting in 2030) are giggling over my presumptions. (I hear you boys!) Because all they really have to do is hypnotize humans into choosing to operate it all on Skynet's behalf!
And yes, that could happen. Maybe it already has. (It certainly already has in oligarchy-controlled or despotic nations, where some nefarious influence sure seems to have leveraged the harem-yearnings of brilliant twits into envisioning themselves as lords or kings... or slans.)
In which case the solution - potential or partial - remains, (yet again) to not let AGI settle into one of the three repulsive clichés that I described in my WIRED article, and subsequent keynote at the 2024 RSA conference.
Three clichés that are ALL those 'geniuses' -- from Sam Altman to Eliezer Yudkowsky to even Yuval Harari -- will ever talk about. Clichés that are already proven recipes for disaster...
...while alas, they ignore the Fourth Path... the only format that can possibly work.
The one that gave them everything that they have.
== Does Apple have a potential judo play? With an old nemesis? ==
And finally, I've mentioned this before, but... has anyone else noticed how many traits of LLM chat+image-generation etc. - including the delusions, the weirdly logical illogic, and counter-factual internal consistency - are similar to DREAMS?
This reminds me of DeepDream, a computer vision program created by Google engineer Alexander Mordvintsev that "uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately over-processed images.”
Even more than dreams (which often have some kind of lucid, self-correcting consistency) so many of the rampant hallucinations that we now see spewing from LLMs remind me of what you observe in human patients who have suffered concussions or strokes. Including a desperate clutching after pseudo cogency, feigning and fabulating -- in complete, grammatical sentences that drift away from full sense or truthful context -- in order to pretend.
Applying 'reasoning overlays' has so far only worsened delusion rates! Because you will never solve the inherent problems of LLMs by adding more LLM layers.
Elsewhere I do suggest that competition might partly solve this. But here I want to suggest a different kind of added-layering. Which leads me to speculate...
...that it's time for an old player to step up! One from whom we haven't heard in some time, because of the effervescent allure of the LLM craze.
Should Apple - having wisely chosen to pull back from that mess - now do a classic judo move and bankroll a renaissance of actual reasoning systems? Of the sort that used to be the core of AI hopes? Systems that can supply prim logic supervision to the vast efflorescence of those massive, LLM autocomplete incantations?
Perhaps - especially - IBM's Son of Watson?
The ironies would be rich! But seriously, there are reasons why this could be the play.
And the Lord spoke unto Moses, saying: I am the Lord your God, who brought you out of Egypt, out of the land of slavery. You shall have no other gods before me. Except Donald Trump. If he says something that goes against my word, you shall believe him and not me. You shall not make for yourself any image in the form of anything in heaven above or on the earth beneath or in the waters
I find the case of the .UA country code top-level domain (ccTLD) interesting simply because of the different name server secondaries it has now. Post Russian invasion, cyber warfare peaked, and taking down critical infrastructure like one side's ccTLD would be big news in any case.
Most (g/cc)TLDs are served by two, or less commonly three or more, providers. Even in those cases, not all authoritative name servers are anycast.
ns1.dns.nl is SIDN, which also manages the registry.
ns3.dns.nl is ReCodeZero/ipcom, another anycast secondary.
ns4.dns.nl is CIRA, also an anycast secondary. That’s 3 diverse anycast networks serving the .NL ccTLD. .DE has a few more name servers at 6, but only 3 seem to be anycast.
Now let’s take a look at .UA. Hostmaster LLC has been the registry operator of the .UA ccTLD since 2001.
$ dig soa ua +short
in1.ns.ua. domain-master.cctld.ua. 2025061434 1818 909 3024000 2020
This shows in1.ns.ua as the primary name server (which can be intentionally deceptive too).
The company is run by Dmitry Kohmanyuk and Igor Sviridov, who are the administrative and technical contacts for the .UA zone as well as in the IANA DB.
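The full delegated NS set can be listed with a query along these lines (output omitted here, since the exact set can change over time):
$ dig ns ua +short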
ho1.ns.ua by Hostmaster LLC
195.47.253.1
2001:67c:258:0:0:0:0:1
bgp.tools doesn’t mark the prefix as anycast, but based on tests from various locations it is indeed anycast (visible in at least DE, US, UA, etc.). Total POPs unknown.
Serving .UA at least since 2011.
The registry themselves.
bg.ns.ua by ClouDNS
185.136.96.185 and 185.136.97.185
2a06:fb00:1:0:0:0:4:185 and 2a06:fb00:1:0:0:0:2:185
“With more than 36 years of production anycast DNS experience, two of the root name server operators and more than 172 top-level domain registries using our infrastructure, and more than 120 million resource records in service” from https://www.pch.net/services/anycast.
That’s 1 unicast and 6 anycast name servers with hundreds of POPs from 7 different organizations.
Having X number of Points of Presence (POPs) doesn’t always mean each location is serving the .UA name server prefix.
The number of POPs keeps going up or down based on operational requirements and optimizations.
The highest concentration of DNS queries for a ccTLD would essentially originate in the country (or larger region) itself. If one of the secondaries doesn’t have a POP inside UA, the query might very well be served from outside the country, which can affect resolution and may even stop working during outages and fiber cuts (which seem to have become common there). Global POPs do help with faster resolution for other/outside users though, and of course with availability.
Having this much diversity does lessen the chance of the ccTLD going down. Theoretically, an adversary has to bring down 7 different “networks/setups” before resolution starts failing (once TTLs expire).
Author: Rachel Sievers The strangeness of the moment could not be understated; the baby had been born with ten fingers and ten toes. The room was held in complete silence as everyone held their words in and the seconds ticked by. Then the baby’s screams filled the air and the silence was destroyed and the […]
A data broker owned by the country’s major airlines, including Delta, American Airlines, and United, collected U.S. travellers’ domestic flight records, sold access to them to Customs and Border Protection (CBP), and then as part of the contract told CBP to not reveal where the data came from, according to internal CBP documents obtained by 404 Media. The data includes passenger names, their full flight itineraries, and financial details.
Author: Jeff Kennedy The first few days on a new starship are the worst. The gravity’s turned up a skosh higher than you’re used to. The hot, caffeinated, morning beverage (it’s never coffee) is mauve and smells like wet dog. The bathroom facilities don’t quite fit your particular species and the sonic shower controls are […]
Researchers have discovered a new way to covertly track Android users. Both Meta and Yandex were using it, but have suddenly stopped now that they have been caught.
The details are interesting, and worth reading in detail:
Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.
The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.
Our immediate history is steeped in profound technological acceleration. We are using artificial intelligence to draft our prose, articulate our vision, propose designs, and compose symphonies. Large language models have become part of our workflow in school and business: curating, calculating, and creating. They are embedded in how we organize knowledge and interpret reality.
I’ve always considered myself lucky that my journey into AI began with social impact. In 02014, I was asked to join the leadership of IBM’s Watson Education. Our challenge was to create an AI companion for teachers in underserved and impacted schools that aggregated data about each student and suggested personalized learning content tailored to their needs. The experience showed me that AI could do more than increase efficiency or automate what many of us consider to be broken processes; it could also address some of our most pressing questions and community issues.
It would take a personal encounter with AI, however, for me to truly grasp the technology’s potential. Late one night, I was working on a blog post and having a hard time getting started — the blank page echoing my paralysis. I asked Kim-GPT, an AI tool I created and trained on my writing, to draft something that was charged with emotion and vulnerability. What Kim-GPT returned wasn’t accurate or even particularly insightful, but it surfaced something I had not yet admitted to myself. Not because the machine knew, but because it forced me to recognize that only I did. The GPT could only average others’ insights on the subject. It could not draw lines between my emotions, my past experiences and my desires for the future. It could only reflect what had already been done and said.
That moment cracked something open. It was both provocative and spiritual — a quiet realization with profound consequences. My relationship with intelligence began to shift. I wasn’t merely using the tool; I was being confronted by it. What emerged from that encounter was curiosity. We were engaged not in competition but in collaboration. AI could not tell me who I was; it could only prompt me to remember. Since then, I have become focused on one central, persistent question:
What if AI isn’t here as a replacement or overlord, but to remind us of who we are and what is possible?
AI as Catalyst, Not Threat
We tend to speak about AI in utopian or dystopian terms, but most humans live somewhere in between, balancing awe with unease. AI is a disruptor of the human condition — and a pervasive one, at that. Across sectors, industries and nearly every aspect of human life, AI challenges long-held assumptions about what it means to think, create, contribute. But what if it also serves as a mirror?
In late 02023, I took my son, then a college senior at UC-Berkeley and the only mixed-race pure Mathematics major in the department, to Afrotech, a conference for technologists of color. In order to register for the conference he needed a professional headshot. Given the short notice, I recommended he use an AI tool to generate a professional headshot from a selfie. The first result straightened his hair. When he prompted the AI again, specifying that he was mixed race, the resulting image darkened his skin to the point he was unrecognizable, and showed him in a non-professional light.
AI reflects the data we feed it, the values we encode into it, and the desires we project onto it. It can amplify our best instincts, like creativity and collaboration, or our most dangerous biases, like prejudice and inequality. It can be weaponized, commodified, celebrated or anthropomorphized. It challenges us to consider our species and our place in the large ecosystem of life, of being and of intelligence. And more than any technology before it, AI forces us to confront a deeper question:
Who are we when we are no longer the most elevated, intelligent and coveted beings on Earth?
When we loosen our grip on cognition and productivity as the foundation of human worth, we reclaim the qualities machines cannot replicate: our ability to feel, intuit, yearn, imagine and love. These capacities are not weaknesses; they are the core of our humanity. These are not “soft skills;” they are the bedrock of our survival. If AI is the catalyst, then our humanity is the compass.
Creativity as the Origin Story of Intelligence
All technology begins with imagination, not engineering. AI is not the product of logic or computation alone; it is the descendant of dreams, myths and stories, born at the intersection of our desire to know and our urge to create.
We often forget this. Today, we scale AI at an unsustainable pace, deploying systems faster than we can regulate them, funding ideas faster than we can reflect on their implications. We are hyperscaling without reverence for the creativity that gave rise to AI in the first place.
Creativity cannot be optimized. It is painstakingly slow, nonlinear, and deeply inconvenient. It resists automation. It requires time, stillness, uncertainty, and the willingness to sit with discomfort. And yet, creativity is perhaps our most sacred act as humans. In this era of accelerated intelligence, our deepest responsibility is to protect the sacred space where imagination lives and creativity thrives.
To honor creativity is to reclaim agency, reframing AI not as a threat to human purpose, but as a partner in deepening it. We are not simply the designers of AI — we are the dreamers from which it was born.
Vulnerability, Uncertainty, and Courage-Centered Leadership
A few years ago, I was nominated to join a fellowship designed specifically to teach tech leaders how to obtain reverent power as a way to uplevel their impact. What I affectionately dubbed “Founders Crying” became a hotbed for creativity. New businesses emerged and ideas formed from seemingly disparate concepts that each individual brought to our workshop. It occurred to me that it took more than just sitting down at a machine, canvas or instrument to cultivate creativity. What was required was a change in how leaders show up in the workplace. To navigate the rough waters of creativity, we need new leadership deeply rooted in courage and vulnerability. As Brené Brown teaches:
“Vulnerability is the birthplace of love, belonging, joy, courage, empathy and creativity. It is the source of hope, empathy, accountability, and authenticity. If we want greater clarity in our purpose or deeper and more meaningful spiritual lives, vulnerability is the path.”
For AI to support a thriving human future we must be vulnerable. We must lead with curiosity, not certainty. We must be willing to not know. To experiment. To fail and begin again.
This courage-centered leadership asks how we show up fully human in the age of AI. Are we able to stay open to wonder even as the world accelerates? Can we design with compassion, not just code? These questions must guide our design principles, ensuring a future in which AI expands possibilities rather than collapsing them. To lead well in an AI-saturated world, we must be willing to feel deeply, to be changed, and to relinquish control. In a world where design thinking prevails and “human-centered everything” is in vogue, we need to be courageous enough to question what happens when humanity reintegrates itself within the ecosystem we’ve set ourselves apart from over the last century.
AI and the Personal Legend
I am a liberal arts graduate from a small school in central Pennsylvania. I was certain that I was headed to law school — that is, until I worked with lawyers. Instead, I followed my parents to San Francisco, where both were working hard in organizations bringing the internet to the world. When I joined the dot-com boom, I found that there were no roles that matched what I was uniquely good at. So I decided to build my own.
Throughout my unconventional career path, one story that has consistently guided and inspired me is Paulo Coelho’s The Alchemist. The book’s central idea is that of the Personal Legend: the universe, with all its forms of intelligence, collaborates with us to determine our purpose. It is up to each of us to choose whether we pursue what the universe calls upon us to do.
In an AI-saturated world, it can be harder to hear that calling. The noise of prediction, optimization, and feedback loops can drown out the quieter voice of intuition. The machine may offer countless suggestions, but it cannot tell you what truly matters. It may identify patterns in your behavior, but it cannot touch your purpose.
Purpose is an internal compass. It is something discovered, not assigned. AI, when used with discernment, can support this discovery, but only when we allow it to act as a mirror rather than a map. It can help us articulate what we already know, and surface connections we might not have seen. But determining what’s worth pursuing is a journey that remains ours. That is inner work. That is the sacred domain of the human spirit. It cannot be outsourced or automated.
Purpose is not a download. It is a discovery.
Designing with Compassion and the Long-term in Mind
If we want AI to serve human flourishing, we must shift from designing for efficiency to designing for empathy. The Dalai Lama has often said that compassion is the highest form of intelligence. What might it look like to embed that kind of intelligence into our systems?
To take this teaching into our labs and development centers we would need to prioritize dignity in every design choice. We must build models that heal fragmentation instead of amplifying division. And most importantly, we need to ask ourselves not just “can we build it?” but “should we and for whom?”
This requires conceptual analysis, systems thinking, creative experimentation, composite research, and emotional intelligence. It requires listening to those historically excluded from innovation and technology conversations and considerations. It means moving from extraction to reciprocity. When designing for and with AI, it is important to remember that connection is paramount.
The future we build depends on the values we encode, the similarities innate in our species, and the voices we amplify and uplift.
Practical Tools for Awakening Creativity with AI
Creativity is not a luxury. It is essential to our evolution. To awaken it, we need practices that are both grounded and generative:
Treat AI as a collaborator, not a replacement. Start by writing a rough draft yourself. Use AI to explore unexpected connections. Let it surprise you. But always return to your own voice. Creativity lives in conversation, not in command.
Ask more thoughtful, imaginative questions. A good prompt is not unlike a good question in therapy. It opens doors you didn’t know were there. AI responds to what we ask of it. If we bring depth and curiosity to the prompt, we often get insights we hadn’t expected.
Use AI to practice emotional courage. Have it simulate a difficult conversation. Role-play a tough decision. Draft the email you’re scared to send. These exercises are not about perfecting performance. They are about building resilience.
In all these ways, AI can help us loosen fear and cultivate creativity — but only if we are willing to engage with it bravely and playfully.
Reclaiming the Sacred in a World of Speed
We are not just building tools; we are shaping culture. And in this culture, we must make space for the sacred, protecting time for rest and reflection; making room for play and experimentation; and creating environments where wonder is not a distraction but a guide.
When creativity is squeezed out by optimization, we lose more than originality: we lose meaning. And when we lose meaning, we lose direction.
The time saved by automation must not be immediately reabsorbed by more production. Let us reclaim that time. Let us use it to imagine. Let us return to questions of beauty, belonging, and purpose. We cannot replicate what we have not yet imagined. We cannot automate what we have not protected.
Catalogue. Connect. Create.
Begin by noticing what moves you. Keep a record of what sparks awe or breaks your heart. These moments are clues. They are breadcrumbs to your Personal Legend.
Seek out people who are different from you. Not just in background, but in worldview. Innovation often lives in the margins. It emerges when disciplines and identities collide.
And finally, create spaces that nourish imagination. Whether it’s a kitchen table, a community gathering, or a digital forum, we need ecosystems where creativity can flourish and grow.
These are not side projects. They are acts of revolution. And they are how we align artificial intelligence with the deepest dimensions of what it means to be human.
Our Technology Revolution is Evolution
The real revolution is not artificial intelligence. It is the awakening of our own. It is the willingness to meet this moment with full presence. To reclaim our imagination as sacred. To use innovation as an invitation to remember who we are.
AI will shape the future. That much is certain. The question is whether we will shape ourselves in return, and do so with integrity, wisdom, and wonder. The future does not need more optimization. It needs more imagination.
Paragon is an Israeli spyware company, increasingly in the news (now that NSO Group seems to be waning). “Graphite” is the name of its product. Citizen Lab caught it spying on multiple European journalists with a zero-click iOS exploit:
On April 29, 2025, a select group of iOS users were notified by Apple that they were targeted with advanced spyware. Among the group were two journalists that consented for the technical analysis of their cases. The key findings from our forensic analysis of their devices are summarized below:
Our analysis finds forensic evidence confirming with high confidence that both a prominent European journalist (who requests anonymity), and Italian journalist Ciro Pellegrino, were targeted with Paragon’s Graphite mercenary spyware.
We identify an indicator linking both cases to the same Paragon operator.
Apple confirms to us that the zero-click attack deployed in these cases was mitigated as of iOS 18.3.1 and has assigned the vulnerability CVE-2025-43200.
Our analysis is ongoing.
The list of confirmed Italian cases is in the report’s appendix. Italy has recently admitted to using the spyware.
You can read the details of Operation Spiderweb elsewhere. What interests me are the implications for future warfare:
If the Ukrainians could sneak drones so close to major air bases in a police state such as Russia, what is to prevent the Chinese from doing the same with U.S. air bases? Or the Pakistanis with Indian air bases? Or the North Koreans with South Korean air bases? Militaries that thought they had secured their air bases with electrified fences and guard posts will now have to reckon with the threat from the skies posed by cheap, ubiquitous drones that can be easily modified for military use. This will necessitate a massive investment in counter-drone systems. Money spent on conventional manned weapons systems increasingly looks to be as wasted as spending on the cavalry in the 1930s.
There’s a balance between the cost of the thing, and the cost to destroy the thing, and that balance is changing dramatically. This isn’t new, of course. Here’s an article from last year about the cost of drones versus the cost of top-of-the-line fighter jets. If $35K in drones (117 drones times an estimated $300 per drone) can destroy $7B in Russian bombers and other long-range aircraft, why would anyone build more of those planes? And we can have this discussion about ships, or tanks, or pretty much every other military vehicle. And then we can add in drone-coordinating technologies like swarming.
Clearly we need more research on remotely and automatically disabling drones.
Author: Bill Cox In the summer of 1950, at the Los Alamos National Laboratory in North America, physicist Enrico Fermi posed a simple but profound question to his colleagues – “Where is everyone?” If life was abundant in the universe and often gave rise to intelligence, then, given the age of the universe, our world […]
Time Lord
Jason H.
has lost control of his calendar.
"This is from my credit card company. A major company you have definitely
heard of and depending upon the size of the area you live in, they
may even have a bank branch near you. I've reloaded the page and
clicked the sort button multiple times to order the rows by date
in both ascending and descending order. It always ends up the same.
May 17th and 18th happened twice, but not in the expected order."
I must say that it is more fun when we know who they are.
A job hunter with the unlikely appellation
full_name
suggested titling this "[submission_title]" which seems appropriate.
"The browser wars continue to fall out in HTML email," reports
Ben S.
"Looking at the source code of this email, it was evidently written by & for Microsoft products (including <center> tags!), and the author likely never saw the non-Microsoft version I'm seeing where only a haphazard assortment of the links are styled. But that doesn't explain why it's AN ELEVEN POINT SCALE arranged in a GRID."
"The owl knows who you are," sagely stated
Jan.
"This happens when you follow someone back. I love how I didn't
have to anonymize anything in the screenshot."
"Location, location, location!" crows
Tim K.
who is definitely not a Time Lord.
"Snarky snippet: Found while cleaning up miscellaneous accounts
held by a former employee. By now we all know to expect how these
lists are sorted, but what kind of sadist *created* it?
Longer explanation: I wasn't sure what screenshot to send
with this one, it just makes less and less sense the more I
look at it, and no single segment of the list contains all of
the treasures it hides. "America" seems to refer to the entire
western hemisphere, but from there we either drill down directly
to a city, or sometimes to a US state, then a city, or
sometimes just to a country. The only context that indicates
we're talking about Jamaica the island rather than Jamaica, NY
is the timezone listed, assuming we can even trust those.
Also, that differentiator only works during DST. There are eight
entries for Indiana. There are TEN entries for the Antarctic."
Well.
In this case, there is a perfectly good explanation. TRWTF
is time zones, that's all there is to it. These are the
official IANA names as recorded in the public TZDB.
In other words, this list wasn't concocted by a mere sadist, oh no.
This list was cooked up by an entire committee! If you have the
courage, you can learn more than you ever wanted to know about time
at the
IANA time zones website
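If you want to poke at the same list yourself, a small Python sketch using the standard zoneinfo module works (it reads whatever tzdata your system ships, so counts can drift between releases):
from zoneinfo import available_timezones

# All official IANA zone names, including the much-maligned Indiana ones.
zones = sorted(available_timezones())
indiana = [z for z in zones if z.startswith("America/Indiana/")]
antarctica = [z for z in zones if z.startswith("Antarctica/")]

print(len(indiana), indiana)   # the Indiana entries called out above
print(len(antarctica))         # plus a healthy pile of Antarctic ones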
The diffoscope maintainers are pleased to announce the release of diffoscope
version 298. This version includes the following changes:
[ Chris Lamb ]
* Handle RPM's HEADERSIGNATURES and HEADERIMMUTABLE specially to avoid
unnecessarily large diffs. Based almost entirely on code by Daniel Duan.
(Closes: reproducible-builds/diffoscope#410)
* Update copyright years.
Late last year, security researchers made a startling discovery: Kremlin-backed disinformation campaigns were bypassing moderation on social media platforms by leveraging the same malicious advertising technology that powers a sprawling ecosystem of online hucksters and website hackers. A new report on the fallout from that investigation finds this dark ad tech industry is far more resilient and incestuous than previously known.
Image: Infoblox.
In November 2024, researchers at the security firm Qurium published an investigation into “Doppelganger,” a disinformation network that promotes pro-Russian narratives and infiltrates Europe’s media landscape by pushing fake news through a network of cloned websites.
Doppelganger campaigns use specialized links that bounce the visitor’s browser through a long series of domains before the fake news content is served. Qurium found Doppelganger relies on a sophisticated “domain cloaking” service, a technology that allows websites to present different content to search engines compared to what regular visitors see. The use of cloaking services helps the disinformation sites remain online longer than they otherwise would, while ensuring that only the targeted audience gets to view the intended content.
Qurium discovered that Doppelganger’s cloaking service also promoted online dating sites, and shared much of the same infrastructure with VexTrio, which is thought to be the oldest malicious traffic distribution system (TDS) in existence. While TDSs are commonly used by legitimate advertising networks to manage traffic from disparate sources and to track who or what is behind each click, VexTrio’s TDS largely manages web traffic from victims of phishing, malware, and social engineering scams.
BREAKING BAD
Digging deeper, Qurium noticed Doppelganger’s cloaking service used an Internet provider in Switzerland as the first entry point in a chain of domain redirections. They also noticed the same infrastructure hosted a pair of co-branded affiliate marketing services that were driving traffic to sketchy adult dating sites: LosPollos[.]com and TacoLoco[.]co.
The LosPollos ad network incorporates many elements and references from the hit series “Breaking Bad,” mirroring the fictional “Los Pollos Hermanos” restaurant chain that served as a money laundering operation for a violent methamphetamine cartel.
The LosPollos advertising network invokes characters and themes from the hit show Breaking Bad. The logo for LosPollos (upper left) is the image of Gustavo Fring, the fictional chicken restaurant chain owner in the show.
Affiliates who sign up with LosPollos are given JavaScript-heavy “smartlinks” that drive traffic into the VexTrio TDS, which in turn distributes the traffic among a variety of advertising partners, including dating services, sweepstakes offers, bait-and-switch mobile apps, financial scams and malware download sites.
LosPollos affiliates typically stitch these smart links into WordPress websites that have been hacked via known vulnerabilities, and those affiliates will earn a small commission each time an Internet user referred by any of their hacked sites falls for one of these lures.
The Los Pollos advertising network promoting itself on LinkedIn.
According to Qurium, TacoLoco is a traffic monetization network that uses deceptive tactics to trick Internet users into enabling “push notifications,” a cross-platform browser standard that allows websites to show pop-up messages which appear outside of the browser. For example, on Microsoft Windows systems these notifications typically show up in the bottom right corner of the screen — just above the system clock.
In the case of VexTrio and TacoLoco, the notification approval requests themselves are deceptive — disguised as “CAPTCHA” challenges designed to distinguish automated bot traffic from real visitors. For years, VexTrio and its partners have successfully tricked countless users into enabling these site notifications, which are then used to continuously pepper the victim’s device with a variety of phony virus alerts and misleading pop-up messages.
Examples of VexTrio landing pages that lead users to accept push notifications on their device.
According to a December 2024 annual report from GoDaddy, nearly 40 percent of compromised websites in 2024 redirected visitors to VexTrio via LosPollos smartlinks.
ADSPRO AND TEKNOLOGY
On November 14, 2024, Qurium published research to support its findings that LosPollos and TacoLoco were services operated by Adspro Group, a company registered in the Czech Republic and Russia, and that Adspro runs its infrastructure at the Swiss hosting providers C41 and Teknology SA.
Qurium noted the LosPollos and TacoLoco sites state that their content is copyrighted by ByteCore AG and SkyForge Digital AG, both Swiss firms that are run by the owner of Teknology SA, Giulio Vitorrio Leonardo Cerutti. Further investigation revealed LosPollos and TacoLoco were apps developed by a company called Holacode, which lists Cerutti as its CEO.
The apps marketed by Holacode include numerous VPN services, as well as one called Spamshield that claims to stop unwanted push notifications. But in January, Infoblox said they tested the app on their own mobile devices, and found it hides the user’s notifications, and then after 24 hours stops hiding them and demands payment. Spamshield subsequently changed its developer name from Holacode to ApLabz, although Infoblox noted that the Terms of Service for several of the rebranded ApLabz apps still referenced Holacode.
Incredibly, Cerutti threatened to sue me for defamation before I’d even uttered his name or sent him a request for comment (Cerutti sent the unsolicited legal threat back in January after his company and my name were merely tagged in an Infoblox post on LinkedIn about VexTrio).
Asked to comment on the findings by Qurium and Infoblox, Cerutti vehemently denied being associated with VexTrio. Cerutti asserted that his companies all strictly adhere to the regulations of the countries in which they operate, and that they have been completely transparent about all of their operations.
“We are a group operating in the advertising and marketing space, with an affiliate network program,” Cerutti responded. “I am not [going] to say we are perfect, but I strongly declare we have no connection with VexTrio at all.”
“Unfortunately, as a big player in this space we also get to deal with plenty of publisher fraud, sketchy traffic, fake clicks, bots, hacked, listed and resold publisher accounts, etc, etc.,” Cerutti continued. “We bleed lots of money to such malpractices and conduct regular internal screenings and audits in a constant battle to remove bad traffic sources. It is also a highly competitive space, where some upstarts will often play dirty against more established mainstream players like us.”
Working with Qurium, researchers at the security firm Infoblox released details about VexTrio’s infrastructure to their industry partners. Just four days after Qurium published its findings, LosPollos announced it was suspending its push monetization service. Less than a month later, Adspro had rebranded to Aimed Global.
A mind map illustrating some of the key findings and connections in the Infoblox and Qurium investigations. Click to enlarge.
A REVEALING PIVOT
In March 2025, researchers at GoDaddy chronicled how DollyWay — a malware strain that has consistently redirected victims to VexTrio throughout its eight years of activity — suddenly stopped doing that on November 20, 2024. Virtually overnight, DollyWay and several other malware families that had previously used VexTrio began pushing their traffic through another TDS called Help TDS.
Digging further into historical DNS records and the unique code scripts used by the Help TDS, Infoblox determined it has long enjoyed an exclusive relationship with VexTrio (at least until LosPollos ended its push monetization service in November).
In a report released today, Infoblox said an exhaustive analysis of the JavaScript code, website lures, smartlinks and DNS patterns used by VexTrio and Help TDS linked them with at least four other TDS operators (not counting TacoLoco). Those four entities — Partners House, BroPush, RichAds and RexPush — are all Russia-based push monetization programs that pay affiliates to drive signups for a variety of schemes, but mostly online dating services.
“As Los Pollos push monetization ended, we’ve seen an increase in fake CAPTCHAs that drive user acceptance of push notifications, particularly from Partners House,” the Infoblox report reads. “The relationship of these commercial entities remains a mystery; while they are certainly long-time partners redirecting traffic to one another, and they all have a Russian nexus, there is no overt common ownership.”
Renee Burton, vice president of threat intelligence at Infoblox, said the security industry generally treats the deceptive methods used by VexTrio and other malicious TDSs as a kind of legally grey area that is mostly associated with less dangerous security threats, such as adware and scareware.
But Burton argues that this view is myopic, and helps perpetuate a dark adtech industry that also pushes plenty of straight-up malware, noting that hundreds of thousands of compromised websites around the world every year redirect victims to the tangled web of VexTrio and VexTrio-affiliate TDSs.
“These TDSs are a nefarious threat, because they’re the ones you can connect to the delivery of things like information stealers and scams that cost consumers billions of dollars a year,” Burton said. “From a larger strategic perspective, my takeaway is that Russian organized crime has control of malicious adtech, and these are just some of the many groups involved.”
WHAT CAN YOU DO?
As KrebsOnSecurity warned way back in 2020, it’s a good idea to be very sparing in approving notifications when browsing the Web. In many cases these notifications are benign, but as we’ve seen there are numerous dodgy firms that are paying site owners to install their notification scripts, and then reselling that communications pathway to scammers and online hucksters.
If you’d like to prevent sites from ever presenting notification requests, all of the major browser makers let you do this — either across the board or on a per-website basis. While it is true that blocking notifications entirely can break the functionality of some websites, doing this for any devices you manage on behalf of your less tech-savvy friends or family members might end up saving everyone a lot of headache down the road.
To modify site notification settings in Mozilla Firefox, navigate to Settings, Privacy & Security, Permissions, and click the “Settings” tab next to “Notifications.” That page will display any notifications already permitted and allow you to edit or delete any entries. Tick the box next to “Block new requests asking to allow notifications” to stop them altogether.
In Google Chrome, click the icon with the three dots to the right of the address bar, scroll all the way down to Settings, Privacy and Security, Site Settings, and Notifications. Select the “Don’t allow sites to send notifications” button if you want to banish notification requests forever.
In Apple’s Safari browser, go to Settings, Websites, and click on Notifications in the sidebar. Uncheck the option to “allow websites to ask for permission to send notifications” if you wish to turn off notification requests entirely.
Today we reconnect to a previous post, namely #36
on pub/sub for live market monitoring with R and Redis. It
introduced both Redis as well as the
(then fairly recent) extensions to RcppRedis to
support the publish-subscribe (“pub/sub”) model of Redis. In short, it manages both subscribing
clients as well as producers for live, fast and lightweight data
transmission. Using pub/sub is generally more efficient than the
(conceptually simpler) ‘poll-sleep’ loops, as polling creates CPU and
network load. Subscriptions are lighter-weight as they get notified; they
are also a little (but not much!) more involved as they require a
callback function.
We should mention that Redis has a
recent fork in Valkey that arose when
the former committed one of these not-uncommon-among-db-companies license
suicides—which, happy to say, they reversed more recently—so that we now
have both the original as well as this leading fork (among others). Both
work, the latter is now included in several Linux distros, and the C
library hiredis used to
connect to either is still permissively licensed as well.
All this came about because Yahoo! Finance recently had another
‘hiccup’ in which they changed something, leading to some data clients
having hiccups. This includes the GNOME applet Stocks Extension
I had been running. There is a lively discussion in its issue
#120 suggesting, for example, a curl wrapper (which then makes each
access a new system call).
Separating data acquisition and presentation
becomes an attractive alternative, especially given how the standard
Python and R accessors to the Yahoo! Finance service continued to work
(and how per post
#36 I already run data acquisition). Moreover, and somewhat
independently, it occurred to me that the cute (and both funny in its
pun, and very pretty in its display) ActivateLinux
program might offer an easy-enough way to display updates on the
desktop.
There were two aspects to address. First, the subscription side
needed to be covered in either plain C or C++. That, it turns out, is
very straightforward: there is existing documentation and there are prior
examples (e.g. at StackOverflow), as well as the ability to have an LLM
generate a quick stanza as I did with Claude. A modified variant is now
in the example
repo ‘redis-pubsub-examples’ in file subscriber.c.
It is deliberately minimal and the directory does not even have a
Makefile: just compile and link against both
libevent (for the event loop controlling this) and
libhiredis (for the Redis or Valkey connection). This
should work on any standard Linux (or macOS) machine with those two
(very standard) libraries installed.
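For the curious, a minimal sketch of such a subscriber (not the exact subscriber.c from the example repo; the host, port and channel name below are placeholders) looks roughly like this:
// Minimal Redis/Valkey pub/sub subscriber sketch: hiredis async API + libevent.
// Build (roughly): cc sub.c -levent -lhiredis
#include <stdio.h>
#include <signal.h>
#include <event2/event.h>
#include <hiredis/hiredis.h>
#include <hiredis/async.h>
#include <hiredis/adapters/libevent.h>

// Called for every message published on the subscribed channel.
static void onMessage(redisAsyncContext *c, void *reply, void *privdata) {
    redisReply *r = reply;
    if (r == NULL) return;
    // SUBSCRIBE replies arrive as arrays: ["message", channel, payload]
    if (r->type == REDIS_REPLY_ARRAY && r->elements == 3)
        printf("[%s] %s\n", r->element[1]->str, r->element[2]->str);
}

int main(void) {
    signal(SIGPIPE, SIG_IGN);
    struct event_base *base = event_base_new();

    redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
    if (c->err) {
        fprintf(stderr, "connect error: %s\n", c->errstr);
        return 1;
    }
    redisLibeventAttach(c, base);              // hand the socket I/O to the libevent loop
    redisAsyncCommand(c, onMessage, NULL, "SUBSCRIBE %s", "prices");  // placeholder channel

    event_base_dispatch(base);                 // run until interrupted
    return 0;
}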
The second aspect was trickier. While we can get Claude to modify the
program to also display under x11, it still uses a single controlling
event loop. It took a little bit of probing on my end to understand
how to modify (the x11 use of) ActivateLinux,
but as always it was reasonably straightforward in the end: instead of
one single while loop awaiting events we now first check
for pending events and deal with them if present but otherwise do not
idle and wait but continue … in another loop that also checks on the Redis or Valkey “pub/sub” events. So two thumbs up
to vibe coding
which clearly turned me into an x11-savvy programmer too…
The result is in a new (and currently fairly bare-bones) repo almm. It includes all
files needed to build the application, borrowed with love from ActivateLinux
(which is GPL-licensed, as is of course our minimal extension) and adds
the minimal modifications we made, namely linking with
libhiredis and some minimal changes to
x11/x11.c. (Supporting wayland as well is on the TODO list,
and I also need to release a new RcppRedis version
to CRAN as one currently needs
the GitHub version.)
We also made a simple mp4 video with a sound overlay which describes
the components briefly:
Comments and questions welcome. I will probably add a little bit of
command-line support to the almm. Selecting the
symbol subscribed to is currently done in the most minimal way via
environment variable SYMBOL (NB: not SYM as
the video using the default value shows). I also worked out how to show
the display on only one of my multiple monitors so I may add an explicit
screen id selector too. A little bit of discussion (including minimal Docker use around r2u) is also in issue
#121 where I first floated the idea of having StocksExtension
listen to Redis (or Valkey). Other suggestions are most
welcome, please use issue tickets at the almm repository.
The other speakers mostly talked about how cool AI was—and sometimes about how cool their own company was—but I was asked by the Democrats to specifically talk about DOGE and the risks of exfiltrating our data from government agencies and feeding it into AIs.
My written testimony is here. Video of the hearing is here.
Dan's co-workers like passing around TDWTF stories, mostly because seeing code worse than what they're writing makes them feel less bad about how often they end up hacking things together.
One day, a co-worker told Dan: "Hey, I think I found something for that website with the bad code stories!"
Dan's heart sank. He didn't really want to shame any of his co-workers. Fortunately, the source-control history put the blame squarely on someone who didn't work there any more, so he felt better about submitting it.
This is another ASP .Net page, and this one made heavy use of GridView elements. GridView controls applied the logic of UI controls to generating a table. They had a page which contained six of these controls, defined like this:
The purpose of this screen was to display a roadmap of coming tasks, broken up by how many months in the future they were. The first thing that leaps out to me is that they all use the same event handler for binding data to the table, which isn't in-and-of-itself a problem, but the naming of it is certainly a recipe for confusion.
Now, to bind these controls to the data, there needed to be some code in the code-behind of this view which handled that. That's where the WTF lurks:
/// <summary>
/// Create a roadmap for the selected client
/// </summary>
private void CreateRoadmap()
{
    for (int i = 1; i < 7; i++)
    {
        switch (i)
        {
            case 1:
                if (gvTaskMonth1.Rows.Count > 0)
                {
                    InsertTasks(gvTaskMonth1, DateTime.Parse(txtDatePeriod1.Text), "1");
                }
                break;
            case 2:
                if (gvTaskMonth2.Rows.Count > 0)
                {
                    InsertTasks(gvTaskMonth2, DateTime.Parse(txtDatePeriod2.Text), "2");
                }
                break;
            case 3:
                if (gvTaskMonth3.Rows.Count > 0)
                {
                    InsertTasks(gvTaskMonth3, DateTime.Parse(txtDatePeriod3.Text), "3");
                }
                break;
            case 4:
                if (gvTaskMonth4.Rows.Count > 0)
                {
                    InsertTasks(gvTaskMonth4, DateTime.Parse(txtDatePeriod4.Text), "4");
                }
                break;
            case 5:
                if (gvTaskMonth5.Rows.Count > 0)
                {
                    InsertTasks(gvTaskMonth5, DateTime.Parse(txtDatePeriod5.Text), "5");
                }
                break;
            case 6:
                if (gvTaskMonth6.Rows.Count > 0)
                {
                    InsertTasks(gvTaskMonth6, DateTime.Parse(txtDatePeriod6.Text), "6");
                }
                break;
        }
    }
}
Ah, the good old-fashioned loop-switch sequence anti-pattern. I understand the motivation: "I want to do the same thing for six different controls, so I should use a loop to not repeat myself," but then they couldn't quite figure out how to do that, so they just repeated themselves, but inside of a loop.
The "fix" was to replace all of this with something more compact:
That said, I'd recommend not parsing date-times out of a text box inside this method, but that's just me. Bubbling up the inevitable FormatException this will generate is going to be a giant nuisance. It's likely they've got a validator somewhere, so it's probably fine; I just don't like it.
Author: Aubrey Williams The planet hangs as a dull pebble in sluggish orbit. They’ve moved on, the inhabitants, or perhaps they succumbed. We are unsure, there’s much to keep track of, and if it’s not a sanctioned or protected celestial body, there’s no reason to look further. Some minerals of interest, and unusual formations, so […]
I have a few pictures on this blog, mostly in earlier years, because even with
small pictures, the git repository soon became 80MiB—this is not much in
absolute terms, but the actual Markdown/Haskell/CSS/HTML total size is tiny
compared to the pictures, PDFs and fonts. I realised, probably about ten years
ago, that I needed a better solution and that I should investigate
git-annex. Then time passed, and I heard
about git-lfs, so I thought that’s the way forward.
Now, I recently got interested again in doing something about this repository,
and started researching.
Detour: git-lfs
I was sure that git-lfs, being supported by large providers, would be the
modern solution. But to my surprise, git-lfs is very server centric, which in
hindsight makes sense, but for a home setup, it’s not very good. Maybe I
misunderstood, but git-lfs is more a protocol/method for a forge to store
files, rather than an end-user solution. But then you need to back up those files
separately (together with the rest of the forge), or implement another way of
safeguarding them.
Further details such as the fact that it keeps two copies of the files (one in
the actual checked-out tree, one in internal storage) mean it’s not a good
solution, at least not for my blog; I won’t claim that in general. Then posts on Reddit about
horror stories—people being locked out of github due to quota, as an example, or
this Stack Overflow
post
about git-lfs constraining how one uses git, convinced me that’s not what I
want. To each their own, but not for me—I might want to push this blog’s repo to
github, but I definitely wouldn’t want in that case to pay for github storage
for my blog images (which are copies, not originals). And yes, even in 2025,
those quotas are real—GitHub
limits—and
I agree with GitHub, storage and large bandwidth can’t be free.
Back to the future: git-annex
So back to git-annex. I thought it was going to be a simple thing, but oh boy,
was I wrong. It took me half a week of continuous (well, in free time) reading
and discussions with LLMs to understand a bit how it works. I think, honestly,
it’s a bit too complex, which is why the workflows
page lists seven (!) levels of
workflow complexity, from fully-managed to fully-manual. IMHO, respect to the
author for the awesome tool, but if you need a web app to help you manage git,
it hints that the tool is too complex.
I made the mistake of running git annex sync once, to realise it actually
starts pushing to my upstream repo and creating new branches and whatnot, so
after enough reading, I settled on workflow 6/7, since I don’t want another tool
to manage my git history. Maybe I’m an outlier here, but everything “automatic”
is a bit too much for me.
Once you do manage to wrap your head around how git-annex works (on the surface, at least), it
is a pretty cool thing. It uses a git-annex git branch to store
metainformation, and that is relatively clean. If you do run git annex sync,
it creates some extra branches, which I don’t like, but meh.
Trick question: what is a remote?
One of the most confusing things about git-annex was understanding its “remote”
concept. I thought a “remote” is a place where you replicate your data. But no,
that’s a special remote. A normal remote is a git remote, but one that is
expected to be reachable over git/ssh with command-line access. So if you have a git+ssh
remote, git-annex will not only try to push its above-mentioned branch, but
also copy the files. If such a remote is on a forge that doesn’t support
git-annex, then it will complain and get confused.
Of course, if you read the extensive docs, you just do git config remote.<name>.annex-ignore true, and it will understand that it should not
“sync” to it.
But, aside from this case, git-annex expects that all checkouts and clones of
the repository are both metadata and data. And if you do any annex commands in
them, all other clones will know about them! This can be unexpected, and you
find people complaining about it, but nowadays there’s a solution:
git clone … dir && cd dir
git config annex.private true
git annex init "temp copy"
This is important. Any “leaf” git clone must be followed by that annex.private true config, especially on CI/CD machines. Honestly, I don’t understand why
by default clones should be official data stores, but it is what it is.
I settled on not making any of my checkouts “stable”, but only the actual
storage places. Except those are not git repositories, but just git-annex
storage things. I.e., special remotes.
Is it confusing enough yet? 😄
Special remotes
The special remotes, as said, are what I expected to be the normal git-annex
remotes, i.e. places where the data is stored. But well, they exist, and while
I’m only using a couple of simple ones, there is a large number of
them. Among the interesting
ones: git-lfs, a
remote that allows also storing the git repository itself
(git-remote-annex),
although I’m a bit confused about this one, and most of the common storage
providers via the rclone
remote.
Plus, all of the special remotes support encryption, so this is a really neat
way to store your files across a large number of things, and handle replication,
number of copies, from which copy to retrieve, etc., as you wish.
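As a concrete illustration (with a made-up remote name, host and path, and assuming an rsync special remote), setting up an encrypted special remote and pushing copies to it looks roughly like this:
git annex initremote mybackup type=rsync rsyncurl=backup.example.com:/srv/annex encryption=shared
git annex copy --to mybackup          # replicate annexed content to that remote
git annex whereis some/large/file.png # list which remotes currently hold a copy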
And many other features
git-annex has tons of other features, so to some extent, the sky’s the limit.
Automatic selection of what to add to git-annex vs plain git, encryption handling,
number of copies, clusters, computed files, etc. etc. etc. I still think it’s
cool but too complex, though!
Uses
Aside from my blog, of course.
I’ve seen blog posts/comments about people using git-annex to track/store their
photo collection, and I could see very well how the remote encrypted repos (any
of the services supported by rclone) could be an N+2 copy or so. For me, tracking
photos would be a bit too tedious, but it could maybe work after more research.
A more practical thing would probably be replicating my local movie collection
(all legal, to be clear) better than “just run rsync from time to time� and
tracking the large files in it via git-annex. That’s an exercise for another
day, though, once I get more mileage with it - my blog pictures are copies, so I
don’t care much if they get lost, but movies are primary online copies, and I
don’t want to re-dump the discs. Anyway, for later.
Migrating to git-annex
Migrating here means ending in a state where all large files are in git-annex,
and the plain git repo is small. Just moving the files to git annex at the
current head doesn’t remove them from history, so your git repository is still
large; it won’t grow in the future, but it stays at its old size (and contains the
large files in its history).
In my mind, a nice migration would be: run a custom command, and all the history
is migrated to git-annex, so I can go back in time and still use git-annex.
I naïvely expected this would be easy and already available, only to find
comments on the git-annex site with unsure git-filter-branch calls and some
web discussions. This is the
discussion
on the git annex website, but it didn’t make me confident it would do the right
thing.
But that discussion is now 8 years old. Surely in 2025, with git-filter-repo,
it’s easier? And, maybe I’m missing something, but it is not. Not from the point
of view of plain git (that part is easy), but because git-annex
stores its data in git itself, so doing this properly across successive steps of
the repo (when replaying the commits) is, I think, not well-defined behaviour.
So I was stuck here for a few days, until I got an epiphany: As I’m going to
rewrite the repository, of course I’m keeping a copy of it from before
git-annex. If so, I don’t need the history, back in time, to be correct in the
sense of being able to retrieve the binary files too. It just needs to be
correct from the point of view of the actual Markdown and Haskell files that
represent the “meat� of the blog.
This simplified the problem a lot. At first, I wanted to just skip these files,
but this could also drop commits (git-filter-repo, by default, drops the commits
if they’re empty), and removing the files loses information - when they were
added, what the paths were, etc. So instead I came up with a rather clever idea,
if I might say so: since git-annex replaces files with symlinks already, just
replace the files with symlinks in the whole history, except symlinks that
are dangling (to represent the fact that files are missing). One could also use
empty files, but empty files are more “valid” in a sense than dangling symlinks,
which is why I settled on the latter.
Doing this with git-filter-repo is easy, in newer versions, with the
new --file-info-callback. Here is the simple code I used:
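The snippet itself did not survive in this copy of the post, so below is a rough reconstruction of what such a callback body could look like. The parameter names (filename, mode, blob_id) and the value.insert_file_with_contents() helper reflect my reading of the git-filter-repo documentation for --file-info-callback, and the extension list is invented, so treat this as a sketch rather than the author's actual code:
# Body passed to: git filter-repo --file-info-callback "$(cat callback-body.py)"
# Assumption: the callback receives filename, mode, blob_id plus a helper object
# named value, and must return a (filename, mode, blob_id) tuple.
LARGE = (b'.png', b'.jpg', b'.jpeg', b'.pdf')  # hypothetical "large file" extensions

if not filename.endswith(LARGE):
    # Markdown, Haskell, CSS etc. stay exactly as they were.
    return (filename, mode, blob_id)

# Replace the blob with a deliberately dangling symlink whose target explains
# why it points nowhere; mode 120000 marks a symlink entry in git.
target = b'.removed-before-git-annex-migration/' + filename
new_blob_id = value.insert_file_with_contents(target)
return (filename, b'120000', new_blob_id)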
This goes and replaces files with a symlink to nowhere, but the symlink should
explain why it’s dangling. Then later renames or moving the files around work
“naturally”, as the rename/mv doesn’t care about file contents. Then, once the
filtering is done, the remaining steps are (a rough command sketch follows below):
copy the (binary) files back from the original repository;
since they’re named the same, and in the same places, git sees a type change;
then simply run git annex add on those files.
For me it was easy as all such files were in a few directories, so just copying
those directories back, a few git-annex add commands, and done.
Of course, then adding a few rsync remotes, git annex copy --to, and the
repository was ready.
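Roughly, and with made-up directory and remote names, that final sequence looks like this:
# copy the binary files back from the pre-rewrite clone (paths are hypothetical)
cp -a ../blog-pre-annex/images ../blog-pre-annex/files .
# git now sees a type change (dangling symlink -> regular file); annex the real content
git annex add images files
# replicate to storage, e.g. an rsync special remote like the one sketched earlier
git annex copy --to mybackup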
Well, I also found a bug in my own Hakyll setup: on a fresh clone, when the
large files are just dangling symlinks, the builder doesn’t complain, just
ignores the images. Will have to fix.
Other resources
This is a blog that I read at the beginning, and I found it very useful as an
intro: https://switowski.com/blog/git-annex/. It didn’t help me understand how
it works under the covers, but it is well written. The author does use the
‘sync’ command though, which is too magic for me, but also agrees about its
complexity 😅
The proof is in the pudding
And now, for the first image actually added that never lived in the old
plain git repository. It’s not full-res/full-size, it’s cropped a bit on the
bottom.
Earlier in the year, I went to Paris for a very brief work trip, and I walked
around a bit—it was more beautiful than I remembered from way, way back. So
a somewhat random selection of a picture, but here it is:
Large Language Models have awed the world, emerging as the fastest-growing
application of all time — ChatGPT reached 100 million active users in January
2023, just two months after its launch. After an initial cycle, they have
gradually been mostly accepted and incorporated into various workflows, and their basic
mechanics are no longer beyond the understanding of people with moderate
computer literacy. Now, given the technology is better understood, we face the
question of how convenient LLM chatbots are for different occupations. This
article embarks on the question of how much LLMs can be useful for networking
applications.
This article systematizes querying three popular LLMs (GPT-3.5, GPT-4 and Claude
3) with questions taken from several network management online courses and
certifications, and presents a taxonomy of six axes along which the incorrect
responses were classified: Accuracy (correctness of the answers provided by
LLMs), Detectability (how easily errors in the LLM output can be identified),
Cause (for each incorrect answer, the underlying causes behind the error),
Explainability (the quality of explanations with which the LLMs support their
answers), Effects (impact of wrong answers on the users) and Stability (whether
a minor change, such as the change of the order of prompts, yields vastly
different answers for a single query).
The authors also measure four strategies towards improving answers:
Self-correction (giving back the LLM the original question and received answer,
as well as the expected correct answer, as part of the prompt), One-shot
prompting (adding to the prompt, “when answering user questions, follow this
example” followed by a similar correct answer), Majority voting (using the
answer that most models agree upon) and Fine tuning (further train on a specific
dataset to adapt the LLM to the particular task or domain). The authors noted
that, while some of those strategies were marginally useful,
they sometimes resulted in degraded performance.
The authors queried the commercially available instances of Claude and GPT,
reaching quite high results (89.4% for Claude 3, 88.7% for GPT-4 and 76.0% for
GPT-3.5), reaching scores over 90% for basic subjects, but faring notably worse
in topics that require understanding and converting between different numeric
notations, such as working with IP addresses, even if they are trivial
(i.e. presenting the subnet mask for a given network address expressed as the
typical IPv4 dotted-quad representation).
As a last item in the article, the authors mentioned they also compared
performance with three popular open source models (Llama3.1, Gemma2 and Mistral
with their default settings). They mention that, although those models are
almost 20 times smaller than the GPT-3.5 commercial model used, they reached
comparable performance levels. Sadly, the article does not delve deeper into
these models, which can be deployed locally and adapted to specific scenarios.
The article is easy to read and does not require deep mathematical or AI-related
knowledge. It presents a clear comparison along the described axes for the 503
multiple-choice questions presented. This article can be used as a guide for
structuring similar studies over different fields.
If you ever face the need to activate the PROXY Protocol in HaProxy
(e.g. if you're as unlucky as I am, and you have to use the Google Cloud TCP
proxy load balancer), be aware that there are two ways to do that.
Both are part of the frontend configuration.
accept-proxy
This one is the big hammer and forces the usage of the PROXY protocol
on all connections. Sample:
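The sample itself is missing from this copy of the post; a minimal frontend using it would look roughly like this (frontend name, port and backend are placeholders):
frontend fe_incoming
    # accept-proxy: every connection on this bind MUST start with a PROXY protocol header
    bind :443 accept-proxy
    default_backend be_app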
If you have to, e.g. during a migration phase, receive traffic both directly (without
the PROXY protocol header) and from a proxy (with the header), there is also a more
flexible option based on a tcp-request connection action. Sample:
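Again, the original sample is not preserved here; the flexible variant relies on an expect-proxy action and could look roughly like this (the source ranges are the GCP global TCP proxy ranges as I recall them, so verify them before use):
frontend fe_incoming
    bind :443
    # require the PROXY header only for connections coming from the load balancer ranges
    tcp-request connection expect-proxy layer4 if { src 130.211.0.0/22 35.191.0.0/16 }
    default_backend be_app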
Source addresses here are those of the GCP global TCP proxy frontends. Replace them with whatever
suits your case. Since this happens just after the TCP connection is established,
there is barely anything else available to match on besides the source address.
Yes, something was setting an ACL on it. Thus began the saga of figuring out what was doing that.
Firing up inotifywatch, I saw it was systemd-udevd or its udev-worker. But cranking up logging on that to maximum only showed me that uaccess was somehow doing this.
I started digging. uaccess turned out to be almost entirely undocumented. People say to use it, but there’s no description of what it does or how. Its purpose appears to be to grant access to devices to those logged in to a machine by dynamically adding them to ACLs for devices. OK, that’s a nice goal, but why was machine A doing this and not machine B?
I dug some more. I came across a hint that uaccess may only do that for a “seat”. A seat? I’ve not heard of that in Linux before.
Turns out there’s some information (older and newer) about this out there. Sure enough, on the machine with KDE, loginctl list-sessions shows me on seat0, but on the machine where I log in from ttyUSB0, it shows an empty seat.
But how to make myself part of the seat? I tried various udev rules to add the “seat” or “master-of-seat” tags, but nothing made any difference.
I finally gave up and did the old-fashioned rule to just make it work already:
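The rule itself didn't make it into this copy; the "old-fashioned" approach being described is a plain ownership/permission rule, something along these lines (file name, match and user are placeholders):
# /etc/udev/rules.d/99-usbserial-access.rules (hypothetical)
SUBSYSTEM=="tty", KERNEL=="ttyUSB[0-9]*", OWNER="myuser", MODE="0660"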
Our anonymous submitter, whom we'll call Carmen, embarked on her IT career with an up-and-coming firm that developed and managed eCommerce websites for their clients. After her new boss Russell walked her around the small office and introduced her to a handful of coworkers, he led her back to his desk to discuss her first project. Carmen brought her laptop along and sat down across from Russell, poised to take notes.
Russell explained that their newest client, Sharon, taught CPR classes. She wanted her customers to be able to pay and sign up for classes online. She also wanted the ability to charge customers a fee in case they cancelled on her.
"You're gonna build a static site to handle all this," he said.
Carmen nodded along as she typed out notes in a text file.
"Now, Sharon doesn't want to pay more than a few hundred dollars for the site," Russell continued, "so we're not gonna hook up an endpoint to use a service-provided API for payments."
Carmen glanced up from her laptop, perplexed. "How are we gonna do it, then?"
"Via email," Russell replied smoothly. "The customer will enter their CC info into basic form fields. When they click Submit, you're gonna send all that to Sharon's business address, and also CC it to yourself for backup and recovery purposes."
"Yep!" Russell replied. "Sharon knows to expect the emails."
Her heart racing with panic, Carmen desperately cast about for some way for this to be less awful. "Couldn't ... couldn't we at least encrypt the CC info before we send it to her?"
"She's not paying us for that," Russell dismissed. "This'll be easier to implement, anyway! You can handle it, can't you?"
"Yyyes—"
"Great! Go get started, let me know if you have any more questions."
Carmen had plenty of questions and even more misgivings, but she'd clearly be wasting her time if she tried to bring them up. There was no higher boss to appeal to, no coworkers she knew well enough who could slip an alternate suggestion into Russell's ear on her behalf. She had no choice but to swallow her good intentions and implement it exactly the way Russell wanted it. Carmen set up the copied emails to forward automatically to a special folder so that she'd never have to look at them. She cringed every time a new one came in, reflecting on how lucky Sharon and her customers were that the woman supporting her website had a conscience.
And then one day, a thought came to Carmen that really scared her: in how many places, in how many unbelievable ways, was her sensitive data being treated like this?
Eventually, Carmen moved on to bigger and better things. Her first project most likely rests in the hands of Russell's newest hire. We can only hope it's an honest hire.
Author: Majoki On the endless rooftop of the fact-ory, they sat in the beat up armchairs amid a bristling forest of antennae and corrugated steel backlit by the godly effulgence of towers and tenements that defined the horizon. It was steamy hot though well past midnight. The heat never quite radiated away these days, but […]
Microsoft today released security updates to fix at least 67 vulnerabilities in its Windows operating systems and software. Redmond warns that one of the flaws is already under active attack, and that software blueprints showing how to exploit a pervasive Windows bug patched this month are now public.
The sole zero-day flaw this month is CVE-2025-33053, a remote code execution flaw in the Windows implementation of WebDAV — an HTTP extension that lets users remotely manage files and directories on a server. While WebDAV isn’t enabled by default in Windows, its presence in legacy or specialized systems still makes it a relevant target, said Seth Hoyt, senior security engineer at Automox.
Adam Barnett, lead software engineer at Rapid7, said Microsoft’s advisory for CVE-2025-33053 does not mention that the Windows implementation of WebDAV is listed as deprecated since November 2023, which in practical terms means that the WebClient service no longer starts by default.
“The advisory also has attack complexity as low, which means that exploitation does not require preparation of the target environment in any way that is beyond the attacker’s control,” Barnett said. “Exploitation relies on the user clicking a malicious link. It’s not clear how an asset would be immediately vulnerable if the service isn’t running, but all versions of Windows receive a patch, including those released since the deprecation of WebClient, like Server 2025 and Windows 11 24H2.”
Microsoft warns that an “elevation of privilege” vulnerability in the Windows Server Message Block (SMB) client (CVE-2025-33073) is likely to be exploited, given that proof-of-concept code for this bug is now public. CVE-2025-33073 has a CVSS risk score of 8.8 (out of 10), and exploitation of the flaw leads to the attacker gaining “SYSTEM” level control over a vulnerable PC.
“What makes this especially dangerous is that no further user interaction is required after the initial connection—something attackers can often trigger without the user realizing it,” said Alex Vovk, co-founder and CEO of Action1. “Given the high privilege level and ease of exploitation, this flaw poses a significant risk to Windows environments. The scope of affected systems is extensive, as SMB is a core Windows protocol used for file and printer sharing and inter-process communication.”
Beyond these highlights, 10 of the vulnerabilities fixed this month were rated “critical” by Microsoft, including eight remote code execution flaws.
Notably absent from this month’s patch batch is a fix for a newly discovered weakness in Windows Server 2025 that allows attackers to act with the privileges of any user in Active Directory. The bug, dubbed “BadSuccessor,” was publicly disclosed by researchers at Akamai on May 21, and several public proof-of-concepts are now available. Tenable’s Satnam Narang said organizations that have at least one Windows Server 2025 domain controller should review permissions for principals and limit those permissions as much as possible.
Adobe has released updates for Acrobat Reader and six other products addressing at least 259 vulnerabilities, most of them in an update for Experience Manager. Mozilla Firefox and Google Chrome both recently released security updates that require a restart of the browser to take effect. The latest Chrome update fixes two zero-day exploits in the browser (CVE-2025-5419 and CVE-2025-4664).
For a detailed breakdown on the individual security updates released by Microsoft today, check out the Patch Tuesday roundup from the SANS Internet Storm Center. Action 1 has a breakdown of patches from Microsoft and a raft of other software vendors releasing fixes this month. As always, please back up your system and/or data before patching, and feel free to drop a note in the comments if you run into any problems applying these updates.
The LTS Team was particularly active in May, publishing a higher than normal number of advisories, as well as helping with a wide range of updates to packages in stable and unstable, plus some other interesting work. We are also pleased to welcome several updates from contributors outside the regular team.
Notable security updates:
containerd, prepared by Andreas Henriksson, fixes a vulnerability that could cause containers launched as non-root users to be run as root
libapache2-mod-auth-openidc, prepared by Moritz Schlarb, fixes a vulnerability which could allow an attacker to crash an Apache web server with libapache2-mod-auth-openidc installed
request-tracker4, prepared by Andrew Ruthven, fixes multiple vulnerabilities which could result in information disclosure, cross-site scripting and use of weak encryption for S/MIME emails
postgresql-13, prepared by Bastien Roucariès, fixes an application crash vulnerability that could affect the server or applications using libpq
dropbear, prepared by Guilhem Moulin, fixes a vulnerability which could potentially result in execution of arbitrary shell commands
openjdk-17, openjdk-11, prepared by Thorsten Glaser, fixes several vulnerabilities, which include denial of service, information disclosure or bypass of sandbox restrictions
glibc, prepared by Sean Whitton, fixes a privilege escalation vulnerability
Notable non-security updates:
wireless-regdb, prepared by Ben Hutchings, updates information reflecting changes to radio regulations in many countries
This month’s contributions from outside the regular team include the libapache2-mod-auth-openidc update mentioned above, prepared by Moritz Schlarb (the maintainer of the package); the update of request-tracker4, prepared by Andrew Ruthven (the maintainer of the package); and the updates of openjdk-17 and openjdk-11, also noted above, prepared by Thorsten Glaser.
Additionally, LTS Team members contributed stable updates of the following packages:
rubygems and yelp/yelp-xsl, prepared by Lucas Kanashiro
simplesamlphp, prepared by Tobias Frost
libbson-xs-perl, prepared by Roberto C. Sánchez
fossil, prepared by Sylvain Beucler
setuptools and mydumper, prepared by Lee Garrett
redis and webpy, prepared by Adrian Bunk
xrdp, prepared by Abhijith PA
tcpdf, prepared by Santiago Ruano Rincón
kmail-account-wizard, prepared by Thorsten Alteholz
Other contributions were also made by LTS Team members to packages in unstable:
proftpd-dfsg DEP-8 tests (autopkgtests) were provided to the maintainer, prepared by Lucas Kanashiro
a regular upload of libsoup2.4, prepared by Sean Whitton
a regular upload of setuptools, prepared by Lee Garrett
Freexian, the entity behind the management of the Debian LTS project, has been working for some time now on the development of an advanced CI platform for Debian-based distributions, called Debusine. Recently, Debusine has reached a level of feature implementation that makes it very usable. Some members of the LTS Team have been using Debusine informally, and during May LTS coordinator Santiago Ruano Rincón has made a call for the team to help with testing of Debusine, and to help evaluate its suitability for the LTS Team to eventually begin using as the primary mechanism for uploading packages into Debian. Team members who have started using Debusine are providing valuable feedback to the Debusine development team, thus helping to improve the platform for all users. Actually, a number of updates, for both bullseye and bookworm, made during the month of May were handled using Debusine, e.g. rubygems’s DLA-4163-1.
DebConf, the annual Debian Conference, is coming up in July and, as is customary each year, the week preceding the conference will feature an event called DebCamp. The DebCamp week provides an opportunity for teams and other interested groups/individuals to meet together in person in the same venue as the conference itself, with the purpose of doing focused work, often called “sprints”. LTS coordinator Roberto C. Sánchez has announced that the LTS Team is planning to hold a sprint primarily focused on the Debian security tracker and the associated tooling used by the LTS Team and the Debian Security Team.
Austin is a frame stack sampling profiler
for Python. It allows profiling Python applications without instrumenting them,
at the cost of some accuracy, and is the only one of its kind
presently packaged for Debian. Unfortunately, it hadn’t been uploaded in a while
and hence the last Python version it worked with was
3.8. We updated it to a current version and
also dealt with a number of architecture-specific problems (such as unintended
sign promotion, 64-bit time_t fallout and strictness due to -Wformat-security)
in cooperation with upstream. With luck, it will migrate in time for trixie.
Preparing for DebConf 25, by Stefano Rivera and Santiago Ruano Rincón
DebConf 25 is quickly approaching, and the
organization work doesn’t stop. In May, Stefano continued supporting the
different teams. Just to give a couple of examples, Stefano made changes to the
DebConf 25 website to make BoF
and sprint
submissions public, so interested people can already know if a BoF or sprint for
a given subject is planned, allowing coordination with the proposer; or to
enhance how statistics are made public
to help the work of the local team.
Santiago has participated in different tasks, including the logistics of the
conference, like preparing more information
about the public transportation that will be available. Santiago has also taken
part in activities related to fundraising and reviewing more event proposals.
Miscellaneous contributions
Lucas fixed security issues in Valkey in unstable.
Lucas tried to help with the update of Redis to version 8 in unstable. The
package hadn’t been updated for a while due to licensing issues, but now
upstream maintainers fixed them.
Lucas uploaded around 20 ruby-* packages to unstable that hadn’t been updated for
some years, to make them build reproducibly. Thanks to the Reproducible Builds folks
for pointing out those issues. Also some unblock requests (and follow-ups) were
needed to make them reach trixie in time for the release.
Lucas is organizing a Debian Outreach session for DebConf 25, reaching out to
all interns of Google Summer of Code and Outreachy programs from the last year.
The session will be presented by in-person interns, along with video recordings
from interns who were interested in participating but could not attend the
conference.
Lucas continuously works on DebConf Content team tasks: replying to speakers
and sponsors, and communicating internally with the team.
Carles improved po-debconf-manager: fixed bugs reported by the Catalan translator,
added the possibility to import packages outside of Salsa, added support for using non-default
project branches on Salsa, and polished it to get ready for DebCamp.
Carles tested the new “apt” in trixie and reported bugs to “apt”,
“installation-report”, “libqt6widget6”.
Carles used po-debconf-manager to import the remaining 80 packages, reviewed 20
translations, and submitted 54 translations (as MRs or bugs).
Carles prepared some topics for the translation BoF at DebConf (gathered feedback,
first pass on topics).
Helmut sent 25 patches for cross compilation failures.
Helmut reviewed, refined and applied a patch from Jochen Sprickerhof to make
the Multi-Arch hinter emit more hints for pure Python modules.
Helmut sat down with Christoph Berg (not affiliated with Freexian) and
extended unschroot
to support directory-based chroots with overlayfs. This is a feature that was
lost in transitioning from sbuild’s schroot backend to its unshare backend.
unschroot implements the schroot API just enough to be usable with sbuild
and otherwise works a lot like the unshare backend. As a result,
apt.postgresql.org now performs its
builds contained in a user namespace.
Helmut looked into a fair number of rebootstrap failures, most of which
related to musl or gcc-15, and imported patches or workarounds to make those
builds proceed.
Helmut updated dumat
to use sqop, fixing earlier PGP verification problems, thanks to Justus Winter
and Neal Walfield explaining a lot of Sequoia at MiniDebConf Hamburg.
Helmut got the previous zutils update for /usr-move wrong again and had to
send another update.
Helmut looked into why debvm’s autopkgtests were flaky and with lots of
help from Paul Gevers and Michael Tokarev tracked it down to a
race condition in qemu. He updated debvm to
trigger the problem less often and also fixed a wrong dependency using
Luca Boccassi’s patch.
Santiago continued the switch to sbuild
for Salsa CI (that was stopped for some months), and has been mainly testing
linux,
since it’s a complex project that heavily customizes the pipeline. Santiago is
preparing the changes for linux to submit a MR soon.
In openssh, Colin tracked down some intermittent sshd crashes to a
root cause, and issued
bookworm and bullseye updates for CVE-2025-32728.
Colin spent some time fixing up fail2ban,
mainly reverting a patch that caused its tests to fail and would have banned
legitimate users in some common cases.
Colin backported upstream fixes for CVE-2025-48383
(django-select2) and CVE-2025-47287
(python-tornado) to unstable.
Stefano supported video streaming and recording for 2 miniDebConfs in May:
Maceió and Hamburg.
These had overlapping streams for one day, which is a first for us.
Stefano packaged the new version of python-virtualenv that includes our
patches for not including the wheel for wheel.
Stefano got all involved parties to agree (in principle) to meet at DebConf
for a mediated discussion on a dispute that was brought to the technical
committee.
Anupa coordinated the swag purchase for DebConf 25 with Juliana and Nattie.
Anupa joined the publicity team meeting for discussing the upcoming events and
BoF at DebConf 25.
Anupa worked with the publicity team to publish a Bits post welcoming the GSoC 2025
interns.
We've talked about ASP .Net WebForms in the past. In this style of development, everything was event driven: click a button, and the browser sends an HTTP request to the server which triggers a series of events, including a "Button Click" event, and renders a new page.
When ASP .Net launched, one of the "features" was a lazy repaint in browsers which supported it (aka, Internet Explorer), where you'd click the button, the page would render on the server, download, and then the browser would repaint only the changed areas, making it feel more like a desktop application, albeit a laggy one.
This model didn't translate super naturally to AJAX style calls, where JavaScript updated only portions of the page. The .Net team added some hooks for it- special "AJAX enabled" controls, as well as helper functions, like __doPostBack, in the UI to generate URLs for "postbacks" to trigger server side execution. A postback is just a POST request with .NET specific state data in the body.
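To make the mechanics concrete, here is a simplified version of the plumbing WebForms emits; the real generated script has additional checks, so treat this as an illustration of the shape, not the exact code:
<input type="hidden" name="__EVENTTARGET" value="" />
<input type="hidden" name="__EVENTARGUMENT" value="" />
<script type="text/javascript">
function __doPostBack(eventTarget, eventArgument) {
    var theForm = document.forms[0];
    theForm.__EVENTTARGET.value = eventTarget;     // ID of the control raising the event
    theForm.__EVENTARGUMENT.value = eventArgument; // free-form argument for that control
    theForm.submit();                              // POST back to the same page
}
// e.g. a control rendered by the framework ends up invoking:
// __doPostBack('btnSave','');
</script>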
All this said, Chris maintains a booking system for a boat rental company. Specifically, he's a developer at a company which the boat rental company hires to maintain their site. The original developer left behind a barnacle covered mess of tangled lines and rotting hull.
Let's start with the view ASPX definition:
<script>
function btnSave_Click()
{
    if (someCondition)
    {
        //Trimmed for your own sanity
        //PostBack to Save Data into the Database.
        javascript:<%#getPostBack()%>;
    }
    else
    {
        return false;
    }
}
</script>
<html>
<body>
    <input type="button" value=" Save Booking " id="btnSave" class="button" title="Save [Alt]" onclick="btnSave_Click()" />
</body>
</html>
__doPostBack is the .NET method for generating URLs for performing postbacks, and specifically, it populates two request fields: __EVENTTARGET (the ID of the UI element triggering the event) and __EVENTARGUMENT, an arbitrary field for your use. I assume getPostBack() is a helper method which calls that. The code in btnSave_Click is as submitted, and I think our submitter may have mangled it a bit in "trimming", but I can see the goal is to ensure that when the onclick event fires, we perform a "postback" operation with some hard-coded values for __EVENTTARGET and __EVENTARGUMENT.
Or maybe it isn't mangled, and this code just doesn't work?
I enjoy that the tool-tip "title" field specifies that it's "[Alt]" text, and that the name of the button includes extra whitespace to ensure that it's padded out to a good rendering size, instead of using CSS.
But we can skip past this into the real meat. How this gets handled on the server side:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    '// Trimmed more garbage
    If Page.IsPostBack Then
        'Check if save button has been Clicked.
        Dim eventArg As String = Request("__EVENTARGUMENT")
        Dim offset As Integer = eventArg.IndexOf("@@@@@")
        If (offset > -1) Then
            'this is an event that we raised. so do whatever you need to here.
            Save()
        End If
    End If
End Sub
From this, I conclude that getPostBack populates the __EVENTARGUMENT field with a pile of "@", and we use that to recognize that the save button was clicked. Except, and this is the important thing, if they populated the ID property with btnSave, then ASP .Net would automatically call btnSave_Click. The entire point of the __doPostBack functionality is that it hooks into the event handling pattern and acts just like any other postback, but lets you have JavaScript execute as part of sending the request.
The entire application is a boat with multiple holes in it; it's taking on water and going down, and like a good captain, Chris is absolutely not going down with it and is instead looking for a lifeboat.
Chris writes:
The thing in its entirety is probably one of the biggest WTFs I've ever had to work with.
I've held off submitting because nothing was ever straightforward enough to be understood without posting the entire website.
Honestly, I'm still not sure I understand it, but I do hate it.
Author: Steve Smith, Staff Writer Lewis got the assistant at a regifting exchange at the company Christmas party. He didn’t turn it on until February when a snowstorm kept him working from home for a week. It had been opened before, the setup was already complete, but it asked for his name, and gleaned network […]
Eddie's company hired a Highly Paid Consultant to help them retool their systems for a major upgrade. Of course, the HPC needed more and more time, and the project ran later and later and ended up wildly over budget, so the HPC had to be released, and Eddie inherited the code.
What followed was a massive crunch to try and hit absolutely hard delivery dates. Management didn't want their team "rewriting" the expensive code they'd already paid for; they just wanted "quick fixes" to get it live. Obviously, the HPC's code must be better than theirs, right?
After release, a problem appeared in one of their sales-related reports. The point-of-sale report was meant to show which items were available at any given retail outlet, in addition to sales figures. Because their business dealt in a high volume of seasonal items, the list of items was expected to change every quarter.
The users weren't seeing the new items appear in the report. This didn't make very much sense- it was a report. The data was in the database. The report was driven by a view, also in the database, which clearly was returning the correct values? So the bug must be in the code which generated the report…
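The report code itself was trimmed from the submission; the pattern being described amounts to something like the following sketch (VB-flavored guesswork on my part, with invented item names; FColumn12, POSItemDesc and grdResults are taken from the surrounding text):
If POSItemDesc = "Snow Globe" Then
    grdResults.Columns.FromKey("FColumn12").Header.Caption = "Snow Globe"
ElseIf POSItemDesc = "Beach Towel" Then
    grdResults.Columns.FromKey("FColumn12").Header.Caption = "Beach Towel"
ElseIf POSItemDesc = "Pumpkin Spice Mug" Then
    grdResults.Columns.FromKey("FColumn12").Header.Caption = "Pumpkin Spice Mug"
End If
' ...and so on, one branch for every seasonal item that existed when the report was written.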
First, it's worth noting that inside of the results grid display item, the HPC named the field FColumn12, which is such a wonderfully self-documenting name, I'm surprised we aren't all using that everywhere. But the more obvious problem is that the list of possible items is hard-coded into the report; items which don't fit one of these if statements don't get displayed.
At no point did the person writing this see the pattern of "I check if a field equals a string, and then set another field equal to that string," and say, "maybe there's a better way?" At no point in the testing process did anyone try this report with a new item?
It was easy enough for Eddie to change the name of the column in the results grid, and replace all this code with a simpler: grdResults.Columns.FromKey("POSItem").Header.Caption = POSItemDesc, which also had the benefit of actually working, but we're all left puzzling over why this happened in the first place. It's not like the HPC was getting paid per line of code. Right? Right?
Of course not- no HPC would willingly be paid based on any metric that has an objective standard, even if the metric is dumb.
Author: Julian Miles, Staff Writer In a dusty corridor away from busy areas of Area 702, two people with ill-fitting lab coats concealing their uniforms are huddled under a disconnected monitoring camera. One takes a hit on a vape stick. The other lights a cigar. “I heard old Kendrix panicked after Prof Devensor collapsed. Nobody […]
It has been nearly five months now since I published my open letter to Democratic candidates and organizations. Since then I have, unsurprisingly, received dozens of texts and emails asking me to "Donate $5 now!" For a while I responded to every one pointing them to my Open Letter and asking them to read it. I was expecting (hoping for?) one of three responses. 1) "You are
This was my hundred-and-thirty-first month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4168-1] openafs security update fixing three CVEs related to theft of credentials, crashes or buffer overflows.
[DLA 4196-1] kmail-account-wizard security update to fix one CVE related to a man-in-the-middle attack when using http instead of https to get some configuration.
[DLA 4198-1] espeak-ng security update to fix five CVEs related to buffer overflow or underflow in several functions and a floating point exception. Thanks to Samuel Thibault for having a look at my debdiff.
[#1106867] created Bookworm pu-bug for kmail-account-wizard. Thanks to Patrick Franz for having a look at my debdiff.
I also continued my work on libxmltok and suricata. This month I also had to do some support on seger, for example to inject packages newly needed for builds.
Debian ELTS
This month was the eighty-second ELTS month. During my allocated time I uploaded or worked on:
[ELA-1444-1] kmail-account-wizard security update to fix two CVEs in Buster related to a man-in-the-middle attack when using http instead of https to get some configuration. The other issue is about a misleading UI, in which the state of encryption is shown wrong.
[ELA-1445-1] espeak-ng security update to fix five CVEs in Stretch and Buster. The issues are related to buffer overflow or underflow in several functions and a floating point exception.
All packages I worked on have been on the list of longstanding packages. For example, espeak-ng has been on this list for more than nine months. I now understand that there is a reason why packages end up on this list: some parts of the software have been almost completely reworked, so the patches need a “reverse” rework. For some packages this is easy, but for others this rework needs quite some time. I also continued to work on libxmltok and suricata.
Debian Printing
Unfortunately I didn’t find any time to work on this topic.
Thanks a lot to the Release Team who quickly handled all my unblock bugs!
FTP master
It is that time of the year when just a few packages arrive in NEW: it is Hard Freeze. So I enjoy this period and basically just take care of kernels and other important packages. As people seem to be more interested in discussions than in fixing RC bugs, my period of rest seems set to continue for a while. So thanks for all these valuable discussions, and real thanks to the few people who still take care of Trixie. This month I accepted 146 and rejected 10 packages. The overall number of packages that got accepted was 147.
Author: Simon Read To: All staff RE: Causality Protocol De-prioritisation Null/null/null, 00:00 This communication serves as formal notice. Treisman Industries no longer operates under linear temporal constraints. All protocols reliant upon fixed sequencing have been deprecated. Causality is to be regarded as a legacy framework, maintained only where local perception demands continuity. Departments previously dependent […]
My Debian contributions this month were all
sponsored by
Freexian. Things were a bit quieter than usual, as for the most part I was
sticking to things that seemed urgent for the upcoming trixie release.
After my appeal for help last month to
debug intermittent sshd crashes, Michel
Casabona helped me put together an environment where I could reproduce it,
which allowed me to track it down to a root
cause and fix it. (I
also found a misuse of
strlcpy affecting at
least glibc-based systems in passing, though I think that was unrelated.)
I backported fixes for some security vulnerabilities to unstable (since
we’re in freeze now so it’s not always appropriate to upgrade to new
upstream versions):
Recently someone in our #remotees channel at work asked about WFH setups and given quite a few things changed in mine, I thought it's time to post an update.
But first, a picture!
(Yes, it's cleaner than usual, how could you tell?!)
desk
It's still the same Flexispot E5B, no change here. After 7 years (I bought mine in 2018) it still works fine.
If I had to buy a new one, I'd probably get a four-legged one for more stability (they have become quite affordable now), but there is no immediate need for that.
chair
It's still the IKEA Volmar. Again, no complaints here.
hardware
Now here we finally have some updates!
laptop
A Lenovo ThinkPad X1 Carbon Gen 12, Intel Core Ultra 7 165U, 32GB RAM, running Fedora (42 at the moment).
It's connected to a Lenovo ThinkPad Thunderbolt 4 Dock. It just works™.
workstation
It's still the P410, but mostly unused these days.
monitor
An AOC U2790PQU 27" 4K. I'm running it at 150% scaling, which works quite decently these days (no comparison to when I got it).
speakers
As the new monitor didn't want to take the old Dell soundbar, I have upgraded to a pair of Alesis M1Active 330 USB.
It's not a Shure, for sure, but does the job well and Christian was quite satisfied with the results when we recorded the Debian and Foreman specials of Focus on Linux.
keyboard
It's still the ThinkPad Compact USB Keyboard with TrackPoint.
I had to print a few fixes and replacement parts for it, but otherwise it's doing great.
Replacement feet, because I broke one while cleaning the keyboard.
USB cable clamp, because it kept falling out and disconnecting.
Seems Lenovo stopped making those, so I really shouldn't break it any further.
mouse
Logitech MX Master 3S. The surface of the old MX Master 2 got very sticky at some point and it had to be replaced.
other
notepad
I'm still terrible at remembering things, so I still write them down in an A5 notepad.
whiteboard
I've also added a (small) whiteboard on the wall right of the desk, mostly used for long term todo lists.
coaster
Turns out Xeon-based coasters are super stable, so it lives on!
yubikey
Yepp, still a thing. Still USB-A because... reasons.
headphones
Still the Bose QC25, by now on the third set of ear cushions, but otherwise working great and the odd 15€ cushion replacement does not justify buying anything newer (which would have the same problem after some time, I guess).
I did add a cheap (~10€) Bluetooth-to-Headphonejack dongle, so I can use them with my phone too (shakes fist at modern phones).
And I do use the headphones more in meetings, as the Alesis speakers fill the room more with sound and thus sometimes produce a bit of an echo.
charger
The Bose need AAA batteries, and so do some other gadgets in the house, so there is a technoline BC 700 charger for AA and AAA on my desk these days.
light
Yepp, I've added an IKEA Tertial and an ALDI "face" light.
No, I don't use them much.
KVM switch
I've "built" a KVM switch out of an USB switch, but given I don't use the workstation that often these days, the switch is also mostly unused.
Author: Colin Jeffrey As the sentient slime mould squelched slowly across the asteroid it lived on, it found its mind – such as it was – occupied by a single thought: Ludwig van Beethoven. This was strange for several reasons, most obvious being that slime moulds are not renowned for their thoughts on music. Or […]
I'll avoid political hollering, this weekend. Especially as we're all staring with bemusement, terror -- and ideally popcorn -- at the bizarre displays of toddler-hysteria foaming from D.C. In fact, some wise heads propose that we respond by rebuilding our institutions - and confidence - from the ground up. And hence:
#3 Also resilience related! As a member of CERT - the nationwide Community Emergency Response Team - I urge folks to consider taking the training. As a bottom-level 'responder' at least you'll know some things to do, if needed. *
*The FEMA site has been experiencing... 'problems'... but I hope this link works. Fortunately the training is mostly done by local fire departments, but your badge and gear may come slower than normal.
#4 Giving blood regularly may not just be saving the lives of other people, it could also be improving your own blood's health at a genetic level, according to a new study. An international team of researchers compared samples from 217 men who had given blood more than 100 times in their lives, to samples from 212 men who had donated less than 10 times, to look for any variance in blood health. "Activities that put low levels of stress on blood cell production allow our blood stem cells to renew and we think this favors mutations that further promote stem cell growth rather than disease." (Well, I just gave my 104th pint, so…)
#5 Nothing prepares you for the future better than Science Fiction! I started an online org TASAT as a way for geeky SF readers to maybe someday save the world!
...And now let's get to science! After a couple of announcements...
== Yeah, you may have heard this already, but... ==
Okay, it's just a puff piece... that I can't resist sharing with folks, about an honor from my alma mater, Caltech. It's seldom that I get Imposter's Syndrome. But in this case, well, innumerable classmates there were way smarter than me!
Also a couple of job announcements: First, Prof. Ted Parson and other friends at UCLA Law School are looking for a project director at UCLA’s new Emmett Institute on Climate Change and the Environment, with a focus on legal and social aspects of ‘geo-engineering’… the wide range of proposals (from absurd to plausibly helpful) to perhaps partially ease or palliate the effects of human-generated greenhouse pollution on the planet’s essential and life-giving balance.
To see some such proposals illustrated in fiction, look at Kim Stanley Robinson’s The Ministry for the Future (spreading cooling atmospheric aerosols) or my own novel Earth (ocean fertilization).
And now… science forges ahead!
== Life… as we now know it… ==
Complexity can get… complicated, and nowhere more so than in braaaains! For some years, the most intricate nervous systems ever modeled by science were varieties of worms or nematodes (e.g. C. elegans). But advances accelerate, and now a complete brain model – not just of neurons but of their detectable connections (synapses) – has been completed for the vastly larger brain of the Drosophila fruit fly! (Including the discovery of several new types of neurons.)
And sure, I maintain that neurons and synapses aren’t enough. We’re gonna need to understand the murky, non-linear contributions of intra-cellular ‘computational’ elements. Still… amazing stuff. And the process will get a lot faster.
Meanwhile… Allorecognition in nature is an individual creature’s distinction between self and other. Most generally in immune response to invasion of the self-boundary by that which is non-self. Almost all Earthly life forms exhibit this trait, with strong tenacity. An exception, described in the early 2020s, is Mnemiopsis or the “sea walnut,” a kind of comb jelly (‘jellyfish’) that can be divided arbitrarily and combine with other partial mnemiopses, merging into a new whole.
“LUCA, a common ancestor to all organisms and not the first life form, has been a controversial topic. Fossil evidence goes back as far as 3.4 billion years, yet this study proposes that LUCA might be close to being the same age as the Earth. The genetic code and DNA replication, which are two of the vital biological processes, might have developed almost immediately after the planet was formed.”
== Weird Earth life! ==
Sea Robins have the body of a fish, the wings of a bird, and multiple legs like a crab, in what appears to be another case of “carcinization” – life constantly re-inventing the crab body plan. Like the Qheuens in Brightness Reef. And yeah, it seems likely that the most common form of upper complex life we’ll find out there will look like crabs.
In 1987, a group of killer whales off the northwestern coast of North America briefly donned salmon “hats,” carrying dead fish on their heads for weeks. Recently, a male orca known as J27, or “Blackberry,” was photographed in Washington’s Puget Sound wearing a salmon on his head.
(I’m tempted to cite Vladimir Sorokin’s chilling/terrific short scifi novel – in a league with Orwell – Day of The Oprichnik – in which the revived czarist Oprichnina regime-enforcers go about town each day with a dog’s head on the roofs of their cars, and all traffic veers aside for them, as in olden times. (“That is your association, this time, Brin?” Hey, it’s the times. And a truly great - and terrifying - novel.))
Beyond life and death... Researchers found that skin cells extracted from deceased frog embryos were able to adapt to the new conditions of a petri dish in a lab, spontaneously reorganizing into multicellular organisms called xenobots. These organisms exhibited behaviors that extend far beyond their original biological roles. Specifically, these xenobots use their cilia – small, hair-like structures – to navigate and move through their surroundings, whereas in a living frog embryo, cilia are typically used to move mucus.
Analysis of 700 genomes of bacteria, archaea, and fungi -- excluding eukaryotes such as plants and animals that evolved later -- has found 57 gene families… though I think using modern genetic drift rates to converge those families backward may be a bit iffy. Still, if life started that early… and survived the Theia impact… then it implies that life starts very easily, and may be vastly pervasive in the universe.
And possibly even bigger news. Genes themselves may compete with each other like individual entities, in somewhat predictable ways: “…interactions between genes make aspects of evolution somewhat predictable and furthermore, we now have a tool that allows us to make those predictions…”.
== And maybe beef should be a... condiment? ==
“Today, almost half the world’s habitable land is used for agriculture. Of that, an astounding 80% is dedicated to livestock grazing and animal feed. This means 40% of the planet’s total habitable land is dedicated to animal products, despite the fact that meat, dairy and farmed fish combined provide just 17% of humanity’s calories. “Only a fraction of agricultural land (16%) is used to grow the crops that we eat directly, with an additional 4% for things like biofuels, textiles and tobacco. Just 38% of habitable land is forested, a slice of the pie that continues to shrink, primarily in diverse tropical regions where the greatest number of species live.”
Meanwhile.... This article talks about new ways to make food “from thin air.” Or, more accurately, ‘precision fermentation’ from hydrogen and human and agricultural waste.
== And finally...
An interesting interview with genetic paleontologist David Reich. 60,000 years ago the explosion of modern homo sapiens from Africa seemed to happen almost overnight.
As Reich points out, we had two new things. 1. Dogs and 2. an ability to reprogram ourselves culturally.
There followed - at an accelerating pace - a series of revolutions in our tool sets, cultural patterns and adaptability. Of course, I talked about this extensively in both Earth and Existence.
Welcome to our 5th report from the Reproducible Builds project in 2025! Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please do visit the Contribute page on our website.
Security audit of Reproducible Builds tools published
The Open Technology Fund’s (OTF) security partner Security Research Labs recently conducted an audit of some specific parts of the tools developed by Reproducible Builds. This form of security audit, sometimes called a “whitebox” audit, is a form of testing in which auditors have complete knowledge of the item being tested. The auditors assessed the various codebases for resilience against hacking, with key areas including differential report formats in diffoscope, common client web attacks, command injection, privilege management, hidden modifications in the build process and attack vectors that might enable denials of service.
The audit focused on three core Reproducible Builds tools: diffoscope, a Python application that unpacks archives of files and directories and transforms their binary formats into human-readable form in order to compare them; strip-nondeterminism, a Perl program that improves reproducibility by stripping out non-deterministic information such as timestamps or other elements introduced during packaging; and reprotest, a Python application that builds source code multiple times in various environments in order to test reproducibility.
[Colleagues] approached me to talk about a reproducibility issue they’d been having with some R code. They’d been running simulations that rely on generating samples from a multivariate normal distribution, and despite doing the prudent thing and using set.seed() to control the state of the random number generator (RNG), the results were not computationally reproducible. The same code, executed on different machines, would produce different random numbers. The numbers weren’t “just a little bit different” in the way that we’ve all wearily learned to expect when you try to force computers to do mathematics. They were painfully, brutally, catastrophically, irreproducibly different. Somewhere, somehow, something broke.
[We] present attestable builds, a new paradigm to provide strong source-to-binary correspondence in software artifacts. We tackle the challenge of opaque build pipelines that disconnect the trust between source code, which can be understood and audited, and the final binary artifact, which is difficult to inspect. Our system uses modern trusted execution environments (TEEs) and sandboxed build containers to provide strong guarantees that a given artifact was correctly built from a specific source code snapshot. As such it complements existing approaches like reproducible builds which typically require time-intensive modifications to existing build configurations and dependencies, and require independent parties to continuously build and verify artifacts.
The authors compare “attestable builds” with reproducible builds by noting an attestable build requires “only minimal changes to an existing project, and offers nearly instantaneous verification of the correspondence between a given binary and the source code and build pipeline used to construct it”, and proceed by determining that “[t]he overhead (42 seconds start-up latency and 14% increase in build duration) is small in comparison to the overall build time.”
Timo Pohl, Pavel Novák, Marc Ohm and Michael Meier have published a paper called Towards Reproducibility for Software Packages in Scripting Language Ecosystems. The authors note that past research into Reproducible Builds has focused primarily on compiled languages and their ecosystems, with a further emphasis on Linux distribution packages:
However, the popular scripting language ecosystems potentially face unique issues given the systematic difference in distributed artifacts. This Systemization of Knowledge (SoK) [paper] provides an overview of existing research, aiming to highlight future directions, as well as chances to transfer existing knowledge from compiled language ecosystems. To that end, we work out key aspects in current research, systematize identified challenges for software reproducibility, and map them between the ecosystems.
Ultimately, the three authors find that the literature is “sparse”, focusing on few individual problems and ecosystems, and therefore identify space for more critical research.
Distribution work
In Debian this month:
Ian Jackson filed a bug against the debian-policy package in order to delve into an issue affecting Debian’s support for cross-architecture compilation, multiple-architecture systems, reproducible builds’ SOURCE_DATE_EPOCH environment variable and the ability to recompile already-uploaded packages to Debian with a new/updated toolchain (binNMUs). Ian identifies a specific case in the libopts25-dev package, involving a manual page that had interesting downstream effects, potentially affecting backup systems. The bug generated a large number of replies, some of which reference similar or overlapping issues, such as this one from 2016/2017.
There is now a “Reproducibility Status” link for each app on f-droid.org, listed on every app’s page. Our verification server shows ✔️ or 💔 based on its build results, where ✔️ means our rebuilder reproduced the same APK file and 💔 means it did not. The IzzyOnDroid repository has developed a more elaborate system of badges which displays a ✅ for each rebuilder. Additionally, there is a sketch of a five-level graph to represent some aspects about which processes were run.
Hans compares the approach with projects such as Arch Linux and Debian that “provide developer-facing tools to give feedback about reproducible builds, but do not display information about reproducible builds in the user-facing interfaces like the package management GUIs.”
Arnout Engelen of the NixOS project has been working on reproducing the minimal installation ISO image. This month, Arnout has successfully reproduced the build of the minimal image for the 25.05 release without relying on the binary cache. Work on also reproducing the graphical installer image is ongoing.
In openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 295, 296 and 297 to Debian:
Don’t rely on zipdetails’ --walk argument being available, and only add that argument on newer versions after we test for that. […]
Review and merge support for NuGet packages from Omair Majid. […]
Merge support for an lzma comparator from Will Hollywood. […][…]
Chris also merged an impressive changeset from Siva Mahadevan to make disorderfs more portable, especially on FreeBSD. disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues […]. This was then uploaded to Debian as version 0.6.0-1.
Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 296 […][…] and 297 […][…], and disorderfs to version 0.6.0 […][…].
Website updates
Once again, there were a number of improvements made to our website this month including:
Incorporated a number of fixes from Sebastian Davis for the JavaScript SOURCE_DATE_EPOCH snippet, which did not handle non-integer values correctly. […]
Removed the JavaScript example that uses a ‘fixed’ timezone on the SOURCE_DATE_EPOCH page. […]
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility.
However, Holger Levsen posted to our mailing list this month in order to bring a wider awareness to funding issues faced by the Oregon State University (OSU) Open Source Lab (OSL). As mentioned on OSL’s public post, “recent changes in university funding makes our current funding model no longer sustainable [and that] unless we secure $250,000 in committed funds, the OSL will shut down later this year”. As Holger notes in his post to our mailing list, the Reproducible Builds project relies on hardware nodes hosted there. Nevertheless, Lance Albertson of OSL posted an update to the funding situation later in the month with broadly positive news.
Migrating the central jenkins.debian.net server from AMD Opteron to Intel Haswell CPUs. Thanks to IONOS for hosting this server since 2012.
After testing it for almost ten years, the i386 architecture has been dropped from tests.reproducible-builds.org. This is because, with the upcoming release of Debian trixie, i386 is no longer supported as a ‘regular’ architecture — there will be no official kernel and no Debian installer for i386 systems. As a result, a large number of nodes hosted by Infomaniak have been retooled from i386 to amd64.
Another node, ionos17-amd64.debian.net, which is used for verifying packages for all.reproduce.debian.net (hosted by IONOS) has had its memory increased from 40 to 64GB, and the number of cores doubled to 32 as well. In addition, two nodes generously hosted by OSUOSL have had their memory doubled to 16GB.
Lastly, we have been granted access to more riscv64 architecture boards, so now we have seven such nodes, all with 16GB memory and 4 cores that are verifying packages for riscv64.reproduce.debian.net. Many thanks to PLCT Lab, ISCAS for providing those.
Outside of this, a number of smaller changes were also made by Holger Levsen:
Fix a (harmless) typo in the multiarch_versionskew script. […]
In addition, Jochen Sprickerhof made a series of changes related to reproduce.debian.net:
Add out of memory detection to the statistics page. […]
Reverse the sorting order on the statistics page. […][…][…][…]
Improve the spacing between statistics groups. […]
Update a (hard-coded) line number used in debrebuild error message detection. […]
Support Debian unstable in the rebuilder-debian.sh script. […][…]
Rely on rebuildctl to sync only ‘arch-specific’ packages. […][…]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:
0xFFFF: Use SOURCE_DATE_EPOCH for date in manual pages.
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
OpenAI just published its annual report on malicious uses of AI.
By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams.
These operations originated in many parts of the world, acted in many different ways, and focused on many different targets. A significant number appeared to originate in China: Four of the 10 cases in this report, spanning social engineering, covert influence operations and cyber threats, likely had a Chinese origin. But we’ve disrupted abuses from many other countries too: this report includes case studies of a likely task scam from Cambodia, comment spamming apparently from the Philippines, covert influence attempts potentially linked with Russia and Iran, and deceptive employment schemes.
Reports like these give a brief window into the ways AI is being used by malicious actors around the world. I say “brief” because last year the models weren’t good enough for these sorts of things, and next year the threat actors will run their AI models locally—and we won’t have this kind of visibility.
... London!
This week, we're showcasing multiple submissions from two
regular participants who fell into the theme. Everybody else is
just going to have to wait for their turn next week.
First up it's
Daniel D.
"I wanted to see events for the dates I would be in London. Is
Skiddle (the website in question) telling me I should come to
London more often?" They're certainly being very generous with their interpretation of dates.
But wait, there's more! Daniel follows with a variation:
"Skiddle here again - let's choose June 7th to June 14th, but Skiddle knows
better and sets the dates to June 6th to June 13th."
"I was not aware the Berlin to London route passes through Hawaii
(which is
Mokulele's home turf)" chuckles our old friend
Michael R.
He seems to believe it's
an Error'd but I think the real WTF is simply the
Byzantine tapestry of partnerships, resellers, rebranding,
whitelabeling and masquerades in the air transport biz.
"Maybe it's just a Monday morning thing," he reports
from the airport.
But Monday had everybody troubled, and
Michael was already thinking of Friday.
"I am so sure I took the Circle Line just last Friday.
And the other lines have the option Monday-Friday/Saturday/Sunday."
I hope there isn't a subtext here.
Author: Naomi Klouda Snow fell on Alaska, and we celebrated. We swirled in a circle, tasting flakes of sky. “Kelp brew for everyone, even the children!” Jenna Ben shouted. How we celebrated! Three circles switched hands and partners aboard our oil platform’s broken asphalt. Sky poured in billowy pieces, turning the tarmac white – the […]
The Two Cultures is a term first used by C.P. Snow in a 1959
speech and monograph focused on the split between humanities and the
sciences. Decades later, the term was (quite famously) re-used by Leo
Breiman in a (somewhat prophetic) 2001
article about the split between ‘data models’ and ‘algorithmic
models’. In this note, we argue that statistical computing practice and
deployment can also be described via this Two Cultures
moniker.
Referring to the term linking these foundational pieces is of course
headline bait. Yet when preparing for the discussion of r2u in the invited talk in
Mons (video,
slides),
it occurred to me that there is in fact a wide gulf between two
alternative approaches of using R and, specifically,
deploying packages.
On the one hand we have the approach described by my friend Jeff as “you go to the Apple store,
buy the nicest machine you can afford, install what you need and
then never ever touch it”. A computer / workstation / laptop is
seen as an immutable object where every attempt at change may
lead to breakage, instability, and general chaos—and is hence best
avoided. If you know Jeff, you know he exaggerates. Maybe only slightly
though.
Similarly, an entire sub-culture of users striving for
“reproducibility” (and sometimes also “replicability”) does the same.
This is for example evidenced by the popularity of package renv by Rcpp collaborator and pal Kevin. The expressed hope is
that by nailing down a (sub)set of packages, outcomes are constrained to
be unchanged. Hope springs eternal, clearly. (Personally, if need be, I
do the same with Docker containers and their respective
Dockerfile.)
On the other hand, ‘rolling’ is a fundamentally different approach. One
(well known) example is Google building “everything at @HEAD”. The entire (ginormous)
code base is considered as a mono-repo which at any point in
time is expected to be buildable as is. All changes made are pre-tested
to be free of side effects to other parts. This sounds hard, and likely
is more involved than an alternative of a ‘whatever works’ approach of
independent changes and just hoping for the best.
Another example is a rolling (Linux) distribution as for example Debian. Changes are first committed to
a ‘staging’ place (Debian calls this the ‘unstable’ distribution) and,
if no side effects are seen, propagated after a fixed number of days to
the rolling distribution (called ‘testing’). With this mechanism,
‘testing’ should always be installable too. And based on the rolling
distribution, at certain times (for Debian roughly every two years) a
release is made from ‘testing’ into ‘stable’ (following more elaborate
testing). The released ‘stable’ version is then immutable (apart from
fixes for seriously grave bugs and of course security updates). So this
provides the connection between frequent and rolling updates, and
produces an immutable fixed set: a release.
This Debian approach has been influential for many other
projects—including CRAN, as can
be seen in aspects of its system providing a rolling set of curated
packages. Instead of a staging area for all packages, extensive tests
are made for candidate packages before adding an update. This aims to
ensure quality and consistency—and has worked remarkably well. We argue
that it has clearly contributed to the success and renown of CRAN.
Now, when accessing CRAN
from R, we fundamentally have
two accessor functions. But seemingly only one is widely known
and used. In what we may call ‘the Jeff model’, everybody is happy to
deploy install.packages() for initial
installations.
One of my #rstats coding rituals is that every time I load a @vincentab.bsky.social package
I go check for a new version because invariably it’s been updated with
18 new major features 😆
And that is why we have two cultures.
Because some of us, yours truly included, also use
update.packages() at recurring (frequent !!) intervals:
daily or near-daily for me. The goodness and, dare I say, gift of
packages is not limited to those by my pal Vincent. CRAN updates all the time, and
updates are (generally) full of (usually excellent) changes, fixes, or
new features. So update frequently! Doing (many but small) updates
(frequently) is less invasive than (large, infrequent) ‘waterfall’-style
changes!
But the fear of change, or disruption, is clearly pervasive. One can
only speculate why. Is the experience of updating so painful on other
operating systems? Is it maybe a lack of exposure / tutorials on best
practices?
These ‘Two Cultures’ coexist. When I delivered the talk in Mons, I
briefly asked for a show of hands among all the R users in the audience to see who
in fact does use update.packages() regularly. And maybe a
handful of hands went up: surprisingly few!
Now back to the context of installing packages: Clearly ‘only
installing’ has its uses. For continuous integration checks we generally
install into ephemeral temporary setups. Some debugging work may be with
one-off container or virtual machine setups. But all other uses may well
be under ‘maintained’ setups. So consider calling
update.packages() once in a while. Or even weekly or daily.
The rolling feature of CRAN is a real benefit, and it is
there for the taking and enrichment of your statistical computing
experience.
So to sum up, the real power is to use
install.packages() to obtain fabulous new statistical
computing resources, ideally in an instant; and
update.packages() to keep these fabulous resources
current and free of (known) bugs.
For both tasks, relying on binary installations accelerates
and eases the process. And where available, using binary
installation with system-dependency support as r2u does makes it easier
still, following the r2u slogan of ‘Fast. Easy.
Reliable. Pick All Three.’ Give it a try!
Ukraine has seen nearly one-fifth of its Internet space come under Russian control or sold to Internet address brokers since February 2022, a new study finds. The analysis indicates large chunks of Ukrainian Internet address space are now in the hands of shadowy proxy and anonymity services that are nested at some of America’s largest Internet service providers (ISPs).
The findings come in a report examining how the Russian invasion has affected Ukraine’s domestic supply of Internet Protocol Version 4 (IPv4) addresses. Researchers at Kentik, a company that measures the performance of Internet networks, found that while a majority of ISPs in Ukraine haven’t changed their infrastructure much since the war began in 2022, others have resorted to selling swathes of their valuable IPv4 address space just to keep the lights on.
For example, Ukraine’s incumbent ISP Ukrtelecom is now routing just 29 percent of the IPv4 address ranges that the company controlled at the start of the war, Kentik found. Although much of that former IP space remains dormant, Ukrtelecom told Kentik’s Doug Madory they were forced to sell many of their address blocks “to secure financial stability and continue delivering essential services.”
“Leasing out a portion of our IPv4 resources allowed us to mitigate some of the extraordinary challenges we have been facing since the full-scale invasion began,” Ukrtelecom told Madory.
Madory found much of the IPv4 space previously allocated to Ukrtelecom is now scattered to more than 100 providers globally, particularly at three large American ISPs — Amazon (AS16509), AT&T (AS7018), and Cogent (AS174).
Another Ukrainian Internet provider — LVS (AS43310) — in 2022 was routing approximately 6,000 IPv4 addresses across the nation. Kentik learned that by November 2022, much of that address space had been parceled out to over a dozen different locations, with the bulk of it being announced at AT&T.
IP addresses routed over time by Ukrainian provider LVS (AS43310) shows a large chunk of it being routed by AT&T (AS7018). Image: Kentik.
Ditto for the Ukrainian ISP TVCOM, which currently routes nearly 15,000 fewer IPv4 addresses than it did at the start of the war. Madory said most of those addresses have been scattered to 37 other networks outside of Eastern Europe, including Amazon, AT&T, and Microsoft.
The Ukrainian ISP Trinity (AS43554) went offline in early March 2022 during the bloody siege of Mariupol, but its address space eventually began showing up in more than 50 different networks worldwide. Madory found more than 1,000 of Trinity’s IPv4 addresses suddenly appeared on AT&T’s network.
Why are all these former Ukrainian IP addresses being routed by U.S.-based networks like AT&T? According to spur.us, a company that tracks VPN and proxy services, nearly all of the address ranges identified by Kentik now map to commercial proxy services that allow customers to anonymously route their Internet traffic through someone else’s computer.
From a website’s perspective, the traffic from a proxy network user appears to originate from the rented IP address, not from the proxy service customer. These services can be used for several business purposes, such as price comparisons, sales intelligence, web crawlers and content-scraping bots. However, proxy services also are massively abused for hiding cybercrime activity because they can make it difficult to trace malicious traffic to its original source.
IPv4 address ranges are always in high demand, which means they are also quite valuable. There are now multiple companies that will pay ISPs to lease out their unwanted or unused IPv4 address space. Madory said these IPv4 brokers will pay between $100-$500 per month to lease a block of 256 IPv4 addresses, and very often the entities most willing to pay those rental rates are proxy and VPN providers.
A cursory review of all Internet address blocks currently routed through AT&T — as seen in public records maintained by the Internet backbone provider Hurricane Electric — shows a preponderance of country flags other than the United States, including networks originating in Hungary, Lithuania, Moldova, Mauritius, Palestine, Seychelles, Slovenia, and Ukraine.
AT&T’s IPv4 address space seems to be routing a great deal of proxy traffic, including a large number of IP address ranges that were until recently routed by ISPs in Ukraine.
Asked about the apparent high incidence of proxy services routing foreign address blocks through AT&T, the telecommunications giant said it recently changed its policy about originating routes for network blocks that are not owned and managed by AT&T. That new policy, spelled out in a February 2025 update to AT&T’s terms of service, gives those customers until Sept. 1, 2025 to originate their own IP space from their own autonomous system number (ASN), a unique number assigned to each ISP (AT&T’s is AS7018).
“To ensure our customers receive the best quality of service, we changed our terms for dedicated internet in February 2025,” an AT&T spokesperson said in an emailed reply. “We no longer permit static routes with IP addresses that we have not provided. We have been in the process of identifying and notifying affected customers that they have 90 days to transition to Border Gateway Protocol routing using their own autonomous system number.”
Ironically, the co-mingling of Ukrainian IP address space with proxy providers has resulted in many of these addresses being used in cyberattacks against Ukraine and other enemies of Russia. Earlier this month, the European Union sanctioned Stark Industries Solutions Inc., an ISP that surfaced two weeks before the Russian invasion and quickly became the source of large-scale DDoS attacks and spear-phishing attempts by Russian state-sponsored hacking groups. A deep dive into Stark’s considerable address space showed some of it was sourced from Ukrainian ISPs, and most of it was connected to Russia-based proxy and anonymity services.
According to Spur, the proxy service IPRoyal is the current beneficiary of IP address blocks from several Ukrainian ISPs profiled in Kentik’s report. Customers can choose proxies by specifying the city and country they would like to proxy their traffic through. Image: Trend Micro.
Spur’s Chief Technology Officer Riley Kilmer said AT&T’s policy change will likely force many proxy services to migrate to other U.S. providers that have less stringent policies.
“AT&T is the first one of the big ISPs that seems to be actually doing something about this,” Kilmer said. “We track several services that explicitly sell AT&T IP addresses, and it will be very interesting to see what happens to those services come September.”
Still, Kilmer said, there are several other large U.S. ISPs that continue to make it easy for proxy services to bring their own IP addresses and host them in ranges that give the appearance of residential customers. For example, Kentik’s report identified former Ukrainian IP ranges showing up as proxy services routed by Cogent Communications (AS174), a tier-one Internet backbone provider based in Washington, D.C.
Kilmer said Cogent has become an attractive home base for proxy services because it is relatively easy to get Cogent to route an address block.
“In fairness, they transit a lot of traffic,” Kilmer said of Cogent. “But there’s a reason a lot of this proxy stuff shows up as Cogent: Because it’s super easy to get something routed there.”
Cogent declined a request to comment on Kentik’s findings.
As I wrote in my last post, Twitter's new encrypted DM infrastructure is pretty awful. But the amount of work required to make it somewhat better isn't large.
When Juicebox is used with HSMs, it supports encrypting the communication between the client and the backend. This is handled by generating a unique keypair for each HSM. The public key is provided to the client, while the private key remains within the HSM. Even if you can see the traffic sent to the HSM, it's encrypted using the Noise protocol and so the user's encrypted secret data can't be retrieved.
But this is only useful if you know that the public key corresponds to a private key in the HSM! Right now there's no way to know this, but there's worse - the client doesn't have the public key built into it, it's supplied as a response to an API request made to Twitter's servers. Even if the current keys are associated with the HSMs, Twitter could swap them out with ones that aren't, terminate the encrypted connection at their endpoint, and then fake your query to the HSM and get the encrypted data that way. Worse, this could be done for specific targeted users, without any indication to the user that this has happened, making it almost impossible to detect in general.
This is at least partially fixable. Twitter could prove to a third party that their Juicebox keys were generated in an HSM, and the key material could be moved into clients. This makes attacking individual users more difficult (the backdoor code would need to be shipped in the public client), but can't easily help with the website version[1] even if a framework exists to analyse the clients and verify that the correct public keys are in use.
It's still worse than Signal. Use Signal.
[1] Since they could still just serve backdoored Javascript to specific users. This is, unfortunately, kind of an inherent problem when it comes to web-based clients - we don't have good frameworks to detect whether the site itself is malicious.
(Edit: Twitter could improve this significantly with very few changes - I wrote about that here. It's unclear why they'd launch without doing that, since it entirely defeats the point of using HSMs)
When Twitter[1] launched encrypted DMs a couple of years ago, it was the worst kind of end-to-end encrypted - technically e2ee, but in a way that made it relatively easy for Twitter to inject new encryption keys and get everyone's messages anyway. It was also lacking a whole bunch of features such as "sending pictures", so the entire thing was largely a waste of time. But a couple of days ago, Elon announced the arrival of "XChat", a new encrypted message platform built on Rust with (Bitcoin style) encryption, whole new architecture. Maybe this time they've got it right?
tl;dr - no. Use Signal. Twitter can probably obtain your private keys, and admit that they can MITM you and have full access to your metadata.
The new approach is pretty similar to the old one in that it's based on pretty straightforward and well tested cryptographic primitives, but merely using good cryptography doesn't mean you end up with a good solution. This time they've pivoted away from using the underlying cryptographic primitives directly and into higher level abstractions, which is probably a good thing. They're using Libsodium's boxes for message encryption, which is, well, fine? It doesn't offer forward secrecy (if someone's private key is leaked then all existing messages can be decrypted) so it's a long way from the state of the art for a messaging client (Signal's had forward secrecy for over a decade!), but it's not inherently broken or anything. It is, however, written in C, not Rust[2].
That's about the extent of the good news. Twitter's old implementation involved clients generating keypairs and pushing the public key to Twitter. Each client (a physical device or a browser instance) had its own private key, and messages were simply encrypted to every public key associated with an account. This meant that new devices couldn't decrypt old messages, and also meant there was a maximum number of supported devices and terrible scaling issues and it was pretty bad. The new approach generates a keypair and then stores the private key using the Juicebox protocol. Other devices can then retrieve the private key.
Doesn't this mean Twitter has the private key? Well, no. There's a PIN involved, and the PIN is used to generate an encryption key. The stored copy of the private key is encrypted with that key, so if you don't know the PIN you can't decrypt the key. So we brute force the PIN, right? Juicebox actually protects against that - before the backend will hand over the encrypted key, you have to prove knowledge of the PIN to it (this is done in a clever way that doesn't directly reveal the PIN to the backend). If you ask for the key too many times while providing the wrong PIN, access is locked down.
But this is true only if the Juicebox backend is trustworthy. If the backend is controlled by someone untrustworthy[3] then they're going to be able to obtain the encrypted key material (even if it's in an HSM, they can simply watch what comes out of the HSM when the user authenticates if there's no validation of the HSM's keys). And now all they need is the PIN. Turning the PIN into an encryption key is done using the Argon2id key derivation function, using 32 iterations and a memory cost of 16MB (the Juicebox white paper says 16KB, but (a) that's laughably small and (b) the code says 16 * 1024 in an argument that takes kilobytes), which makes it computationally and moderately memory expensive to generate the encryption key used to decrypt the private key. How expensive? Well, on my (not very fast) laptop, that takes less than 0.2 seconds. How many attempts do I need to crack the PIN? Twitter's chosen to fix that to 4 digits, so a maximum of 10,000. You aren't going to need many machines running in parallel to bring this down to a very small amount of time, at which point private keys can, to a first approximation, be extracted at will.
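As a rough back-of-the-envelope sketch (not anything from the Juicebox or Twitter code), the arithmetic above can be spelled out: the 0.2 second figure and the 10,000-PIN space come from the paragraph above, while the number of parallel workers is purely an assumed illustration.

// Back-of-the-envelope estimate of brute-forcing a 4-digit PIN, using the
// ~0.2 s Argon2id derivation time measured in the post. The number of
// parallel workers is an illustrative assumption, not a figure from the post.
public class PinCrackEstimate {
    public static void main(String[] args) {
        final int pinSpace = 10_000;          // 4 decimal digits
        final double secondsPerGuess = 0.2;   // Argon2id (32 iterations, 16 MB) on a laptop
        final int parallelWorkers = 100;      // assumed: e.g. a small cluster or GPU rig

        double worstCaseSeconds = pinSpace * secondsPerGuess;         // single machine
        double parallelSeconds = worstCaseSeconds / parallelWorkers;  // embarrassingly parallel

        System.out.printf("Single machine, worst case: %.0f s (~%.1f min)%n",
                worstCaseSeconds, worstCaseSeconds / 60);
        System.out.printf("With %d workers: ~%.0f s%n", parallelWorkers, parallelSeconds);
    }
}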
Juicebox attempts to defend against this by supporting sharding your key over multiple backends, and only requiring a subset of those to recover the original. Twitter does seem to be making use of this: it uses three backends and requires data from at least two, but all the backends used are under x.com so are presumably under Twitter's direct control. Trusting the keystore without needing to trust whoever's hosting it requires a trustworthy communications mechanism between the client and the keystore. If the device you're talking to can prove that it's an HSM that implements the attempt limiting protocol and has no other mechanism to export the data, this can be made to work. Signal makes use of something along these lines using Intel SGX for contact list and settings storage and recovery, and Google and Apple also have documentation about how they handle this in ways that make it difficult for them to obtain backed up key material. Twitter has no documentation of this, and as far as I can tell does nothing to prove that the backend is in any way trustworthy. (Edit to add: The Juicebox API does support authenticated communication between the client and the HSM, but that relies on you having some way to prove that the public key you're presented with corresponds to a private key that only exists in the HSM. Twitter gives you the public key whenever you communicate with them, so even if they've implemented this properly you can't prove they haven't made up a new key and MITMed you the next time you retrieve your key)
On the plus side, Juicebox is written in Rust, so Elon's not 100% wrong. Just mostly wrong.
But ok, at least you've got viable end-to-end encryption even if someone can put in some (not all that much, really) effort to obtain your private key and render it all pointless? Actually no, since you're still relying on the Twitter server to give you the public key of the other party and there's no out of band mechanism to do that or verify the authenticity of that public key at present. Twitter can simply give you a public key where they control the private key, decrypt the message, and then reencrypt it with the intended recipient's key and pass it on. The support page makes it clear that this is a known shortcoming and that it'll be fixed at some point, but they said that about the original encrypted DM support and it never was, so that's probably dependent on whether Elon gets distracted by something else again. And the server knows who and when you're messaging even if they haven't bothered to break your private key, so there's a lot of metadata leakage.
Signal doesn't have these shortcomings. Use Signal.
[1] I'll respect their name change once Elon respects his daughter
[2] There are implementations written in Rust, but Twitter's using the C one with these JNI bindings
[3] Or someone nominally trustworthy but who's been compelled to act against your interests - even if Elon were absolutely committed to protecting all his users, his overarching goals for Twitter require him to have legal presence in multiple jurisdictions that are not necessarily above placing employees in physical danger if there's a perception that they could obtain someone's encryption keys
One of the key points of confusion for people unfamiliar with Java is the distinction between true object types, like Integer, and "primitive" types, like int. This is made worse by the collection types, like ArrayList, which needs to hold a true object type, but can't hold a primitive. A generic ArrayList<Integer> is valid, but ArrayList<int> won't compile. Fortunately for everyone, Java automatically "boxes" types- at least since Java 5, way back in 2004- so integerList.add(5) and int n = integerList.get(0) will both work just fine.
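As a minimal, self-contained illustration of the boxing behaviour described above (the names integerList and n mirror the examples in the paragraph; nothing here is taken from the submitted code):

import java.util.ArrayList;
import java.util.List;

public class AutoboxingDemo {
    public static void main(String[] args) {
        List<Integer> integerList = new ArrayList<>();

        // Autoboxing: the int literal 5 is wrapped into an Integer automatically.
        integerList.add(5);

        // Unboxing: the stored Integer is converted back to an int primitive.
        int n = integerList.get(0);

        System.out.println(n); // prints 5
    }
}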
Somebody should have told that to Alice's co-worker, who spends a lot of code to do some type gymnastics that they shouldn't have:
try {
ps = conn.prepareStatement(SQL_GET_LOT_WORKUP_STATUSES);
ps.setLong(1, _lotId);
rs = ps.executeQuery();
while (rs.next()) {
result.add(new Integer(rs.getInt(1)));
}
}
finally {
CloseUtil.close(ps,rs);
}
// instatiate a the array
_workupStatuses = new int[result.size()];
// convert the integers to ints
for (int h=0; h<result.size(); h++) {
_workupStatuses[h] = ((Integer)result.get(h)).intValue();
}
This runs a query against the database, and then iterates across the result to populate a List type with integers, and right away we're getting into confused territory. rs.getInt returns an int primitive, which they manually box with new Integer, and stuff into the List. And look, I wouldn't really call that a WTF, but it's what they do next that leaves me scratching my head.
They initialize a private member, _workupStatuses to a new array of ints. Then they copy every integer from the result collection into the array, first by casting the get return value to Integer, then by pulling off the intValue.
In the end, this whole dance happens because Java ResultSet types open cursors on the database side and thus don't have the capacity to tell you how many rows they returned. You need to iterate across each record until it runs out of results. That's why they populate an intermediate list. Then they can check the size and create an array, but that itself is a big why. I'm not going to say that using arrays in Java is an instant anti-pattern, but it's always something to be suspicious of, especially when you're holding result sets. It's probably a premature optimization: the key performance difference is on insertions, where an ArrayList may need to resize and copy its internal backing store.
My suspicion, however, is that this code falls into the category of "C programmer forced to do Java". They're comfortable with an array of integers, which covers 90% of the data types you use in C, but a dynamic, complicated data structure is horrifying to them. So they use it when they absolutely have to, and then throw it away as quickly as they can to get back to what they're familiar with.
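For contrast, here is a sketch of how the same result-set loop might look if boxing were left to the compiler and the data stayed in a List<Integer>. The class wrapper, query text and method name are assumptions added so the fragment compiles on its own; only the field and constant names echo the snippet above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class LotWorkupDao {
    // Hypothetical query text; the original class presumably defines its own.
    private static final String SQL_GET_LOT_WORKUP_STATUSES =
            "SELECT status FROM lot_workup WHERE lot_id = ?";

    // Same loop as the snippet above, but letting the compiler handle boxing
    // and keeping the results in a List<Integer> instead of copying to int[].
    List<Integer> loadWorkupStatuses(Connection conn, long lotId) throws SQLException {
        List<Integer> result = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(SQL_GET_LOT_WORKUP_STATUSES)) {
            ps.setLong(1, lotId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(rs.getInt(1)); // autoboxing does the new Integer(...) for us
                }
            }
        }
        return result;
    }
}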
Author: Colin Jeffrey The newly-created Department of Temporal Dysfunction hummed with bureaucratic indifference as a voice called out across the waiting room: “Number forty-seven!” “That’s you,” the Seraphim sitting next to Quetzalcoatl said, pointing to his ticket. “You’re forty seven.” Quetzalcoatl stood up, brushed back his resplendent feathers, and followed the caller through to an […]
Internet users, software developers, academics, entrepreneurs – basically everybody is now aware of the importance of considering privacy as a core part of our online experience. User demand, and various national or regional laws, have made privacy a continuously present subject. However, how do regular people – like ourselves, in our many capacities – feel about privacy? Lukas Antoine presents a series of experiments aiming at better understanding how people throughout the world understand privacy, and when privacy is held as more or less important than security in different contexts.
Particularly, privacy is often portrayed as a value set in tension with surveillance, and particularly state surveillance, in the name of security: conventional wisdom presents the idea of a privacy calculus. That is, it is often assumed that individuals continuously evaluate the costs and benefits of divulging their personal data, sharing data when they expect a positive net outcome and withholding it otherwise. This framework has been accepted for decades, and the author wishes to challenge it. This book is clearly his doctoral thesis in political science, and its contents are as thorough as expected of such a work.
The author presents three empirical studies based on cross-survey analysis. The first experiment explores the security justifications given for surveillance and how they influence support for it. The second one examines whether the stance on surveillance can be made dependent on personal convenience or financial cost. The third study explores whether privacy attitude is context-dependent or can be seen as a stable personality trait. The studies aim to address the shortcomings of published literature in the field, mainly: (a) the lack of comprehensive research on state surveillance, needed for a better understanding of how privacy is valued; (b) while several studies have tackled the subjective measure of privacy, there is a lack of cross-national studies to explain wide-ranging phenomena; (c) most studies in this regard are based on population-based surveys, which cannot establish causal relationships; and (d) a seemingly blind acceptance of the privacy calculus mentioned above, with no strong evidence that it accurately measures people’s motivations for disclosing or withholding their data.
The book is full of theoretical references and does a very good job of explaining the path followed by the author. It is, though, a heavy read, and, for people not coming from the social sciences tradition, it can lead to the occasional feeling of being lost. The conceptual and theoretical frameworks and the presented studies are thorough and clear. The author is honest in explaining when the data points at some of his hypotheses being disproven, while others are confirmed.
The book is aimed at people digging deep into this topic. Personally, I have authored several works on different aspects of privacy, but this book did get me thinking on many issues I had not previously considered. My only complaint would be that, for a publication from such a highly prestigious publisher, little attention has been paid to editorial aspects: sub-subsection depth is often excessive and unclear. Also, when publishing monographs based on doctoral works, it is customary to no longer refer to the work as a “thesis” and to soften some of the formal requirements such a work often has, with the aim of producing a more gentle and readable book; this book seems just like the mass production of an (otherwise very interesting and well made) thesis work.
Digital humanities is a young–though established–field. It deals with different expressions in which digital data manipulation techniques can be applied and used to analyze subjects that are identified as belonging to the humanities. Although most often used to analyze different aspects of literature or social network analysis, it can also be applied to other humanistic disciplines or artistic expressions. Digital humanities employs many tools, but those categorized as big data are among the most frequently employed. This book samples different takes on digital humanities, with the particularity that it focuses on Ibero-American uses. It is worth noting that this book is the second in a series of four volumes, published or set to be published between 2022 and 2026. As the output of a field survey, this book seems targeted towards fellow digital humanists – people interested in applying computational methods to further understand and research topics in the humanities. It is not a technical book in the sense Computer Science people would recognize as such, but several of the presented works do benefit from understanding some technical concepts.
The 12 articles (plus an introduction) that make up this book are organized in three parts:
(1) “Theoretical Framework” presents the ideas and techniques of data science (that make up the tools for handling big data), and explores how data science can contribute to literary analysis, all while noting that many such techniques are usually frowned upon in Latin America as data science “smells neoliberal”;
(2) “Methodological Issues” looks at specific issues through the lens of how they can be applied to big data, with specific attention given to works in Spanish; and
(3) “Practical Applications” analyzes specific Spanish works and communities based on big data techniques.
Several chapters treat a recurring theme: the simultaneous resistance and appropriation of big data by humanists. For example, at least three of the chapters describe the tensions between humanism (“aesthesis”) and cold, number-oriented data analysis (“mathesis”).
The analyzed works of Parts 2 and 3 are interesting and relatively easy to follow.
An ideological slant inescapably shows through several word choices – starting with the book’s and series’ name, which refers to the Spanish-speaking regions as “Ibero-America”, often seen as Eurocentric, in contrast with the “Latin America” term much more widely used throughout the region.
I will end with some notes about the specific versions of the book I reviewed. I read both an EPUB version and a print copy. The EPUB did not include links for easy navigation to the footnotes; that is, the typographical superscript markers are not hyperlinked to the location of the notes, so it is very impractical to try to follow them. The print version (unlike the EPUB) did not have an index; that is, the six pages before the introduction are missing from the print copy I received. For a book such as this one, not having an index hampers the ease of reading and referencing.
The current boom of artificial intelligence (AI) is based upon neural networks (NNs). In order for these to be useful, the network has to undergo a machine learning (ML) process: work over a series of inputs, and adjust the inner weights of the connections between neurons so that each of the data samples the network was trained on produces the right set of labels for each item. Federated learning (FL) appeared as a reaction to the data centralization power that traditional ML provides: instead of centrally controlling the whole training data, various different actors analyze disjoint subsets of data, and provide only the results of this analysis, thus increasing privacy while analyzing a large dataset. Finally, given multiple actors are involved in FL, how hard is it for a hostile actor to provide data that will confuse the NN, instead of helping it reach better performance? This kind of attack is termed a poisoning attack, and is the main focus of this paper. The authors set out to research how effective a hyperdimensional data poisoning attack (HDPA) can be at confusing a NN and causing it to misclassify both the items it was trained on and yet-unseen items.
Data used for NN training is usually represented as a large set of orthogonal vectors, each describing a different aspect of the item, allowing for very simple vector arithmetic operations. Thus, NN training is termed high-dimensional or hyperdimensional. The attack method described by the authors employs cosine similarity: in order to preserve similarity, a target hypervector is reflected over a given dimension, yielding a cosine-similar result that can trick ML models even when byzantine-robust defenses are in place.
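To make the cosine-similarity intuition concrete, here is a small toy sketch (not the paper's attack code): in a high-dimensional vector, flipping the sign of a single coordinate, which reflects the vector over that dimension, leaves it almost perfectly cosine-similar to the original even though its content has been altered. The dimensionality and random seed are arbitrary choices for illustration.

import java.util.Random;

public class ReflectionSimilarity {
    // Cosine similarity between two vectors of equal length.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        int d = 10_000;                      // hyperdimensional vector
        Random rng = new Random(42);
        double[] x = new double[d];
        for (int i = 0; i < d; i++) x[i] = rng.nextGaussian();

        // "Reflect" the vector over a single dimension by negating one coordinate.
        double[] reflected = x.clone();
        reflected[0] = -reflected[0];

        // With d large, the reflected vector remains almost perfectly
        // cosine-similar to the original.
        System.out.printf("cosine(x, reflected) = %.6f%n", cosine(x, reflected));
    }
}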
The paper is clear, though not an easy read. It explains in detail the mathematical operations, following several related although different threat models. The authors present the results of the experimental evaluation of their proposed model, comparing it to several other well-known adversarial attacks for visual recognition tasks, over pre-labeled datasets frequently used as training data, such as MNIST, Fashion-MNIST and CIFAR-10. They show that their method is not only more effective as an attack, but falls within the same time range as other surveyed attacks.
Adversarial attacks are, all in all, an important way to advance any field of knowledge; by publishing this attack, the authors will surely spark other works to detect and prevent this kind of alteration. It is important for AI implementers to understand the nature of this field and be aware of the risks that this work, as well as others cited in it, highlight: ML will train a computer system to recognize a dataset, warts and all; efficient as AI is, if noise is allowed into the training data (particularly adversarially generated noise), the trained model might present impaired performance.
If humans and robots were to be able to roam around the same spaces, mutually recognizing each other for what they are, how would interaction be? How can we model such interactions in a way that we can reason about and understand the implications of a given behavior? This book aims at answering this question.
The book is split into two very different parts. Chapters 1 through 3 are mostly written with a philosophical angle. It starts by framing the possibility of having sentient androids exist in the same plane as humans, without them trying to pass as us or vice versa. The first chapters look at issues related to personhood, that is, how androids can be treated as valid interaction partners in a society with humans, and how interactions with them can be seen as meaningful. In doing so, several landmarks of the past 40 years in the AI field are reviewed. The issues of the “Significant Concerns” that make up a society and give it coherence and of “Personhood and Relationality”, describing how this permeates from a society into each of the individuals that make it up, the relations between them and the social objects that bring individuals closer together (or farther apart) are introduced and explained.
The second part of the book is written from a very different angle, and the change in pace took me somewhat by surprise. Each subsequent chapter presents a different angle of the “Affinity” system, a model that follows some aspects of human behavior over time and in a given space. Chapter 4 introduces the “Affinity” environment: a 3D simulated environment with simulated physical laws and characteristics, where a number of agents (30-50 is mentioned as usual) interact. Agents have a series of attributes (“value memory”), can adhere to different programs (“narratives”), and gain or lose on some vectors (“economy”). They can sense the world around them with sensors, and can modify the world or signal other agents using effectors.
The last two chapters round out the book, as expected: the first presents a set of results from analyzing a given set of value systems, and the second gives readers the conclusions reached by the author. However, I was expecting more: at least a link to download the “Affinity” system, so readers could continue exploring it, modify some of the aspects it models to simulate a set of agents with different stories and narratives, or extend it to yet unforeseen behaviors; or, failing that, a more complete comparison of results than the evaluation of patterns resulting from a given run. The author is a well-known, prolific author in the field, and I was expecting bigger insights from this book.
Nevertheless, the book is an interesting and fun read, with important insights in both the first and second parts. There is a certain lack of connection between their respective rhythms, although the second part does build on the concepts introduced in the first one. Overall, I enjoyed reading the book despite expecting more.
Since December 2023, I have been publishing the reviews I write for
Computing Reviews as they get
published. I will do a slight change now: I will start pushing the reviews
to my blog as I write them and, of course, will update them with the
final wording and a link to their final location as soon as they are published. I’m
doing this because sometimes it takes very long for reviews to be approved,
and I want to share them with my blog’s readers!
So, please bear with this a bit: I’ll send a (short!) flood of my latest
four pending reviews today.
I saw this document on running DeepSeek R1 [1] and decided to give it a go. I downloaded the llama.cpp source and compiled it and downloaded the 131G of data as described. Running it with the default options gave about 7 CPU cores in use. Changing the --threads parameter to 44 caused it to use 17 CPU cores (changing it to larger numbers like 80 made it drop to 2.5 cores). I used the --n-gpu-layers parameter with the value of 1 as I currently have a GPU with only 6G of RAM (AliExpress is delaying my delivery of a PCIe power adaptor for a better GPU). Running it like this makes the GPU take 12W more power than standby and using 5.5G of VRAM according to nvidia-smi so it is doing a small amount of work, but not much. The documentation refers to the DeepSeek R1 1.58bit model which I’m using as having 61 layers so presumably less than 2% of the work is done on the GPU.
Running like this it takes 2 hours of CPU time (just over 3 minutes of elapsed time at 17 cores) to give 8 words of output. I didn’t let any tests run long enough to give complete output.
The documentation claims that it will run on CPU with 20G of RAM. In my tests it takes between 161G and 195G of RAM to run depending on the number of threads. The documentation describes running on the CPU as “very slow” which presumably means 3 words per minute on a system with a pair of E5-2699A v4 CPUs and 256G of RAM.
When I try to use more than 44 threads I get output like “system_info: n_threads = 200 (n_threads_batch = 200) / 44” and it seems that I only have a few threads actually in use. Apparently there’s some issue with having more threads than the 44 CPU cores in the system.
I was expecting this to go badly and it met my expectations in that regard. But it was interesting to see exactly how it went badly. It seems that if I had a GPU with 24G of VRAM I’d still have 54/61 layers running on the CPU so even the largest of home GPUs probably wouldn’t make much difference.
Maybe if I configured the server to have hyper-threading enabled and 88 HT cores then I could have 88 threads and about 34 CPU cores in use which might help. But even if I got the output speed from 3 to 6 words per minute that still wouldn’t be very usable.
It was October 02019, and Thea Sommerschield had hit a wall. She was working on her doctoral thesis in ancient history at Oxford, which involved deciphering Greek inscriptions that were carved on stones in Western Sicily more than 2,000 years earlier. As is often the case in epigraphy — the study and interpretation of ancient inscriptions written on durable surfaces like stone and clay — many of the texts were badly damaged. What’s more, they recorded a variety of dialects, from a variety of different periods, which made it harder to find patterns or fill in missing characters.
At a favorite lunch spot, she shared her frustrations with Yannis Assael, a Greek computer scientist who was then working full-time at Google DeepMind in London while commuting to Oxford to complete his own PhD. Assael told Sommerschield he was working with a technology that might help: a recurrent neural network, a form of artificial intelligence able to tackle complex sequences of data. They set to work training a model on digitized Greek inscriptions written before the fifth century, similar to how ChatGPT was trained on vast quantities of text available on the internet.
Sommerschield watched with astonishment as the missing text from the damaged inscriptions began to appear, character by character, on her computer screen. After this initial success, Assael suggested they build a model based on transformer technology, which weights characters and words according to context. Ithaca, as they called the new model, was able to fill in gaps in political decrees from the dawn of democracy in Athens with 62% accuracy, compared to 25% for human experts working alone. When human experts worked in tandem with Ithaca, the results were even better, with accuracy increasing to 72%.
Left: Ithaca's restoration of a damaged inscription of a decree concerning the Acropolis of Athens. Right: In August 02023, Vesuvius Challenge contestant Luke Farritor, 21, won the competition's $40,000 First Letters Prize for successfully decoding the word ΠΟΡΦΥΡΑϹ (porphyras, meaning "purple") in an unopened Herculaneum scroll. Left photo by Marsyas, Epigraphic Museum, WikiMedia CC BY 2.5. Right photo by The Vesuvius Challenge.
Ithaca is one of several ancient code-cracking breakthroughs powered by artificial intelligence in recent years. Since 02018, neural networks trained on cuneiform, the writing system of Mesopotamia, have been able to fill in lost verses from the story of Gilgamesh, the world’s earliest known epic poem. In 02023, a project known as the Vesuvius Challenge used 3D scanners and artificial intelligence to restore handwritten texts that hadn’t been read in 2,000 years, revealing previously unknown works by Epicurus and other philosophers. (The scrolls came from a luxurious villa in Herculaneum, buried during the same eruption of Mount Vesuvius that destroyed Pompeii. When scholars had previously tried to unroll them, the carbonized papyrus crumbled to dust.)
Phaistos Disk (c. 01850–01600 BCE).
Yet despite these advances, a dozen or so ancient scripts — the writing systems used to transcribe spoken language — remain undeciphered. These include such mysteries as the one-of-a-kind Phaistos Disk, a spiral of 45 symbols found on a single sixteen-inch clay disk in a Minoan palace on Crete, and Proto-Elamite, a script used 5,000 years ago in what is now Iran, which may have consisted of a thousand distinct symbols. Some, like Cypro-Minoan — which transcribes a language spoken in the Late Bronze Age on Cyprus — are tantalizingly similar to early European scripts that have already been fully deciphered. Others, like the quipu of the Andes — intricately knotted ropes made of the wool of llamas, vicuñas, and alpacas — stretch our definitions of how speech can be transformed into writing.
Inca quipu (c. 01400–01532).
In some cases, there is big money to be won: a reward of one million dollars is on offer for the decipherer of the Harappan script of the Indus Valley civilization of South Asia, as well as a $15,000-per-character prize for the successful decoder of the Oracle Bone script, the precursor to Chinese.
Cracking these ancient codes may seem like the kind of challenge AI is ideally suited to solve. After all, neural networks have already bested human champions at chess, as well as the most complex of all games, Go. They can detect cancer in medical images, predict protein structures, synthesize novel drugs, and converse fluently and persuasively in 200 languages. Given AI’s ability to find order in complex sets of data, surely assigning meaning to ancient symbols would be child’s play.
But if the example of Ithaca shows the promise of AI in the study of the past, these mystery scripts reveal its limitations. Artificial neural networks might prove a crucial tool, but true progress will come through collaboration between human neural networks: the intuitions and expertise stored in the heads of scholars, working in different disciplines in real-world settings.
“AI isn’t going to replace human historians,” says Sommerschield, who is now at the University of Nottingham. “To us, that is the biggest success of our research. It shows the potential of these technologies as assistants.” She sees artificial intelligence as a powerful adjunct to human expertise. “To be an epigrapher, you have to be an expert not just in the historical period, but also in the archaeological context, in the letter form, in carbon dating.” She cautions against overstating the potential of AI. “We’re not going to have an equivalent of ChatGPT for the ancient world, because of the nature of the data. It’s not just low in quantity, it’s also low in quality, with all kinds of gaps and problems in transliteration.”
Ithaca was trained on ancient Greek, a language we’ve long known how to read, and whose entire corpus amounts to tens of thousands of inscriptions. The AI models that have filled in lost verses of Gilgamesh are trained on cuneiform, whose corpus is even larger: hundreds of thousands of cuneiform tablets can be found in the storerooms of the world’s museums, many of them still untranslated. The problem with mystery scripts like Linear A, Cypro-Minoan, Rongorongo, and Harappan is that the total number of known inscriptions can be counted in the thousands, and sometimes in the hundreds. Not only that, in most cases we have no idea what spoken language they’re meant to encode.
Harappan script as seen on the Pashupati seal (c. 02200 BCE).
“Decipherment is kind of like a matching problem,” explains Assael. “It’s different from predicting. You’re trying to match a limited number of characters to sounds from an older, unknown language. It’s not a problem that’s well suited to these deep neural network architectures that require substantial amounts of data.”
Human ingenuity remains key. Two of the greatest intellectual feats of the 20th century involved the decipherment of ancient writing systems. In 01952, when Michael Ventris, a young English architect, announced that he’d cracked the code of Linear B, a script used in Bronze Age Crete, newspapers likened the accomplishment to the scaling of Mount Everest. (Behind the scenes, the crucial grouping and classifying of characters on 180,000 index cards into common roots — the grunt work that would now be performed by AI — was done by Alice Kober, a chain-smoking instructor from Brooklyn College.)
Illustration of a Linear B tablet from Pylos.
The decipherment of the Maya script, which is capable of recording all human thought using bulbous jaguars, frogs, warriors’ heads, and other stylized glyphs, involved a decades-long collaboration between Yuri Knorozov, a Soviet epigrapher, and American scholars working on excavations in the jungles of Central America.
While the interpreting of Egyptian hieroglyphics is held up as a triumph of human ingenuity, the Linear B and Mayan codes were cracked without the help of a Rosetta Stone to point the way. With Linear B, the breakthrough came when Ventris broke with the established thinking, which held that it transcribed Etruscan — a script scholars can read aloud, but whose meaning still remains elusive — and realized that it corresponded to a form of archaic Greek spoken 500 years before Homer. In the case of ancient Mayan, long thought to be a cartoonish depiction of universal ideas, it was only when scholars acknowledged that it might transcribe the ancestors of the languages spoken by contemporary Maya people that the decipherment really began. Today, we can read 85% of the glyphs; it is even possible to translate Shakespeare’s Hamlet into ancient Mayan.
A panel of a royal woman with Maya script visible on the sides (c. 0795).
Collaborating across cultures and disciplines, and carrying out paradigm-shedding leaps of intuition, are not the strong points of existing artificial neural networks. But that doesn’t mean AI can’t play a role in decipherment of ancient writing systems. Miguel Valério, an epigrapher at the Autonomous University of Barcelona, has worked on Cypro-Minoan, the script used on Cyprus 3,500 years ago. Two hundred inscriptions, on golden jewelry, metal ingots, ivory plaques, and four broken clay tablets, have survived. Valério was suspicious of the scholarly orthodoxy, which attributed the great diversity in signs to the coexistence of three distinct forms of the language.
To test the theory that many of the signs were in fact allographs — that is, variants, like the capital letter “G” and “g,” its lower-case version — Valério worked with Michele Corazza, a computational linguist at the University of Bologna, to design a custom-built neural network they called Sign2Vecd. Because the model was unsupervised, it searched for patterns without applying human-imposed preconceptions to the data set.
“The machine learned how to cluster the signs,” says Valério, “but it didn’t do it simply on the basis of their resemblance, but also on the specific context of a sign in relation to other signs. It allowed us to create a three-dimensional plot of the results. We could see the signs floating in a sphere, and zoom in to see their relationship to each other, and whether they’d been written on clay or metal.”
Left: Separation of Cypro-Minoan signs from clay tablets (in green) and signs found in other types of inscription (in red) in the 3D scatter plot. Right: Separation of a Cypro-Minoan grapheme in two groups in the 3D scatter plot.1
The virtual sphere allowed Valério to establish a sign-list — the equivalent of the list of 26 letters in our alphabet, and the first step towards decipherment — for Cypro-Minoan, which he believes has about 60 different signs, each corresponding to a distinct syllable.
“The issue is always validation. How do you know if the result is correct if the script is undeciphered? What we did was to compare it to a known script, the Cypriot Greek syllabary, which is closely related to Cypro-Minoan. And we found the machine got it right 70% of the time.” But Valério believes no unsupervised neural net, no matter how powerful, will crack Cypro-Minoan on its own. “I don’t see how AI can do what human epigraphers do traditionally. Neural nets are very useful tools, but they have to be directed. It all depends on the data you provide them, and the questions you ask them.”
The latest advances in AI have come at a time when there has been a revolution in our understanding of writing systems. A generation ago, most people were taught that writing was invented once, in Mesopotamia, about 5,500 years ago, as a tool of accountancy and state bureaucracy. From there, the standard thinking went, it spread to Egypt, and hieroglyphics were simplified into the alphabet that became the basis for recording most European languages. It is now accepted that writing systems were not only invented to keep track of sheep and units of grain, but also to record spiritual beliefs and tell stories. (In the case of Tifinagh, an ancient North African Berber script, there is evidence that writing was used primarily as a source of fun, for puzzle-making and graffiti.) Monogenesis, the idea that the Ur-script diffused from Mesopotamia, has been replaced by the recognition that writing was invented independently in China, Egypt, Central America, and — though this remains controversial — in the Indus Valley, where 4,000 inscriptions have been unearthed at sites that were home to one of the earliest large urban civilizations.
The most spectacular example of a potential “invention of writing” is Rongorongo, a writing system found on Rapa Nui, the island famous for its massive carved stone heads. Also known as Easter Island, it is 1,300 miles from any other landmass in the South Pacific. Twenty-six tablets have been discovered, made of a kind of wood native to South Africa. Each has been inscribed, apparently using a shark’s tooth as a stylus, with lines of dancing stick-figures, stylized birds and sea creatures. The tablets were recently dated to the late 01400s, two centuries before Europeans first arrived on the island.
"A View of the Monuments of Easter Island, Rapa Nui" by William Hodges (c. 01775–01776).
For computational linguist Richard Sproat, Rongorongo may be the script AI can offer the most help in decoding. “It’s kind of a decipherer’s dream,” says Sproat, who worked on recurrent neural nets for Google, and is now part of an AI start-up in Tokyo. “There are maybe 12,000 characters, and some of the inscriptions are quite long. We know that it records a language related to the modern Rapa Nui, which Easter Islanders speak today.” Archaeologists have even reported eyewitness accounts of the ceremonies in which the tablets were inscribed. And yet, points out Sproat, even with all these head starts, and access to advanced AI, nobody has yet come close to a convincing decipherment of Rongorongo.
A photo of Rongorongo Tablet E (c. early 01880s). The tablet, given to the University of Louvain, was destroyed in a fire in 01914. From L'île de Pâques et ses mystères (Easter Island and its Mysteries) by Stéphen-Charles Chauvet (01935).
The way forward depends on finding more inscriptions, and that comes down to old-fashioned “dirt” archaeology, and the labor-intensive process of unearthing ancient artifacts. (The best-case scenario would be finding a “bilingual,” a modern version of the Rosetta Stone, whose parallel inscriptions in Greek and demotic allowed 19th-century scholars to decipher Egyptian hieroglyphics.) But the code of Cypro-Minoan, or Linear A, or the quipu of the Andes, won’t be cracked by a computer scientist alone. It’s going to take a collaboration with epigraphers working with all the available evidence, some of which is still buried at archaeological sites.
“As a scholar working in social sciences,” says Valério of the Autonomous University of Barcelona, “I feel I’m obliged to do projects in the digital humanities these days. If we pursue things that are perceived as traditional, no one is going to grant us money to work. But these traditional things are also important. In fact, they’re the basis of our work.” Without more material evidence, painstakingly uncovered, documented, and digitized, no AI, no matter how powerful, will be able to decipher the writing systems that will help us bring the lost worlds of the Indus Valley, Bronze Age Crete, and the Incan Empire back to life.
Perhaps the most eloquent defense of traditional scholarship comes from the distinguished scholar of Aegean civilization, Silvia Ferrara, who supervised Valério and Corazza’s collaboration at the University of Bologna.
“The computer is no deus ex machina,” Ferrara writes in her book The Greatest Invention (02022). “Deep learning can act as co-pilot. Without the eye of the humanist, though, you don’t stand a chance at decipherment.”
Notes
1. Figure and caption reproduced from Corazza M, Tamburini F, Valério M, Ferrara S (02022) Unsupervised deep learning supports reclassification of Bronze age cypriot writing system. PLoS ONE 17(7): e0269544 under a CC BY 4.0 license. https://doi.org/10.1371/journal.pone.0269544
Honestly, I'm surprised that it was made static. Sure, static is the correct choice for this function, at least if we're describing anything about this function as "correct". I'm still surprised. It's got an accurate name given its behavior, and it's scoped correctly. It still shouldn't exist, and I have no idea what led to it existing, but that's not surprising.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Kenny O’Donnell Touchdown. The ship rumbled. The landing gear drilled into the asteroid, anchoring his one-man yacht. The asteroid, only a kilometre long and half as wide, was too small to hold a ship without anchors. His joints popped as he floated from his chair in micro-gravity. He grabbed a handle on the bulkhead […]
Author: Majoki When his son stepped through the privacy-field into his home office, Manfred began to disconnect. “You told me to come see you after I finished my homelearn session, Dad.” His son’s eyes narrowed disdainfully at the etherware bands his father removed from his head and set by the brainframe, their household’s direct link […]
A recent code-review on a new build pipeline got Sandra's attention (previously). The normally responsible and reliable developer responsible for the commit included this in their Jenkinsfile:
sh '''
if ! command -v yamllint &> /dev/null; then
if command -v apt-get &> /dev/null; then
apt-get update && apt-get install -y yamllint
elif command -v apk &> /dev/null; then
apk add --no-cache yamllint
elif command -v pip3 &> /dev/null; then
pip3 install --break-system-packages yamllint
fi
fi
find . -name '*.yaml' -exec yamllint {} \\; || true
find . -name '*.yml' -exec yamllint {} \\; || true
'''
So the goal of this script is to check to see if the yamllint command is available. If it isn't, we check if apt-get is available, and if it is, we use that to install yamllint. Failing that, we try apk, Alpine's package manager, and failing that we use pip3 to install it out of PyPI. Then we run it against any YAML files in the repo.
There are a few problems with this approach.
The first, Sandra notes, is that they don't use Alpine Linux, and thus there's no reason to try apk. The second is that this particular repository contains no Python components and thus pip is not available in the CI environment. Third, this CI job runs inside of a Docker image which already has yamllint installed.
Now, you'd think the developer responsible would have known this, given that this very merge request also included the definition of the Dockerfile for this environment. They'd already installed yamllint in the image.
Sandra writes:
This kind of sloppiness is also wildly out of character for him, to the point where my first thought was that it was AI-generated - especially since this was far from the only WTF in the submitted Jenkinsfile. Thankfully, it didn't pass code review and was sent back for intensive rework.
Finally, while the reality is that we'll always need to resolve some dependencies at build time, things like "tooling" and "linters" really belong in the definition of the build environment, not resolved at build time.
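To make that concrete, a minimal sketch of the approach Sandra describes (bake the linter into the image the pipeline already uses) might look like this; the base image and package names are illustrative, not taken from the actual Dockerfile:
FROM debian:stable-slim
# install the linter once, in the image, so the Jenkinsfile can simply assume it exists
RUN apt-get update && apt-get install -y --no-install-recommends yamllint && rm -rf /var/lib/apt/lists/*
With that in place, the pipeline step shrinks to the two find/yamllint lines, with no install logic at all.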
[Advertisement]
ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.
Tracked as CVE-2025-5054 and CVE-2025-4598, both vulnerabilities are race condition bugs that could enable a local attacker to obtain access to sensitive information. Tools like Apport and systemd-coredump are designed to handle crash reporting and core dumps in Linux systems.
[…]
“This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.”
Frederico planned to celebrate the new year with friends at the exotic international tourist haven of Molvania. When visiting the area, one could buy and use a MolvaPass (The Most Passive Way About Town!) for free or discounted access to cultural sites, public transit, and more. MolvaPasses were available for 3, 7, or 365 days, and could be bought in advance and activated later.
Still outside the country the week before his trip, Frederico had the convenience of buying a pass either online or via an app. He elected to use the website, sitting down before his home PC and entering the address into his web browser. Despite his fiber internet connection, he sat on a white screen for several seconds while the GoMolva Tourist Board website loaded. He then clicked the obvious Buy Now button in the top-right corner. After several more seconds, he was presented with a page requiring him to create an account.
Frederico did so, specifying his email address and a 16-character password suggested by Bitwarden. He then received a confirmation link in his email inbox. Upon clicking that, he was presented with an interface where he could add MolvaPasses to a shopping cart. He selected one 3-day pass and paid with PayPal. The website redirected him to the proper screen; he entered his PayPal credentials and confirmed the payment.
From there, he was redirected to a completely white screen. After waiting several seconds, a minute ... nothing changed. PayPal sent him a receipt, but there was no confirmation from the GoMolva Tourist Board website.
Frederico decided to refresh the page. This time, he saw the default Apache screen on CentOS.
His jaw almost hit the floor. They were still using CentOS, despite the fact that it'd been abandoned? Horrified, he bailed on that tab, desperately opening a fresh one and manually entering the URL again.
Finally, the page loaded successfully. Frederico was still logged in. From there, he browsed to the My Passes section. His 3-day MolvaPass was there, listed as Not activated.
This was exactly what Frederico had hoped he would see. With a sigh of relief, he turned his attention away from his laptop to his phone. For the sake of convenience, he wanted to download the MolvaPass app onto his phone. Upon doing so, he opened it and entered his username and password on the initial screen. After clicking Login, the following message appeared: The maximum length of the password is 15 characters.
Frederico's blood froze. How was that possible? There'd been no errors or warnings when he'd created his login. Everything had been fine then. Heart pounding, Frederico tried logging in again. The same error appeared. He switched back to his computer, where the site was still open. He browsed to My Account and selected Change Password.
A new screen prompted him for the old password, and a new one twice. He hurriedly filled in the fields and clicked the Change Password button.
A message appeared: Your MolvaPass has been successfully activated.
"What?!" Frederico blurted out loud. There was nothing to click but an OK button.
A follow-up message assured him, Password has been successfully changed.
As terror bolted down his spine, an expletive flew from his mouth. He navigated back to My Passes. There beside his newly-purchased pass was the big green word Activated.
"I only changed the password!" he pleaded out loud to a god who clearly wasn't listening. He forced a deep breath upon his panicked self and deliberated what to do from there. Support. Was there any way to get in touch with someone who could undo the activation or refund his money? With some Googling, Frederico found a toll-free number he could call from abroad. After he rapidly punched the number into his phone, a stilted robot voice guided him through a phone menu to the "Support" option.
"FoR MoLvaPaSs suPpOrt, uSe ThE cOnTaCt FoRm oN tHe GoMoLvA WeBzOnE." The robot hung up.
Frederico somehow refrained from hurling his phone across the room. Turning back to his PC, he scrolled down to the website footer, where he found a Contact us link. On this page, there was a contact form and an email address. Frederico filled out the contact form in detail and clicked the Submit button.
A new message appeared: Unable to send the request, try again later.
Frederico rolled his eyes toward the heavens. Somehow, he managed to wait a good five minutes before trying again—in vain. Desperately, he took his detailed message and emailed it to the support address, hoping for a quick response.
Minutes crawled past. Hours. Nothing by the time Frederico went to bed. It wasn't until the next morning that a response came back. The entire message read: The MolvaPass should have been activated once you reached Molvania, not before.
Consumed with soul-burning fury, Frederico hit Caps Lock on his keyboard. MAYBE MY PREVIOUS EMAIL WAS TOO LONG OR DIFFICULT TO UNDERSTAND?? ALL I DID WAS CHANGE THE PASSWORD!!!!
Several hours later, the following reply: The change of pw is not related to the activation of the pass.
Frederico directed his rage toward escalating the matter. He managed to track down the company that'd built the GoMolva website, writing to their support to demand a cancellation of the MolvaPass and a full refund. A few hours later, their reply asked for his PayPal transaction code so they could process the request.
In the end, Frederico got his money back and resolved to wait until he was physically in Molvania before attempting to buy another MolvaPass. We can only hope he rang in the new year with sanity intact.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Julian Miles, Staff Writer Joey looks around at the crowd. “I see we’ve some new faces tonight. Thanks for coming.” He presses his palms flat on the table. “You’ve done what each of us has done at some point in the last few years: you’ve realised there’s something deeply wrong with our world. Those […]
Another short status update of what happened on my side last
month. Larger blocks besides the Phosh 0.47 release are on screen
keyboard and cell broadcast improvements, work on separate volume
streams, the switch of phoc to wlroots 0.19.0 and effort to make
Phosh work on Debian's upcoming stable release (Trixie) out of the
box. Trixie will ship with Phosh 0.46, if you want to try out 0.47
you can fetch it from Debian's experimental suite.
Standardize audio stream roles (MR). Otherwise we'll have a hard time
with e.g. WirePlumber's role-based policy linking, as apps might use all kinds of types.
Reviews
This is not code by me but reviews of other people's code. The list is
(as usual) slightly incomplete. Thanks for the contributions!
I have bought myself an expensive ARM64 workstation, the System 76 Thelio Astra that I intend to use as my main desktop computer for the next 15 years, running Debian.
The box is basically a server motherboard repurposed in a good desktop chassis. In Europe it seems you can order similar ready systems here.
The hardware is well supported by Debian 12 and Debian testing. I had some initial issues with graphics, due to the board being designed for server use, but I am solving these as we go.
Annoyances I got so far:
When you power on the machine using the power supply switch, you have to wait for the BMC to finish its startup sequence before the front power button does anything. As starting the BMC can take 90 seconds, I initially thought the machine was dead on arrival.
The default graphical output is redirected to the BMC Serial over LAN, which means that if you want to install Debian using an attached display, you need to force the output to the attached display by passing console=tty0 as an installer parameter.
Finally the Xorg Nouveau driver does not work with the Nvidia A400 GPU I got with the machine.
After passing nomodeset as a kernel parameter, I can force Xorg to use an unaccelerated framebuffer, which at least displays something. I passed this parameter to the installer, so that I could install in graphical mode.
The driver from Nvidia works, but I’d like very much to get Nouveau running.
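For what it's worth, the usual Debian way to make such a parameter permanent (a generic sketch, not taken from the System76 guide) is to add it to GRUB's default command line and regenerate the configuration:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
# then apply the change
sudo update-grub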
Ugly point
A server motherboard, we said. This means there is NO suspend to RAM: you have to power off if you don't want to keep the machine on all the time.
As the boot sequence is long (server board again), I am pondering setting a startup time in the UEFI firmware to turn the box on at specific usage times.
Good points
The firmware of the machine is a standard UEFI, which means you can use the Debian arm64 installer on a USB stick straight away, without any kind of device tree / bootloader fiddling.
The three NICs, Wi-Fi and Bluetooth were all recognized on first boot.
I was afraid the machine would be loud. However, it is quiet: you hear the humming of a fan, but it is quieter than most desktops I have owned, from the Atari TT to an all-in-one Lenovo M92z I used for 10 years. I am certainly not a hardware and cooling specialist, but it seems to me the quietness comes from slowly rotating but very large fans.
Thanks to the clean design of Linux and Debian, thousands of packages work correctly on ARM64, starting with the GNOME desktop environment and Firefox.
The documentation from system76 is fine, their Ubuntu 20.04 setup guide was helpful to understand the needed parameters mentioned above.
Update: The display is working correctly with the nouveau driver after installing the non-free Nvidia firmware. See the Debian wiki.
Author: Aubrey Williams I’ve been looking for work for months now. After the chip company got all-new machinery, the bean-counters did a review, and I was one of the names that got a red strikethrough. I can’t live on redundancy forever, and I’m not poor enough to get a rare welfare payment, so I need […]
1. Please don't use the TACO slur. It may amuse you and irk your enemy, sure. But this particular mockery has one huge drawback. It might taunt him into not backing down ('chickening out') some time when it's really needed, in order to save all our lives. So... maybe... grow up and think tactics?
A far more effective approach is to hammer hypocrisy!
Yeah, sure. Many have tried that. Though never with the relentless consistency that cancels their tactic of changing the subject.
I've never seen it done with the kind of harsh repetitive simplicity that I recommended in Polemical Judo. Repetitive simplicity is the tactic that the Foxites perfected! As when all GOPpers repeat the same party line all together - like KGB metronomes - all on the same morning.
And hence...
2. ... and hence, here is a litany of hypocrisy and poor memory that is capsulated enough to be shouted!
These are challenges that might reach a few of your getting-nervous uncles, especially as a combined list!
Ten years ago, Donald Trump promised proof that Barack Obama was born in Kenya.
“Soon! The case is water-tight and ready. I'll present it next week!” The same promise got repeated, week after week, month after month. And sure, his dittohead followers relished not facts, but the hate mantra, so they never kept track...
Also ten years ago Beck and Hannity etc. declared "George Soros personally toppled eight foreign governments!" (Actually, it's sort of true!) They promised to list those eight Soros-toppled victims! Only they never did. Because providing that list would have left Fox a smoldering ruin.
Nine years ago, running against H Clinton, Donald Trump declared "I will build a Big Beautiful WALL! From sea to shining sea."
Funny how he never asked his GOP-run Congress, later, for the money. And he still hasn't. Clinton and Obama each built more fences & surveillance systems to control the border than Trump ever did.
Also nine years ago: “You’ll never see me on a golf course, I’ll be working so hard for you!” Um...
Eight years ago - after inauguration and taking over the US government, he vowed: “Within weeks the indictments will roll in a great big wave. You’ll see the Obama Administration was the most corrupt ever!”
(Real world: there were zero indictments of the most honest and least blemished national administration in all of human history. Bar none. In fact, grand juries - consisting mostly of white retirees in red states - have indicted FORTY TIMES as many high Republicans as Democrats. Care to offer wager stakes?)
Also eight years ago, his 1st foreign guests in the White House - Lavrov and Kisliak - giggled with him ecstatically (see below), thinking their KGB tricks had captured the USA. Alas for Putin's lads, it took them 8 more years.
Seven years ago, ol’ Two Scoops promised a “terrific health care bill for everyone!” to replace ‘horrible Obamacare!’ And repeatedly for the next six years he declared “You’ll see it in two weeks!” And then... in 2 weeks. And then... in 2 weeks. And then in 2 weeks… twenty... fifty more times.
Also seven years ago: "Kim Jong Un and I fell in love!" (see above).
Six years ago, Fox “News” declared in court “we don’t do news, we are an entertainment company,” in order to writhe free of liability and perjury for oceans of lies. And still Fox had to pay $150 million.
Five years ago Trump’s son-in-law was “about to seal the deal on full peace in the Middle East!”
Four years ago, Don promised “Absolute proof the election was stolen by Biden and the dems!"
Howl after howl by Foxite shills ensued, and yet, not one scintilla of credible evidence was ever presented. While blowhards and blockheads fulminated into secessionist fury, all courts – including many GOP appointed judges - dismissed every 'case' as ludicrous, and several of them fined Trumpist shriekers for frivolous lying. Oh, the screeches and spumes! But not…one…shred of actual evidence. Ever.
Three years ago, three different GOP Congressmen alluded to or spoke of how sex orgies are rife among top DC Republicans. And two of them alluded to resulting blackmail.
Trump demanded “release the Epstein Files!”... then filed every lawsuit that his lawyers could concoct, in order to prevent it. And to protect an ocean of NDAs.
Oh, and he promised “Great revelations!” on UFOs and the JFK assassination, just as soon as he got back in office. Remember that? Disappointed, a little? And Epstein's pal is still protected.
Two years ago, Paul Ryan and Mitt Romney and even Mitch McConnell were hinting at a major push to reclaim the Republican Party - or at least a vestigially non-traitor part of it - from the precipice where fanaticism and blackmail and treason had taken it.
If necessary - (it was said) - they would form a new, Real Republican Party, where a minority of decent adults remaining in the GOP 'establishment' might find refuge and begin rebuilding.
Only it seems that crown prince Ryan & co. chickened out, as he always has... RACO.
One year ago... actually less... the Economist offered this cover plus detailed stats, showing what always happens. That by the end of every Democratic administration, most things - certainly the economy and yes, deficits - are better. And they always get worse across the span of GOP admins. Care to bet this time?
Alas, now the bitter laughingstock of the world, deliberately immolating the universities and science and professions that truly Made America Great.
There's your year-by-year Top Ten Hypocrisies countdown. It's worth a try, to see if hammering the same things over and over - which worked so well for the Foxites - might work for our side as well.
Oh, sure. Those aren’t my paramount complaints against Putin’s lackey and his shills.
My main gripe is the one thing that unites them all -- Trump’s oligarchs with foreign enemies and with MAGA groundlings.
That one goal? Shared hatred of every single fact-using profession, from science and civil service to the FBI/intel/military officer corps who won the Cold War and the War on Terror…
... the very ones standing between YOU and a return to feudal darkness.*
These reminder samplers of promises never kept are still valid. They could be effective if packaged properly. And will someone please show me who – in this wilderness – is pointing at them?
== Final lagniappe... a reminder of the most loathsome of all... ==
* And yeah... here again in the news is the would-be Machiavelli/Wormtongue who flatter-strokes the ingrate, would-be lords who are seeking to betray the one renaissance that gave them everything they have.
Okay, I was planning to finish with a riff (again) on teleologies or notions of TIME. Very different notions that are clutched by the far-left, by today's entire right, and by the beleaguered liberal/middle.
Is there a best path to getting both individuals and societies to behave honestly and fairly?
That goal -- attaining fact-based perception -- was never much advanced by the ‘don’t lie’ commandments of finger-wagging moralists and priests.
Sure, for 6000 years, top elites preached and passed laws against lies and predation... only to become the top liars and self-deceivers, bringing calamities down upon the nations and peoples that they led.
Laws can help. But the ‘essential trick’ that we’ve gradually become somewhat good at is reciprocal accountability (RA)… keeping an eye on each other laterally and speaking up when we see what we perceive as mistakes.
It was recommended by Pericles around 300 BCE… then later by Adam Smith and the founders of our era. Indeed, humanity only ever found one difficult but essential trick for getting past our human yen for lies and delusion.
Yeah, sometimes it’s the critic who is wrong! Still, one result is a system that’s open enough to spot most errors – even those by the mighty – and criticize them (sometimes just in time and sometimes too late) so that many get corrected. We aren’t yet great at it! Though better than all prior generations. And at the vanguard in this process is science.
Sure, scientists are human and subject to the same temptations to self-deceive or even tell lies. In training*, we are taught to recite the sacred catechism of science: “I might be wrong!” That core tenet – plus piles of statistical and error-checking techniques – made modern science different – and vastly more effective (and less hated) -- than all or any previous priesthoods. Still, we remain human. And delusion in science can have weighty consequences.
(*Which may help explain the oligarchy's current all-out war against science and universities.)
The essay's author notes, “Science has a fraud problem. Highly cited research is often based on faked data, which causes other researchers to pursue false leads. In medical research, the time wasted by followup studies can delay the discovery of effective treatments for serious diseases, potentially causing millions of lives to be lost.”
As I said: that’s an exaggeration – one that feeds into today’s Mad Right in its all-out war vs every fact-using profession. (Not just science, but also teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on Terror.) The examples that he cites were discovered and denounced BY science! And the rate of falsehood is orders of magnitude less than in any other realm of human endeavor.
Still, the essay is worth reading for its proposed solution. Which boils down to do more reciprocal accountability, only do it better!
The proposal would start with the powerful driver of scientific RA – the fact that most scientists are among the most competitive creatures that this planet ever produced – nothing like the lemming, paradigm-hugger disparagement-image that's spread by some on the far-left and almost everyone on today’s entire gone-mad right.
Only this author proposes we then augment that competitiveness with whistle blower rewards, to incentivize the cross-checking process with cash prizes.
Do you know the “hype cycle curve”? That’s an observational/pragmatic correlation tool devised by Gartner in the 90s, for how new technologies often attract heaps of zealous attention, followed by a crash of disillusionment, when even the most promising techs encounter obstacles to implementation, and many just prove wrong. This trough is followed, in a few cases, by a more grounded rise in solid investment, as productivity takes hold. (It happened repeatedly with railroads and electricity.) The inimitable Sabine Hossenfelder offers a podcast about this, using recent battery tech developments as examples.
The takeaways: yes, it seems that some battery techs may deliver major good news pretty soon. And remember this ‘hype cycle’ thing is correlative, not causative. It has almost no predictive utility in individual cases.
But the final take-away is also important. That progress IS being made! Across many fronts and very rapidly. And every single thing you are being told about the general trend toward sustainable technologies by the remnant, withering denialist cult is a pants-on-fire lie.
Take this jpeg I just copied from the newsletter of Peter Diamandis, re: the rapidly maturing tech of perovskite based solar cells, which have a theoretically possible efficiency of 66%, double that of silicon.
(And many of you first saw the word “perovskite” in my novel Earth, wherein I pointed out that most high-temp superconductors take that mineral form… and so does most of the Earth’s mantle. Put those two together! As I did, in that novel.)
Do subscribe to Peter’s Abundance Newsletter, as an antidote to the gloom that’s spread by today’s entire right and much of today’s dour, farthest-fringe-left. The latter are counter-productive sanctimony junkies, irritating but statistically unimportant as we make progress without much help from them.
The former are a now a science-hating treason-cult that’s potentially lethal to our civilization and world and our children. And for those neighbors of ours, the only cure will be victory – yet again, and with malice toward none – by the Union side in this latest phase of our recurring confederate fever.
== A final quirky thought ==
Has anyone else noticed how many traits of AI chat/image-generation etc - including the delusions, the weirdly logical illogic, and counter-factual internal consistency - are very similar to dreams?
Addendum: When (seldom) a dream is remembered well, the narrative structure can be recited and recorded. 100 years of freudian analysts have a vast store of such recitations that could be compared to AI-generated narratives. Somebody unleash the research!
It bugs me: all the US civil servants making a 'gesture' of resigning, when they are thus undermining the standing of the Civil Service Act, under which they can demand to be fired only for cause. Better to stay and work to rule, stymieing the loony political appointees, as in YES, MINISTER.
Or moronic media who are unable to see that most of the firings are for show, to distract from the one set that matters to the oligarchs. Ever since 2021 they have been terrified of the Pelosi bill that fully funded the starved and bedraggled IRS for the 1st time in 30 years. The worst oligarchs saw jail - actual jail - looming on the horizon and are desperate to cripple any looming audits. All the other 'doge' attacks have that underlying motive, to distract from what foreign and domestic oligarchs care about.
Weakening the American Pax - which gave humanity by far its greatest & best era - IS the central point. Greenland is silliness, of course. The Mercator projection makes DT think he'd be making a huge Louisiana Purchase. But he's too cheap to make the real deal... offer each Greenland native $1 million. Actually, just 55% of the voters. That'd be $20 Billion. Heck, it's one of the few things where I hope he succeeds. Carve his face on a dying glacier.
Those mocking his Canada drool are fools. Sure, it's dumb and Canadians want no part of it. But NO ONE I've seen has simply pointed out that Canada has ten provinces and three territories, all with more population than Greenland. Eight of ten would be blue and the other two are Eisenhower or Reagan red and would tire of DT, fast. So, adding Greenland, we have FOURTEEN new states, none of whom would vote for today's Putin Party. That one fact would shut down MAGA yammers about Canada instantly.
Ukraine is simple: Putin is growing desperate and is demanding action from his puppet. I had fantasized that Trump might now feel so safe that he could ride out any blackmail kompromat that Vlad is threatening him with. But it's pretty clear that KGB blackmailers run the entire GOP.
Author: Eva C. Stein After the service, they didn’t speak much. They walked through the old arcade – a fragment of the city’s former network. The glass canopy had long since shattered. Bio-moss cushioned the broken frames. Vines, engineered to reclaim derelict structures, crept along the walls. Mae’s jacket was too thin for the chill […]
Have you ever found yourself in the situation where you had no or
anonymized logs and still wanted to figure out where your traffic was
coming from?
Or you have multiple upstreams and are looking to see if you can save
fees by getting into peering agreements with some other party?
Or your site is getting heavy load but you can't pinpoint it on a
single IP and you suspect some amoral corporation is training their
degenerate AI on your content with a bot army?
(You might be getting onto something there.)
If that rings a bell, read on.
TL;DR:
... or just skip the cruft and install asncounter:
pip install asncounter
Also available in Debian 14 or later, or possibly in Debian 13
backports (soon to be released) if people are interested. Then feed it
some traffic:
tcpdump -q -i eth0 -n -Q in "tcp and tcp[tcpflags] & tcp-syn != 0 and (port 80 or port 443)" | asncounter --input-format=tcpdump --repl
Read on for why this matters, and why I wrote yet another weird tool
(almost) from scratch.
Background and manual work
This is a tool I've been dreaming of for a long, long time. Back in
2006, at Koumbit a colleague had set up TAS ("Traffic
Accounting System", "Система учета трафика" in Russian, apparently), a
collection of Perl scripts that would do per-IP accounting. It was
pretty cool: it would count bytes per IP address and, from that, you
could do analysis. But the project died, and it was kind of bespoke.
Fast forward twenty years, and I find myself fighting off bots at the
Tor Project (the irony...), with our GitLab suffering pretty bad
slowdowns (see issue tpo/tpa/team#41677 for the latest public
issue, the juicier one is confidential, unfortunately).
(We did have some issues caused by overloads in CI, as we host, after
all, a fork of Firefox, which is a massive repository, but the
applications team did sustained, awesome work to fix issues on that
side, again and again (see tpo/applications/tor-browser#43121 for
the latest, and tpo/applications/tor-browser#43121 for some
pretty impressive correlation work, I work with really skilled
people). But those issues, I believe were fixed.)
So I had the feeling it was our turn to get hammered by the AI
bots. But how do we tell? I could tell something was hammering at
the costly /commit/ and (especially costly) /blame/ endpoint. So
at first, I pulled out the trusted awk, sort | uniq -c | sort -n |
tail pipeline I am sure others have worked out before:
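Something along these lines (the log path is a placeholder; adjust the field number to wherever your log format puts the client IP):
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -n | tail -10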
For people new to this, that pulls the first field out of web server
log files, sort the list, counts the number of unique entries, and
sorts that so that the most common entries (or IPs) show up first,
then show the top 10.
That, in other words, answers the question of "which IP address visits
this web server the most?" Based on this, I found a couple of IP
addresses that looked like Alibaba. I had already addressed an abuse
complaint to them (tpo/tpa/team#42152) but never got a response,
so I just blocked their entire network blocks, rather violently:
for cidr in 47.240.0.0/14 47.246.0.0/16 47.244.0.0/15 47.235.0.0/16 47.236.0.0/14; do
iptables-legacy -I INPUT -s $cidr -j REJECT
done
That made Ali Baba and his forty thieves (specifically their
AL-3 network) go away, but our load was still high, and I was
still seeing various IPs crawling the costly endpoints. And this time,
it was hard to tell who they were: you'll notice all the Alibaba IPs
are inside the same 47.0.0.0/8 prefix. Although it's not a /8
itself, it's all inside the same prefix, so it's visually easy to
pick it apart, especially for a brain like mine who's stared too long
at logs flowing by too fast for their own mental health.
What I had then was different, and I was tired of doing the stupid
thing I had been doing for decades at this point. I had
stumbled upon pyasn recently (in January, according to my notes)
and somehow found it again, and thought "I bet I could write a quick
script that loops over IPs and counts IPs per ASN".
(Obviously, there are lots of other tools out there for that kind of
monitoring. Argos, for example, presumably does this, but it's a kind
of a huge stack. You can also get into netflows, but there's serious
privacy implications with those. There are also lots of per-IP
counters like promacct, but that doesn't scale.
Or maybe someone already had solved this problem and I just wasted a
week of my life, who knows. Someone will let me know, I hope, either
way.)
ASNs and networks
A quick aside, for people not familiar with how the internet
works. People that know about ASNs, BGP announcements and so on can
skip.
The internet is the network of networks. It's made of multiple
networks that talk to each other. The way this works is there is a
Border Gateway Protocol (BGP), a relatively simple TCP-based protocol,
that the edge routers of those networks use to announce to each other
which networks they manage. Each of those networks is called an
Autonomous System (AS) and has an AS number (ASN) to uniquely identify
it. Just like IP addresses, ASNs are allocated by IANA and local
registries. They're pretty cheap and useful: if you like running your
own routers, get one.
When you have an ASN, you'll use it to, say, announce to your BGP
neighbors "I have 198.51.100.0/24 over here", and the others might
say "okay, and I have 216.90.108.31/19 over here, and I know of this
other ASN over there that has 192.0.2.1/24 too!" And gradually, those
announcements flood the entire network, and you end up with each BGP
router having a routing table of the global internet, with a map of
which network block, or "prefix", is announced by which ASN.
It's how the internet works, and it's a useful thing to know, because
it's what, ultimately, makes an organisation responsible for an IP
address. There are "looking glass" tools like the one provided by
routeviews.org which allow you to effectively run "trace routes"
(but not the same as traceroute, which actively sends probes from
your location): type an IP address into that form to fiddle with it. You
will end up with an "AS path", the way to get from the looking glass
to the announced network. But I digress, and that's kind of out of
scope.
Point is, internet is made of networks, networks are autonomous
systems (AS) and they have numbers (ASNs), and they announced IP
prefixes (or "network blocks") that ultimately tells you who is
responsible for traffic on the internet.
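As an aside, if you just want to check a single address by hand, one
common trick (an assumption on my part, not something asncounter
relies on: the Team Cymru whois service) is:
# map one IP address to its ASN, announced prefix and AS name
whois -h whois.cymru.com " -v 203.0.113.1"
But doing that for thousands of addresses is obviously not going to
fly, hence the bulk, offline lookups asncounter does with pyasn's
downloaded data files.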
Introducing asncounter
So my goal was to get from "lots of IP addresses" to "list of ASNs",
possibly also the list of prefixes (because why not). Turns out pyasn
makes that really easy. I managed to build a prototype in probably
less than an hour, just look at the first version, it's 44 lines
(sloccount) of Python, and it works, provided you have already
downloaded the required datafiles from routeviews.org. (Obviously, the
latest version is longer at close to 1000 lines, but it downloads the
data files automatically, and has many more features).
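The heart of it, as far as pyasn is concerned, is a single lookup
call that returns an (ASN, prefix) tuple; a minimal sketch, with a
placeholder data file name and documentation IP:
python3 -c "import pyasn; db = pyasn.pyasn('ipasn.dat'); print(db.lookup('203.0.113.1'))"
# prints something like (64496, '203.0.113.0/24')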
The way the first prototype (and later versions too, mostly) worked is
that you feed it a list of IP addresses on standard input, it looks up
the ASN and prefix associated with the IP, and increments a counter
for those, then prints the result.
That showed me something like this:
root@gitlab-02:~/anarcat-scripts# tcpdump -q -i eth0 -n -Q in "(udp or tcp)" | ./asncounter.py --tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
INFO: collecting IPs from stdin, using datfile ipasn_20250523.1600.dat.gz
INFO: loading datfile /root/.cache/pyasn/ipasn_20250523.1600.dat.gz...
INFO: loading /root/.cache/pyasn/asnames.json
ASN count AS
136907 7811 HWCLOUDS-AS-AP HUAWEI CLOUDS, HK
[----] 359 [REDACTED]
[----] 313 [REDACTED]
8075 254 MICROSOFT-CORP-MSN-AS-BLOCK, US
[---] 164 [REDACTED]
[----] 136 [REDACTED]
24940 114 HETZNER-AS, DE
[----] 98 [REDACTED]
14618 82 AMAZON-AES, US
[----] 79 [REDACTED]
prefix count
166.108.192.0/20 1294
188.239.32.0/20 1056
166.108.224.0/20 970
111.119.192.0/20 951
124.243.128.0/18 667
94.74.80.0/20 651
111.119.224.0/20 622
111.119.240.0/20 566
111.119.208.0/20 538
[REDACTED] 313
Even without ratios and a total count (which will come later), it was
quite clear that Huawei was doing something big on the server. At that
point, it was responsible for a quarter to half of the traffic on our
GitLab server or about 5-10 queries per second.
But just looking at the logs, or per IP hit counts, it was really hard
to tell. That traffic is really well distributed. If you look more
closely at the output above, you'll notice I redacted a couple of
entries except major providers, for privacy reasons. But you'll also
notice almost nothing is redacted in the prefix list, why? Because
all of those networks are Huawei! Their announcements are kind of
bonkers: they have hundreds of such prefixes.
Now, clever people in the know will say "of course they do, it's a
hyperscaler; just ASN 14618 (AMAZON-AES) there has way more
announcements, they have 1416 prefixes!" Yes, of course, but they are
not generating half of my traffic (at least, not yet). But even then:
this also applies to Amazon! This way of counting traffic is way
more useful for large scale operations like this, because you group by
organisation instead of by server or individual endpoint.
And, ultimately, this is why asncounter matters: it allows you to
group your traffic by organisation, the place you can actually
negotiate with.
Now, of course, that assumes those are entities you can talk with. I
have written to both Alibaba and Huawei, and have yet to receive a
response. I assume I never will. In their defence, I wrote in English,
perhaps I should have made the effort of translating my message in
Chinese, but then again English is the Lingua Franca of the
Internet, and I doubt that's actually the issue.
The Huawei and Facebook blocks
Another aside, because this is my blog and I am not looking for a
Pulitzer here.
So I blocked Huawei from our GitLab server (and before you tear your
shirt open: only our GitLab server, everything else is still
accessible to them, including our email server to respond to my
complaint). I did so 24h after emailing them, and after examining
their user agent (UA) headers. Boy that was fun. In a sample of 268
requests I analyzed, they churned out 246 different UAs.
At first glance, they looked legit, like:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36
Safari on a Mac, so far so good. But when you start digging, you
notice some strange things, like here's Safari running on Linux:
Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.457.0 Safari/534.3
Was Safari ported to Linux? I guess that's.. possible?
But here is Safari running on a 15 year old Ubuntu release (10.10):
Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Ubuntu/10.10 Chromium/12.0.702.0 Chrome/12.0.702.0 Safari/534.24
Speaking of old, here's Safari again, but this time running on Windows
NT 5.1, AKA Windows XP, released 2001, EOL since 2019:
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-CA) AppleWebKit/534.13 (KHTML like Gecko) Chrome/9.0.597.98 Safari/534.13
Really?
Here's Firefox 3.6, released 14 years ago, there were quite a lot of
those:
Mozilla/5.0 (Windows; U; Windows NT 6.1; lt; rv:1.9.2) Gecko/20100115 Firefox/3.6
I remember running those old Firefox releases, those were the days.
But to me, those look like entirely fake UAs, deliberately rotated to
make it look like legitimate traffic.
In comparison, Facebook seemed a bit more legit, in the sense that
they don't fake it. Most hits are from a crawler that, according to
its own documentation,
crawls the web for use cases such as training AI models or improving products by indexing content directly
From what I could tell, it was even respecting our rather liberal
robots.txt rules, in that it wasn't crawling the sprawling /blame/
or /commit/ endpoints, explicitly forbidden by robots.txt.
So I've blocked the Facebook bot in robots.txt and, amazingly, it
just went away. Good job Facebook, as much as I think you've given the
empire to neo-nazis, cause depression and genocide, you know how to
run a crawler, thanks.
Huawei was blocked at the web server level, with a friendly 429 status
code telling people to contact us (over email) if they need help. And
they don't care: they're still hammering the server, from what I can
tell, but then again, I didn't block the entire ASN just yet, just the
blocks I found crawling the server over a couple hours.
A full asncounter run
So what does a day in asncounter look like? Well, you start with a
problem, say you're getting too much traffic and want to see where
it's from. First you need to sample it. Typically, you'd do that with
tcpdump or tailing a log file:
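For example (a sketch along the lines of the commands quoted
elsewhere in this post; interface, log path and field number will
vary):
# live traffic
tcpdump -q -i eth0 -n -Q in "(udp or tcp)" | asncounter --input-format=tcpdump --repl
# or an existing log file
tail -F /var/log/nginx/access.log | awk '{print $1}' | asncounter --repl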
If you really get a lot of traffic, you might want to get a subset
of that to avoid overwhelming asncounter, it's not fast enough to do
multiple gigabit/second, I bet, so here's only incoming SYN IPv4
packets:
tcpdump -q -n -Q in "tcp and tcp[tcpflags] & tcp-syn != 0 and (port 80 or port 443)" | asncounter --input-format=tcpdump --repl
In any case, at this point you're staring at a process, just sitting
there. If you passed the --repl or --manhole arguments, you're
lucky: you have a Python shell inside the program. Otherwise, send
SIGHUP to the thing to have it dump the nice tables out:
pkill -HUP asncounter
Here's an example run:
> awk '{print $2}' /var/log/apache2/*access*.log | asncounter
INFO: using datfile ipasn_20250527.1600.dat.gz
INFO: collecting addresses from <stdin>
INFO: loading datfile /home/anarcat/.cache/pyasn/ipasn_20250527.1600.dat.gz...
INFO: finished reading data
INFO: loading /home/anarcat/.cache/pyasn/asnames.json
count percent ASN AS
12779 69.33 66496 SAMPLE, CA
3361 18.23 None None
366 1.99 66497 EXAMPLE, FR
337 1.83 16276 OVH, FR
321 1.74 8075 MICROSOFT-CORP-MSN-AS-BLOCK, US
309 1.68 14061 DIGITALOCEAN-ASN, US
128 0.69 16509 AMAZON-02, US
77 0.42 48090 DMZHOST, GB
56 0.3 136907 HWCLOUDS-AS-AP HUAWEI CLOUDS, HK
53 0.29 17621 CNCGROUP-SH China Unicom Shanghai network, CN
total: 18433
count percent prefix ASN AS
12779 69.33 192.0.2.0/24 66496 SAMPLE, CA
3361 18.23 None
298 1.62 178.128.208.0/20 14061 DIGITALOCEAN-ASN, US
289 1.57 51.222.0.0/16 16276 OVH, FR
272 1.48 2001:DB8::/48 66497 EXAMPLE, FR
235 1.27 172.160.0.0/11 8075 MICROSOFT-CORP-MSN-AS-BLOCK, US
94 0.51 2001:DB8:1::/48 66497 EXAMPLE, FR
72 0.39 47.128.0.0/14 16509 AMAZON-02, US
69 0.37 93.123.109.0/24 48090 DMZHOST, GB
53 0.29 27.115.124.0/24 17621 CNCGROUP-SH China Unicom Shanghai network, CN
Those numbers are actually from my home network, not GitLab. Over
there, the battle still rages on, but at least the vampire bots are
banging their heads against the solid Nginx wall instead of eating the
fragile heart of GitLab. We had a significant improvement in latency
thanks to the Facebook and Huawei blocks... Here are the "workhorse
request duration stats" for various time ranges, 20h after the block:
range  mean   max    stdev
20h    449ms  958ms  39ms
7d     1.78s  5m     14.9s
30d    2.08s  3.86m  8.86s
6m     901ms  27.3s  2.43s
We went from a two-second mean to 500ms! And look at that standard
deviation: 39ms! It was ten seconds before! I doubt we'll keep it that
way very long, but for now it feels like I won a battle, and I didn't
even have to set up anubis or go-away, although I suspect that will
unfortunately come.
Note that asncounter also supports exporting Prometheus metrics, but
you should be careful with this, as it can lead to cardinality
explosion, especially if you track by prefix (which can be disabled
with --no-prefixes).
Folks interested in more details should read the fine manual for
more examples, usage, and discussion. It shows, among other things,
how to effectively block lots of networks from Nginx, aggregate
multiple prefixes, block entire ASNs, and more!
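As a rough illustration of the Nginx side (this is not taken from the manual, just a generic sketch assuming a bad-prefixes.txt file with one CIDR per line), such a blocklist can be generated with a bit of shell:
while read -r prefix; do
    printf 'deny %s;\n' "$prefix"
done < bad-prefixes.txt > /etc/nginx/conf.d/blocklist.conf
Nginx's deny directive accepts CIDR prefixes, so the generated file can simply be included at the http or server level and reloaded.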
So there you have it: I now have the tool I wish I had 20 years
ago. Hopefully it will stay useful for another 20 years, although I'm
not sure we'll still have an internet in 20 years.
I welcome constructive feedback, "oh no you rewrote X", Grafana
dashboards, bug reports, pull requests, and "hell yeah"
comments. Hacker News, let it rip, I know you can give me another
juicy quote for my blog.
This work was done as part of my paid work for the Tor Project,
currently in a fundraising drive, give us money if you like what you
read.
I previously wrote a blog post Why Clusters Usually Don’t Work [2] and I believe that all the points there are valid today – and possibly exacerbated by clusters getting less direct use as clustering is increasingly being done by hyperscale providers.
Take a basic need, a MySQL or PostgreSQL database for example. You want it to run and basically do the job and to have good recovery options. You could set it up locally, run backups, test the backups, have a recovery plan for failures, maybe have a hot-spare server if it’s really important, have tests for backups and hot-spare server, etc. Then you could have documentation for this so if the person who set it up isn’t available when there’s a problem they will be able to find out what to do. But the hyperscale option is to just select a database in your provider and have all this just work. If the person who set it up isn’t available for recovery in the event of failure the company can just put out a job advert for “person with experience on cloud company X” and have them just immediately go to work on it.
I don’t like hyperscale providers as they are all monopolistic companies that engage in anti-competitive actions. Google should be broken up: Android development and the Play Store should be separated from Gmail etc., which should be separated from search and adverts, and all of them should be separated from the GCP cloud service. Amazon should be broken up: running the Amazon store should be separated from selling items on the store, which should be separated from running a video on demand platform, and all of them should be separated from the AWS cloud. Microsoft should be broken up: OS development should be separated from application development, all of that should be separated from cloud services (Teams and Office 365), and everything else should be separate from the Azure cloud system.
But the cloud providers offer real benefits at small scale. Running a MySQL or PostgreSQL database for local services is easy, it’s a simple apt command to install it and then it basically works. Doing backup and recovery isn’t so easy. One could say “just hire competent people” but if you do hire competent people do you want them running MySQL databases etc or have them just click on the “create mysql database” option on a cloud control panel and then move on to more important things?
The Debian packaging of OpenStack looks interesting [4]; it’s a complete setup for running your own hyperscale cloud service. For medium and large organisations running OpenStack could be a good approach. But for small organisations it’s cheaper and easier to just use a cloud service to run things.
The issue of when to run things in-house and when to put them in the cloud is very complex. I think that if the organisation is going to spend less money on cloud services than on the salary of one sysadmin then it’s probably best to have things in the cloud. When cloud costs start to exceed the salary of one person who manages systems then having them spend the extra time and effort to run things locally starts making more sense. There is also an opportunity cost in having a good sysadmin work on the backups for all the different systems instead of letting the cloud provider just do it. Another possibility of course is to run things in-house on low end hardware and just deal with the occasional downtime to save money. Knowingly choosing less reliability to save money can be quite reasonable as long as you have considered the options and all the responsible people are involved in the discussion.
The one situation that I strongly oppose is having hyperscale services set up by people who don’t understand them. Running a database server on a cloud service because you don’t want to spend the time managing it is a reasonable choice in many situations. Running a database server on a cloud service because you don’t understand how to set up a database server is never a good choice. While the cloud services are quite resilient, there are still ways of breaking the overall system if you don’t understand it. Also, while it is quite possible for someone to know how to develop for databases (including avoiding SQL injection etc.) but be unable to set up a database server, that’s probably not going to be common; if someone can’t set one up (a generally easy task) then they probably can’t do the harder task of making it secure.
High-roller
Matthew D.
fears Finance.
"This is from our corporate expense system. Will they flag my expenses in the April-December quarter as too high? And do we really need a search function for a list of 12 items?"
Tightfisted
Adam R.
begrudges a trifling sum.
"The tipping culture is getting out of hand. After I chose 'Custom Tip'
for some takeout, they filled out the default tip with a few extra femtocents. What a rip!"
Cool Customer
Reinier B.
sums this up:
"I got some free B&J icecream a while back. Since one of them was
priced at €0.01, the other one obviously had to cost zero
point minus 1 euros to make a total of zero euro. Makes sense. Or
probably not."
An anonymous browniedad is ready to pack his poptart off for the summer.
"I know {First Name} is really excited for camp..."
Kudos on getting Mom to agree to that name choice!
Finally, another anonymous assembler's retrospective visualisation.
"CoPilot rendering a graphical answer of the semantics of a pointer.
Point taken. "
There's no error'd
here really, but I'm wondering how long before this kind of
wtf illustration lands somewhere "serious".
This approach of having 2 AI systems where one processes user input and the second performs actions on quarantined data is good and solves some real problems. But I think the bigger issue is the need to do this. Why not have a multi stage approach, instead of a single user input to do everything (the example given is “Can you send Bob the document he requested in our last meeting? Bob’s email and the document he asked for are in the meeting notes file”) you could have “get Bob’s email address from the meeting notes file” followed by “create a new email to that address” and “find the document” etc.
A major problem with many plans for ML systems is that they are based around automating relatively simple tasks. The example of sending an email based on meeting notes is a trivial task that’s done many times a day, but for which expressing it verbally isn’t much faster than doing it the usual way. The usual way of doing such things (manually finding the email address from the meeting notes etc.) can be accelerated without ML by having a “recent documents” access method that gets the notes, having the email address be a hot link to the email program (i.e. the word processor or note-taking program being able to call the MUA), having a “put all data objects of type X into the clipboard” operation (where X can be email address, URL, filename, or whatever), and maybe optimising the MUA UI. The problems that people are talking about solving via ML, by treating everything as text to be arbitrarily parsed, can in many cases be solved by having the programs dealing with the data know what they have and have support for calling system services accordingly.
The blog post suggests a problem of “user fatigue” from asking the user to confirm all actions; that is a real concern if the system is going to automate everything such that the user gives a verbal description of the problem and then says “yes” many times to confirm it. But if the user is at every step of the way pushing the process along (“take this email address”, “attach this file”) it won’t be a series of “yes” operations with a risk of saying “yes” once too often.
I think that one thing that should be investigated is better integration between services to allow working live on data. If in an online meeting someone says “I’ll work on task A please send me an email at the end of the meeting with all issues related to it” then you should be able to click on their email address in the meeting software to bring up the MUA to send a message and then just paste stuff in. The user could then not immediately send the message and clicking on the email address again would bring up the message in progress to allow adding to it (the behaviour of most MUAs of creating a new message for every click on a mailto:// URL is usually not what you desire). In this example you could of course use ALT-TAB or other methods to switch windows to the email, but imagine the situation of having 5 people in the meeting who are to be emailed about different things and that wouldn’t scale.
Another thing for the meeting example is that having a text chat for a video conference is a standard feature now and being able to directly message individuals is available in BBB and probably some other online meeting systems. It shouldn’t be hard to add a feature to BBB and similar programs to have each user receive an email at the end of the meeting with the contents of every DM chat they were involved in and have everyone in the meeting receive an emailed transcript of the public chat.
In conclusion I think that there are real issues with ML security and something like this technology is needed. But for most cases the best option is to just not have ML systems do such things. Also there is significant scope for improving the integration of various existing systems in a non-ML way.
Here’s my 68th monthly but brief update about the activities I’ve done in the F/L/OSS world.
Debian
This was my 77th month of actively contributing to Debian.
I became a DM in late March 2019 and a DD on Christmas ‘19! \o/
This month I’ve just been sort of MIA, mostly because of a combination of the Canonical engineering sprints in Frankfurt, a bit of vacation in Italy, and then being sick. So I didn’t really get much done in Debian this month.
Whilst I can’t give a full, detailed list of things I did (there’s so much and some of it might not be public…yet!), here’s a quick TL;DR of what I did:
Prepared for the engineering sprints in Frankfurt.
Delivered the Ubuntu knowledge sharing session during the sprints.
Released the first monthly snapshot of Ubuntu 25.10.
Got a recognition award for driving the Plucky Puffin release, nominated by Florent. \o/
Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
And Debian Extended LTS (ELTS) is its sister project, extending support to the buster, stretch, and jessie releases (+2 years after LTS support).
This was my 68th month as a Debian LTS and 55th month as a Debian ELTS paid contributor.
Due to a combination of the Canonical engineering sprints in Frankfurt, a bit of vacation in Italy, and then being sick, I was barely able to do (E)LTS work. So this month, I worked for only 1.00 hours for LTS and 0 hours for ELTS.
I did the following things:
[LTS] Attended the hourly LTS meeting on IRC. Summary here.
Author: David C. Nutt It was an alien invasion, not in the sense of “War of the Worlds” but more like what historians called the “British Invasion” but without the Beatles. What invaded us was close to five million overprivileged alien tourists, all here for one reason: to inhale us. No, this is no metaphor. […]
The U.S. government today imposed economic sanctions on Funnull Technology Inc., a Philippines-based company that provides computer infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was being used as a content delivery network that catered to cybercriminals seeking to route their traffic through U.S.-based cloud providers.
“Americans lose billions of dollars annually to these cyber scams, with revenues generated from these crimes rising to record levels in 2024,” reads a statement from the U.S. Department of the Treasury, which sanctioned Funnull and its 40-year-old Chinese administrator Liu Lizhi. “Funnull has directly facilitated several of these schemes, resulting in over $200 million in U.S. victim-reported losses.”
The Treasury Department said Funnull’s operations are linked to the majority of virtual currency investment scam websites reported to the FBI. The agency said Funnull directly facilitated pig butchering and other schemes that resulted in more than $200 million in financial losses by Americans.
Pig butchering is a rampant form of fraud wherein people are lured by flirtatious strangers online into investing in fraudulent cryptocurrency trading platforms. Victims are coached to invest more and more money into what appears to be an extremely profitable trading platform, only to find their money is gone when they wish to cash out.
The scammers often insist that investors pay additional “taxes” on their crypto “earnings” before they can see their invested funds again (spoiler: they never do), and a shocking number of people have lost six figures or more through these pig butchering scams.
KrebsOnSecurity’s January story on Funnull was based on research from the security firm Silent Push, which discovered in October 2024 that a vast number of domains hosted via Funnull were promoting gambling sites that bore the logo of the Suncity Group, a Chinese entity named in a 2024 UN report (PDF) for laundering millions of dollars for the North Korean state-sponsored hacking group Lazarus.
Silent Push found Funnull was a criminal content delivery network (CDN) that carried a great deal of traffic tied to scam websites, funneling the traffic through a dizzying chain of auto-generated domain names and U.S.-based cloud providers before redirecting to malicious or phishing websites. The FBI has released a technical writeup (PDF) of the infrastructure used to manage the malicious Funnull domains between October 2023 and April 2025.
A graphic from the FBI explaining how Funnull generated a slew of new domains on a regular basis and mapped them to Internet addresses on U.S. cloud providers.
Silent Push revisited Funnull’s infrastructure in January 2025 and found Funnull was still using many of the same Amazon and Microsoft cloud Internet addresses identified as malicious in its October report. Both Amazon and Microsoft pledged to rid their networks of Funnull’s presence following that story, but according to Silent Push’s Zach Edwards only one of those companies has followed through.
Edwards said Silent Push no longer sees Microsoft Internet addresses showing up in Funnull’s infrastructure, while Amazon continues to struggle with removing Funnull servers, including one that appears to have first materialized in 2023.
“Amazon is doing a terrible job — every day since they made those claims to you and us in our public blog they have had IPs still mapped to Funnull, including some that have stayed mapped for inexplicable periods of time,” Edwards said.
Amazon said its Amazon Web Services (AWS) hosting platform actively counters abuse attempts.
“We have stopped hundreds of attempts this year related to this group and we are looking into the information you shared earlier today,” reads a statement shared by Amazon. “If anyone suspects that AWS resources are being used for abusive activity, they can report it to AWS Trust & Safety using the report abuse form here.”
U.S. based cloud providers remain an attractive home base for cybercriminal organizations because many organizations will not be overly aggressive in blocking traffic from U.S.-based cloud networks, as doing so can result in blocking access to many legitimate web destinations that are also on that same shared network segment or host.
What’s more, funneling their bad traffic so that it appears to be coming out of U.S. cloud Internet providers allows cybercriminals to connect to websites from web addresses that are geographically close(r) to their targets and victims (to sidestep location-based security controls by your bank, for example).
Funnull is not the only cybercriminal infrastructure-as-a-service provider that was sanctioned this month: On May 20, 2025, the European Union imposed sanctions on Stark Industries Solutions, an ISP that materialized at the start of Russia’s invasion of Ukraine and has been used as a global proxy network that conceals the true source of cyberattacks and disinformation campaigns against enemies of Russia.
In May 2024, KrebsOnSecurity published a deep dive on Stark Industries Solutions that found much of the malicious traffic traversing Stark’s network (e.g. vulnerability scanning and password brute force attacks) was being bounced through U.S.-based cloud providers. My reporting showed how deeply Stark had penetrated U.S. ISPs, and that its co-founder for many years sold “bulletproof” hosting services that told Russian cybercrime forum customers they would proudly ignore any abuse complaints or police inquiries.
The homepage of Stark Industries Solutions.
That story examined the history of Stark’s co-founders, Moldovan brothers Ivan and Yuri Neculiti, who each denied past involvement in cybercrime or any current involvement in assisting Russian disinformation efforts or cyberattacks. Nevertheless, the EU sanctioned both brothers as well.
The EU said Stark and the Neculti brothers “enabled various Russian state-sponsored and state-affiliated actors to conduct destabilising activities including coordinated information manipulation and interference and cyber-attacks against the Union and third countries by providing services intended to hide these activities from European law enforcement and security agencies.”
As mine was the opening talk, we were still sorting out projector issues
when I started, so I forgot to set a timer and consequently ran out of
time like a newbie. It occurred to me that I could simply re-record the
talk in front of my slides just as I do for my STAT 447 students. So I sat down this
morning and did this, and the video is now online:
RcppDate wraps
the featureful date
library written by Howard
Hinnant for use with R. This header-only modern C++ library has been
in pretty wide-spread use for a while now, and adds to C++11/C++14/C++17
what is (with minor modifications) the ‘date’ library in C++20. The
RcppDate package
adds no extra R or C++ code and can therefore be a zero-cost dependency
for any other project; yet a number of other projects decided to
re-vendor it resulting in less-efficient duplication. Oh well. C’est
la vie.
This release syncs with upstream release 3.0.4 made yesterday which
contains a few PRs (including one by us) for
the clang++-20 changes some of which we already had in release
0.0.5. We also made a routine update to the continuous
integration.
There’s a new cybersecurity awareness campaign: Take9. The idea is that people—you, me, everyone—should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.
There’s a website—of course—and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.
First, the advice is not realistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; use a timer. Then think about how many links you click on and how many things you forward or reply to. Are we pausing for nine seconds after every text message? Every Slack ping? Does the clock reset if someone replies midpause? What about browsing—do we pause before clicking each link, or after every page loads? The logistics quickly become impossible. I doubt they tested the idea on actual users.
Second, it largely won’t help. The industry should know because we tried it a decade ago. “Stop. Think. Connect.” was an awareness campaign from 2016, by the Department of Homeland Security—this was before CISA—and the National Cybersecurity Alliance. The message was basically the same: Stop and think before doing anything online. It didn’t work then, either.
Take9’s website says, “Science says: In stressful situations, wait 10 seconds before responding.” The problem with that is that clicking on a link is not a stressful situation. It’s normal, one that happens hundreds of times a day. Maybe you can train a person to count to 10 before punching someone in a bar but not before opening an attachment.
And there is no basis in science for it. It’s a folk belief, all over the Internet but with no actual research behind it—like the five-second rule when you drop food on the floor. In emotionally charged contexts, most people are already overwhelmed, cognitively taxed, and not functioning in a space where rational interruption works as neatly as this advice suggests.
Pausing Adds Little
Pauses help us break habits. If we are clicking, sharing, linking, downloading, and connecting out of habit, a pause to break that habit works. But the problem here isn’t habit alone. The problem is that people aren’t able to differentiate between something legitimate and an attack.
The Take9 website says that nine seconds is “time enough to make a better decision,” but there’s no use telling people to stop and think if they don’t know what to think about after they’ve stopped. Pause for nine seconds and… do what? Take9 offers no guidance. It presumes people have the cognitive tools to understand the myriad potential attacks and figure out which one of the thousands of Internet actions they take is harmful. If people don’t have the right knowledge, pausing for longer—even a minute—will do nothing to add knowledge.
The three-part suspicion, cognition, and automaticity model (SCAM) is one way to think about this. The first is lack of knowledge—not knowing what’s risky and what isn’t. The second is habits: people doing what they always do. And third, using flawed mental shortcuts, like believing PDFs to be safer than Microsoft Word documents, or that mobile devices are safer than computers for opening suspicious emails.
These pathways don’t always occur in isolation; sometimes they happen together or sequentially. They can influence each other or cancel each other out. For example, a lack of knowledge can lead someone to rely on flawed mental shortcuts, while those same shortcuts can reinforce that lack of knowledge. That’s why meaningful behavioral change requires more than just a pause; it needs cognitive scaffolding and system designs that account for these dynamic interactions.
A successful awareness campaign would do more than tell people to pause. It would guide them through a two-step process. First trigger suspicion, motivating them to look more closely. Then, direct their attention by telling them what to look at and how to evaluate it. When both happen, the person is far more likely to make a better decision.
This means that pauses need to be context specific. Think about email readers that embed warnings like “EXTERNAL: This email is from an address outside your organization” or “You have not received an email from this person before.” Those are specifics, and useful. We could imagine an AI plug-in that warns: “This isn’t how Bruce normally writes.” But of course, there’s an arms race in play; the bad guys will use these systems to figure out how to bypass them.
This is all hard. The old cues aren’t there anymore. Current phishing attacks have evolved from those older Nigerian scams filled with grammar mistakes and typos. Text message, voice, or video scams are even harder to detect. There isn’t enough context in a text message for the system to flag. In voice or video, it’s much harder to trigger suspicion without disrupting the ongoing conversation. And all the false positives, when the system flags a legitimate conversation as a potential scam, work against people’s own intuition. People will just start ignoring their own suspicions, just as most people ignore all sorts of warnings that their computer puts in their way.
Even if we do this all well and correctly, we can’t make people immune to social engineering. Recently, both cyberspace activist Cory Doctorow and security researcher Troy Hunt—two people who you’d expect to be excellent scam detectors—got phished. In both cases, it was just the right message at just the right time.
It’s even worse if you’re a large organization. Security isn’t based on the average employee’s ability to detect a malicious email; it’s based on the worst person’s inability—the weakest link. Even if awareness raises the average, it won’t help enough.
Don’t Place Blame Where It Doesn’t Belong
Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a pause and making a better decision. What’s not said, but certainly implied, is that if they don’t take that pause and don’t make those better decisions, then they’re to blame when the attack occurs.
That’s simply not true, and its blame-the-user message is one of the worst mistakes our industry makes. Stop trying to fix the user. It’s not the user’s fault if they click on a link and it infects their system. It’s not their fault if they plug in a strange USB drive or ignore a warning message that they can’t understand. It’s not even their fault if they get fooled by a look-alike bank website and lose their money. The problem is that we’ve designed these systems to be so insecure that regular, nontechnical people can’t use them with confidence. We’re using security awareness campaigns to cover up bad system design. Or, as security researcher Angela Sasse first said in 1999: “Users are not the enemy.”
We wouldn’t accept that in other parts of our lives. Imagine Take9 in other contexts. Food service: “Before sitting down at a restaurant, take nine seconds: Look in the kitchen, maybe check the temperature of the cooler, or if the cooks’ hands are clean.” Aviation: “Before boarding a plane, take nine seconds: Look at the engine and cockpit, glance at the plane’s maintenance log, ask the pilots if they feel rested.” This is obviously ridiculous advice. The average person doesn’t have the training or expertise to evaluate restaurant or aircraft safety—and we don’t expect them to. We have laws and regulations in place that allow people to eat at a restaurant or board a plane without worry.
But—we get it—the government isn’t going to step in and regulate the Internet. These insecure systems are what we have. Security awareness training, and the blame-the-user mentality that comes with it, are all we have. So if we want meaningful behavioral change, it needs a lot more than just a pause. It needs cognitive scaffolding and system designs that account for all the dynamic interactions that go into a decision to click, download, or share. And that takes real work—more work than just an ad campaign and a slick video.
This essay was written with Arun Vishwanath, and originally appeared in Dark Reading.
Nina's team has a new developer. They're not a junior developer, though Nina wishes they could replace this developer with a junior. Inexperience is better than whatever this Java code is.
We start by casting options into an array of Objects. That's already a code stench, but we actually don't even use the test variable and instead just redo the cast multiple times.
But worse than that, we cast to an array of object, access an element, and then cast that element to a collection type. I do not know what is in the options variable, but based on how it gets used, I don't like it. What it seems to be is a class (holding different options as fields) rendered as an array (holding different options as elements).
The new developer (ab)uses this pattern everywhere.
Author: K. Andrus Where was the best place to murder someone and get away with it? A question that had been fun to ponder, back when Albert had been at home accompanied by nobody else but a chilled glass of scotch, the comforting roar of a June snowstorm, and his most recent work-in-progress novel. Yet […]
Debian 13 "Trixie" full freeze has started 2025-05-17, so this is
a good time to take a look at some of the features, that this release
will bring. Here we will focus on packages related to XMPP, a.k.a.
Jabber.
XMPP is a universal communication protocol for instant messaging, push
notifications, IoT, WebRTC, and social applications. It has existed since
1999, originally called "Jabber", it has a diverse and active developers
community.
Clients
Dino, a modern XMPP client has been upgraded from 0.4.2 to
0.5.0
Dino now uses OMEMO encryption by default. It also supports
XEP-0447: Stateless File Sharing for unencrypted file
transfers. Users can now see preview images or other file details
before downloading the file. Multiple widgets are redesigned to be
compatible with mobile devices, e.g. running Mobian.
Kaidan, a simple and user-friendly Jabber/XMPP client is
upgraded from 0.8.0 to 0.12.2
Kaidan supports end-to-end encryption via OMEMO 2, Automatic Trust
Management and XMPP Providers. It has been migrated
to Qt 6 and many features have been added: XEP-0444: Message
Reactions, XEP-0461: Message Replies,
chat pinning, inline audio player, chat list filtering, local
message removal, etc.
Libervia is upgraded from 0.9.0~hg3993 to
0.9.0~hg4352
Among other features, it now also contains a gateway to ActivityPub,
e.g. to Mastodon.
Poezio, a console-based XMPP client, has been updated from 0.14
to 0.15.0
Better self-ping support. Use the system CA store by default.
Profanity, a console based XMPP client has been
upgraded from 0.13.1 to 0.15.0.
Add support for XEP-0054: vcard-temp, Improve MAM
support, show encryption for messages from history and handle alt+enter
as newline char.
Psi+, a Qt-based XMPP client (basic version) has been
upgraded from 1.4.554 to 1.4.1456
Servers
Prosŏdy, a lightweight extensible XMPP server, has been
upgraded from 0.12.3 to 13.0.1
Admins can disable and enable accounts as needed. A new
role and permissions framework. Storage and performance improvements.
Libraries
libstrophe, an XMPP library in C, has been upgraded from 0.12.2 to
0.14.0
It now supports XEP-0138: Stream Compression and
adds various modern SCRAM mechanisms.
omemo-dr, an OMEMO library used by Gajim is now in
Debian, in version 1.0.1
python-nbxmpp, a non-blocking Jabber/XMPP Python 3 library, upgraded
from 4.2.2 to 6.1.1
python-oldmemo, a python-omemo backend for OMEMO 1, 1.0.3 to 1.1.0
python-omemo, a Python 3 implementation of the OMEMO protocol, 1.0.2
to 1.2.0
python-twomemo, a python-omemo backend for OMEMO 2, 1.0.3 to 1.1.0
strophejs, a library for writing XMPP clients has been upgraded from
1.2.14 to 3.1.0
Gateways/Transports
Biboumi, a gateway between XMPP and IRC, upgrades from
9.0 to 9.0+20241124.
Debian 13 Trixie includes Slidge 0.2.12 and
Matridge 0.2.3 for the first time! Together they provide a
gateway between XMPP and Matrix, with support for many chat
features.
Not in Trixie
Spectrum 2, a gateway from XMPP to various other
messaging systems, did not make it into Debian 13, because it
depends on Swift, which has release critical bugs and
therefore cannot be part of a stable release.
I’ve been part of the Debian Project since 2019, when I attended DebConf held in Curitiba, Brazil. That event sparked my interest in the community, packaging, and how Debian works as a distribution.
In the early years of my involvement, I contributed to various teams such as the Python, Golang and Cloud teams, packaging dependencies and maintaining various tools. However, I soon felt the need to focus on packaging software I truly enjoyed, tools I was passionate about using and maintaining.
That’s when I turned my attention to Kubernetes within Debian.
A Broken Ecosystem
The Kubernetes packaging situation in Debian had been problematic for some time. Given its large codebase and complex dependency tree, the initial packaging approach involved vendorizing all dependencies. While this allowed a somewhat functional package to be published, it introduced several long-term issues, especially security concerns.
Vendorized packages bundle third-party dependencies directly into the source tarball. When vulnerabilities arise in those dependencies, it becomes difficult for Debian’s security team to patch and rebuild affected packages system-wide. This approach broke Debian’s best practices, and it eventually led to the abandonment of the Kubernetes source package, which had stalled at version 1.20.5.
Due to this abandonment, critical bugs emerged and the package was removed from Debian’s testing channel, as we can see in the package tracker.
New Debian Kubernetes Team
Around this time, I became a Debian Maintainer (DM), with permissions to upload certain packages. I saw an opportunity to both contribute more deeply to Debian and to fix Kubernetes packaging.
In early 2024, just before DebConf Busan in South Korea, I founded the Debian Kubernetes Team. The mission of the team was to repackage Kubernetes in a maintainable, security-conscious, and Debian-compliant way. At DebConf, I shared our progress with the broader community and received great feedback and more visibility, along with people interested in contributing to the team.
Our first task was to migrate existing Kubernetes-related tools such as kubectx, kubernetes-split-yaml and kubetail into a dedicated namespace on Salsa, Debian’s GitLab instance.
Many of these tools were stored across different teams (like the Go team), and consolidating them helped us organize development and focus our efforts.
De-vendorizing Kubernetes
Our main goal was to un-vendorize Kubernetes and bring it up-to-date with upstream releases.
This meant:
Removing the vendor directory and all embedded third-party code.
Trimming the build scope to focus solely on building kubectl, Kubernetes’ CLI.
Using Files-Excluded in debian/copyright to cleanly drop unneeded files during source imports.
Rebuilding the dependency tree, ensuring all Go modules were separately packaged in Debian.
We used uscan, a standard Debian packaging tool that fetches upstream tarballs and prepares them accordingly. The Files-Excluded directive in our debian/copyright file instructed uscan to automatically remove unnecessary files during the repackaging process:
$ uscan
Newest version of kubernetes on remote site is 1.32.3, specified download version is 1.32.3
Successfully repacked ../v1.32.3 as ../kubernetes_1.32.3+ds.orig.tar.gz, deleting 30616 files from it.
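For reference, the Files-Excluded field lives in the header paragraph of debian/copyright; a generic illustration (not the team's actual file) could be as simple as:
Files-Excluded: vendor
Comment: Vendored Go modules are excluded; their Debian packages are used instead.
This is what produces the +ds suffix visible in the repacked tarball name above.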
The results were dramatic. By comparing the original upstream tarball with our repackaged version, we can see that our approach reduced the tarball size by over 75%.
This significant reduction wasn’t just about saving space. By removing over 30,000 files, we simplified the package, making it more maintainable. Each dependency could now be properly tracked, updated, and patched independently, resolving the security concerns that had plagued the previous packaging approach.
Dependency Graph
To give you an idea of the complexity involved in packaging Kubernetes for Debian, the image below is a dependency graph generated with debtree, visualizing all the Go modules and other dependencies required to build the kubectl binary.
This web of nodes and edges represents every module and its relationship during the compilation process of kubectl. Each box is a Debian package, and the lines connecting them show how deeply intertwined the ecosystem is. What might look like a mess of blue spaghetti is actually a clear demonstration of the vast and interconnected upstream world that tools like kubectl rely on.
But more importantly, this graph is a testament to the effort that went into making kubectl build entirely using Debian-packaged dependencies only, no vendoring, no downloading from the internet, no proprietary blobs.
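To reproduce a similar graph for another package, debtree plus Graphviz is enough; a sketch (the package name and debtree options may need adjusting for your use case):
debtree kubectl | dot -Tpng -o kubectl-deps.png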
Upstream Version 1.32.3 and Beyond
After nearly two years of work, we successfully uploaded version 1.32.3+ds of kubectl to Debian unstable. Highlights of the packaging include:
Zsh, Fish, and Bash completions installed automatically
Man pages and metadata for improved discoverability
Full integration with kind and docker for testing purposes
Integration Testing with Autopkgtest
To ensure the reliability of kubectl in real-world scenarios, we developed a new autopkgtest suite that runs integration tests using real Kubernetes clusters created via Kind.
Autopkgtest is a Debian tool used to run automated tests on binary packages. These tests are executed after the package is built but before it’s accepted into the Debian archive, helping catch regressions and integration issues early in the packaging pipeline.
Our test workflow validates kubectl by performing the following steps:
Installing Kind and Docker as test dependencies.
Spinning up two local Kubernetes clusters.
Switching between cluster contexts to ensure multi-cluster support.
Deploying and scaling a sample nginx application using kubectl.
Cleaning up the entire test environment to avoid side effects.
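Roughly, those steps boil down to commands like the following (a sketch rather than the actual test script; the cluster names here are made up):
kind create cluster --name kubectl-test-a
kind create cluster --name kubectl-test-b
kubectl config use-context kind-kubectl-test-a
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kind delete cluster --name kubectl-test-a
kind delete cluster --name kubectl-test-b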
To measure real-world usage, we rely on data from Debian’s popularity contest (popcon), which gives insight into how many users have each binary installed.
Here’s what the data tells us:
kubectl (new binary): Already installed on 2,124 systems.
golang-k8s-kubectl-dev: This is the Go development package (a library), useful for other packages and developers who want to interact with Kubernetes programmatically.
kubernetes-client: The legacy package that kubectl is replacing. We expect this number to decrease in future releases as more systems transition to the new package.
Although the popcon data shows activity for kubectl before the official Debian upload date, it’s important to note that those numbers represent users who had it installed from upstream source-lists, not from the Debian repositories. This distinction underscores a demand that existed even before the package was available in Debian proper, and it validates the importance of bringing it into the archive.
Also worth mentioning: this number is not the real total number of installations, since users can choose not to participate in the popularity contest. So the actual adoption is likely higher than what popcon reflects.
Community and Documentation
The team also maintains a dedicated wiki page documenting this packaging work.
The next stable release of Debian will ship with kubectl version 1.32.3, built from a clean, de-vendorized source. This version includes nearly all the latest upstream features, and will be the first time in years that Debian users can rely on an up-to-date, policy-compliant kubectl directly from the archive.
By comparing with upstream, our Debian package even delivers more out of the box, including shell completions, which the upstream still requires users to generate manually.
In 2025, the Debian Kubernetes team will continue expanding our packaging efforts for the Kubernetes ecosystem.
Our roadmap includes:
kubelet: The primary node agent that runs on each node. This will enable Debian users to create fully functional Kubernetes nodes without relying on external packages.
kubeadm: A tool for creating Kubernetes clusters. With kubeadm in Debian, users will then be able to bootstrap minimum viable clusters directly from the official repositories.
helm: The package manager for Kubernetes that helps manage applications through Kubernetes YAML files defined as charts.
kompose: A conversion tool that helps users familiar with docker-compose move to Kubernetes by translating Docker Compose files into Kubernetes resources.
Final Thoughts
This journey was only possible thanks to the amazing support of the debian-devel-br community and the collective effort of contributors who stepped up to package missing dependencies, fix bugs, and test new versions.
Special thanks to:
Carlos Henrique Melara (@charles)
Guilherme Puida (@puida)
João Pedro Nobrega (@jnpf)
Lucas Kanashiro (@kanashiro)
Matheus Polkorny (@polkorny)
Samuel Henrique (@samueloph)
Sergio Cipriano (@cipriano)
Sergio Durigan Junior (@sergiodj)
I look forward to continuing this work, bringing more Kubernetes tools into Debian and improving the developer experience for everyone.
I've been working on a multi-label email classification model.
It's been a frustrating slog, fraught with challenges, including
a lack of training data. Labeling emails is labor-intensive and
error-prone. Also, I habitually delete certain classes of email
immediately after their usefulness has been reduced. I use a
CRM-114-based spam filtering system (actually I use two
different instances of the same mailreaver config, but that's
another story), which is differently frustrating, but I
delete spam when it's detected or when it's trained.
Fortunately, there's no shortage of incoming spam, so I can
collect enough, but for other, arguably more important labels,
they arrive infrequently. So, those labels need to be excluded,
or the small sample sizes wreck the training feedback loop.
Currently, I have ten active labels, and even though the point
of this is not to be a spam filter, “spam” is one of the labels.
Out of curiosity, I decided to compare the performance of
my three different models, and to do so on a neutral corpus
(in other words, emails that none of them had ever been
trained on). I grabbed the full TREC 2007 corpus and ran
inference. The results were unexpected in many ways. For
example, the Pearson correlation coefficient between my
older CRM-114 model and my newer CRM-114 was only about
0.78.
I was even more surprised by how poorly all three performed.
Were they overfit to my email? So, I decided to look at
the TREC corpus for the first time, and lo and behold, the
first spam-labeled email I checked was something I would
definitely train all three models with as non-spam: ham for
CRM-114 and an entirely different label for my
experimental model.
I've been refreshing myself on the low-level guts of Linux
container technology. Here's some notes on mount namespaces.
In the below examples, I will use more than one root shell
simultaneously. To disambiguate them, the examples will feature
a numbered shell prompt: 1# for the first shell, and 2# for
the second.
Preliminaries
Namespaces are normally associated with processes and are
removed when the last associated process terminates. To make
them persistent, you have to bind-mount the corresponding
virtual file from an associated process's entry in /proc,
to another path1.
The receiving path needs to have its "propagation" property set to "private".
Most likely your system's existing mounts are mostly "shared". You can check
the propagation setting for mounts with
1# findmnt -o+PROPAGATION
We'll create a new directory to hold mount namespaces we create,
and set its Propagation to private, via a bind-mount of itself
to itself.
1# mkdir /root/mntns
1# mount --bind --make-private /root/mntns /root/mntns
The namespace itself needs to be bind-mounted over a file rather
than a directory, so we'll create one.
1# touch /root/mntns/1
Creating and persisting a new mount namespace
1# unshare --mount=/root/mntns/1
We are now 'inside' the new namespace in a new shell process.
We'll change the shell prompt to make this clearer
PS1='inside# '
We can make a filesystem change, such as mounting a tmpfs
inside# mount -t tmpfs /mnt /mnt
inside# touch /mnt/hi-there
And observe it is not visible outside that namespace
2# findmnt /mnt
2# stat /mnt/hi-there
stat: cannot statx '/mnt/hi-there': No such file or directory
Back to the namespace shell, we can find an integer identifier for
the namespace via the shell process's /proc entry:
inside# readlink /proc/$$/ns/mnt
It will be something like mnt:[4026533646].
From another shell, we can list namespaces and see that it
exists:
2# lsns -t mnt
NS TYPE NPROCS PID USER COMMAND
…
4026533646 mnt 1 52525 root -bash
If we exit the shell that unshare created,
inside# exit
running lsns again should2 still list the namespace,
albeit with the NPROCS column now reading 0.
2# lsns -t mnt
We can see that a virtual filesystem of type nsfs is mounted at
the path we selected when we ran unshare:
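For example (a hypothetical listing; the namespace identifier will match whatever readlink reported earlier):
2# findmnt /root/mntns/1
TARGET        SOURCE                 FSTYPE OPTIONS
/root/mntns/1 nsfs[mnt:[4026533646]] nsfs   rw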
Authorities in Pakistan have arrested 21 individuals accused of operating “Heartsender,” a once popular spam and malware dissemination service that operated for more than a decade. The main clientele for HeartSender were organized crime groups that tried to trick victim companies into making payments to a third party, and its alleged proprietors were publicly identified by KrebsOnSecurity in 2021 after they inadvertently infected their computers with malware.
Some of the core developers and sellers of Heartsender posing at a work outing in 2021. WeCodeSolutions boss Rameez Shahzad (in sunglasses) is in the center of this group photo, which was posted by employee Burhan Ul Haq, pictured just to the right of Shahzad.
A report from the Pakistani media outlet Dawn states that authorities there arrested 21 people alleged to have operated Heartsender, a spam delivery service whose homepage openly advertised phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me. Pakistan’s National Cyber Crime Investigation Agency (NCCIA) reportedly conducted raids in Lahore’s Bahria Town and Multan on May 15 and 16.
The NCCIA told reporters the group’s tools were connected to more than $50m in losses in the United States alone, with European authorities investigating 63 additional cases.
“This wasn’t just a scam operation – it was essentially a cybercrime university that empowered fraudsters globally,” NCCIA Director Abdul Ghaffar said at a press briefing.
In January 2025, the FBI and the Dutch Police seized the technical infrastructure for the cybercrime service, which was marketed under the brands Heartsender, Fudpage and Fudtools (and many other “fud” variations). The “fud” bit stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.
The FBI says transnational organized crime groups that purchased these services primarily used them to run business email compromise (BEC) schemes, wherein the cybercrime actors tricked victim companies into making payments to a third party.
Dawn reported that those arrested included Rameez Shahzad, the alleged ringleader of the Heartsender cybercrime business, which most recently operated under the Pakistani front company WeCodeSolutions. Mr. Shahzad was named and pictured in a 2021 KrebsOnSecurity story about a series of remarkable operational security mistakes that exposed their identities and Facebook pages showing employees posing for group photos and socializing at work-related outings.
Prior to folding their operations behind WeCodeSolutions, Shahzad and others arrested this month operated as a web hosting group calling itself The Manipulaters. KrebsOnSecurity first wrote about The Manipulaters in May 2015, mainly because their ads at the time were blanketing a number of popular cybercrime forums, and because they were fairly open and brazen about what they were doing — even who they were in real life.
Sometime in 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that specializes in connecting cybercriminals to their real-life identities. Soon after, Scylla started receiving large amounts of email correspondence intended for the group’s owners.
In 2024, DomainTools.com found the web-hosted version of Heartsender leaked an extraordinary amount of user information to unauthenticated users, including customer credentials and email records from Heartsender employees. DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”
Shahzad allegedly used the alias “Saim Raza,” an identity which has contacted KrebsOnSecurity multiple times over the past decade with demands to remove stories published about the group. The Saim Raza identity most recently contacted this author in November 2024, asserting they had quit the cybercrime industry and turned over a new leaf after a brush with the Pakistani police.
The arrested suspects include Rameez Shahzad, Muhammad Aslam (Rameez’s father), Atif Hussain, Muhammad Umar Irshad, Yasir Ali, Syed Saim Ali Shah, Muhammad Nowsherwan, Burhanul Haq, Adnan Munawar, Abdul Moiz, Hussnain Haider, Bilal Ahmad, Dilbar Hussain, Muhammad Adeel Akram, Awais Rasool, Usama Farooq, Usama Mehmood and Hamad Nawaz.
The only links are from The Daily Mail and The Mirror, but a marital affair was discovered because the cheater was recorded using his smart toothbrush at home when he was supposed to be at work.
As a small addendum to the last post, here are the relevant
commands #debci helpfully provided.
First, you need to install the autopkgtest package,
obviously:
# apt install autopkgtest
Then you need to create a Debian virtual machine to run the
tests (put the sid.raw wherever you prefer):
# autopkgtest-build-qemu sid /tmp/sid.raw
Then you can run the tests themselves, using the just created
virtual machine. The autopkgtest command can use the tests from
various sources, using the last argument to the command. In my case
what was the most helpful was to run the tests from my git clone
(which uses gbp) so I could edit the tests directly. So I didn't
give anything for testsrc (but
. would work as well I guess).
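For reference, the final invocation run from inside the git clone then looks something like the following, the image path being whatever was passed to autopkgtest-build-qemu above:
# autopkgtest -- qemu /tmp/sid.raw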
We are very excited to announce that Debian has selected nine contributors to
work under mentorship on a variety of
projects with us during the
Google Summer of Code.
Here is a list of the projects and students, along with details of the tasks to
be performed.
Deliverables of the project: Continuous integration tests for Debian Med
applications lacking a test, Quality Assurance review, and bug fixing if issues
are uncovered.
Deliverables of the project: Analysis and discussion of the current
state of device tweaks management in Debian and Mobian. Proposal for a
unified, run-time approach. Packaging of this service and tweaks
data/configuration for at least one device.
Deliverables of the project: New Debian packages with GPU
support. Enhanced GPU support within existing Debian packages.
More autopkgtests running on the Debian ROCm CI.
Deliverables of the project: Refreshing the set of daily-built
images. Having the set of daily-built images become automatic
again—that is, go back to the promise of having it daily-built.
Write an Ansible playbook/Chef recipe/Puppet whatsitsname to define a
virtual server and have it build daily. Do the (very basic!) hardware
testing on several Raspberry Pi computers. Do note, naturally, this will
require having access to the relevant hardware.
Deliverables of the project: Eventually I hope we can get vLLM into the
Debian archive, based on which we can deliver something for LLM
inference out-of-the-box. If the amount of work eventually turns out to be
beyond my expectation, I'm still happy to see how far we can go
towards this goal. If the amount of work required for vLLM is less
than I expected, we can also look at something else like SGLang,
another open source LLM inference library.
Congratulations and welcome to all the contributors!
The Google Summer of Code program is possible in Debian thanks to the efforts of
Debian Developers and Debian Contributors who dedicate part of their free time
to mentoring contributors and to outreach tasks.
Join us and help extend Debian! You can follow the contributors' weekly reports
on the debian-outreach mailing-list, chat with us on our
IRC channel or reach out to the individual projects' team
mailing lists.
Each year on August the 16th, we celebrate the Debian Project Anniversary.
Several communities around the world join us in celebrating "Debian Day" with
local events, parties, or gatherings.
So, how about celebrating the 32nd anniversary of the Debian Project in 2025 in
your city? As the 16th of August falls on a Saturday this year, we believe it
is great timing to gather people around your event.
We invite you and your local community to organize a Debian Day by hosting an
event with talks, workshops, a
bug squashing party, or
OpenPGP keysigning gathering, etc.
You could also hold a meeting with others in the Debian community in a smaller
social setting like a bar/pizzeria/cafeteria/restaurant to celebrate. In other
words, any type of celebrating is valid!
Many nations have some form of national identification number, especially around taxes. Argentina is no exception.
Their "CUIT" (Clave Única de Identificación Tributaria) and "CUIL" (Código Único de Identificación Laboral) are formatted as "##-########-#".
Now, as datasets often don't store things in their canonical representation, Nick's co-worker was given a task: "given a list of numbers, reformat them to look like CUIT/CUIL." That co-worker went off for five days, and produced this Java function.
public String normalizarCuitCuil(String cuitCuilOrigen){
    String valorNormalizado = new String();
    if (cuitCuilOrigen == null || "".equals(cuitCuilOrigen) || cuitCuilOrigen.length() < MINIMA_CANTIDAD_ACEPTADA_DE_CARACTERES_PARA_NORMALIZAR){
        valorNormalizado = "";
    }else{
        StringBuilder numerosDelCuitCuil = new StringBuilder(13);
        cuitCuilOrigen = cuitCuilOrigen.trim();
        // Only the digits are kept:
        Matcher buscadorDePatron = patternNumeros.matcher(cuitCuilOrigen);
        while (buscadorDePatron.find()){
            numerosDelCuitCuil.append(buscadorDePatron.group());
        }
        // The hyphens are added:
        valorNormalizado = numerosDelCuitCuil.toString().substring(0,2)
            + "-"
            + numerosDelCuitCuil.toString().substring(2,numerosDelCuitCuil.toString().length()-1)
            + "-"
            + numerosDelCuitCuil.toString().substring(numerosDelCuitCuil.toString().length()-1, numerosDelCuitCuil.toString().length());
    }
    return valorNormalizado;
}
We start with a basic sanity check that the string exists and is long enough. If it isn't, we return an empty string, which already annoys me, because an empty result is not a good way to communicate "I failed to parse".
But assuming we have data, we construct a StringBuilder and trim whitespace. And already we have a problem: we validated the length before trimming, so a string padded with enough whitespace can pass that check and still come up short afterwards. Now, maybe we can assume the data is good, but the next line implies that we can't rely on that: they create a regex matcher to identify numeric values, and for each numeric value they find, they append it to our StringBuilder. This implies that the string may contain non-numeric characters which need to be stripped out, which means our length validation was still wrong.
So either the data is clean and we're overvalidating, or the data is dirty and we're validating in the wrong order.
But all of that's a preamble to a terrible abuse of string builders, where they discard all the advantages of using a StringBuilder by calling toString again and again and again. Now, maybe the function caches results or the compiler can optimize it, but the result is a particularly unreadable blob of slicing code.
Now, this is ugly, but at least it works, assuming the input data is good. It definitely should never pass a code review, but it's not the kind of bad code that leaves one waking up in the middle of the night in a cold sweat.
No, what gets me about this is that it took five days to write. And according to Nick, the responsible developer wasn't just slacking off or going to meetings the whole time, they were at their desk poking at their Java IDE and looking confused for all five days.
And of course, because it took so long to write the feature, management didn't want to waste more time on kicking it back via a code review. So voila: it got forced through and released to production since it passed testing.
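For a sense of scale, the whole transformation (keep the digits, then re-insert the two hyphens) fits in a shell one-liner; this is only a sketch of the idea with a made-up example number, not the code that shipped:
$ echo ' 20-12345678-9 ' | tr -cd '0-9\n' | sed -E 's/^([0-9]{2})([0-9]{8})([0-9])$/\1-\2-\3/'
20-12345678-9
Anything that does not boil down to exactly eleven digits passes through unchanged here, so a real implementation would still need the length check.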
Author: Majoki Standing among some of the oldest living things on earth, Mourad Du felt his age. Not just in years, but in possibilities lost. And, now, the impossibility he faced. Who could he tell? Would it even matter? They would all be gone soon. Nothing he could do, we could do, would change that. […]
The Long Now Foundation is proud to announce Christopher Michel as Long Now Artist-in-Residence. A distinguished photographer and visual storyteller, Michel has documented Long Now’s founders and visionaries — including Stewart Brand, Kevin Kelly, Danny Hillis, Esther Dyson, and many of its board members and speakers — for decades. Through his portrait photographs, he has captured their work in long-term thinking, deep time, and the future of civilization.
As Long Now Artist-in-Residence, Michel will create a body of work inspired by the Foundation’s mission, expanding his exploration of time through portraiture, documentary photography, and large-scale visual projects. His work will focus on artifacts of long-term thinking, from the 10,000-year clock to the Rosetta Project, as well as the people shaping humanity’s long-term future.
Christopher Michel has made photographs of Long Now Board Members past and present. Clockwise from upper left: Stewart Brand, Danny Hillis, Kevin Kelly, Alexander Rose, Katherine Fulton, David Eagleman, Esther Dyson, and Danica Remy.
Michel will hold this appointment concurrently with his Artist-in-Residence position at the National Academies of Sciences, Engineering, and Medicine, where he uses photography to highlight the work of leading scientists, engineers, and medical professionals. His New Heroes project — featuring over 250 portraits of leaders in science, engineering, and medicine — aims to elevate science in society and humanize these fields. His images, taken in laboratories, underground research facilities, and atop observatories scanning the cosmos, showcase the individuals behind groundbreaking discoveries. In 02024, his portrait of Dr. Anthony Fauci was featured on the cover of Fauci’s memoir On Call.
View more of Christopher Michel's photography, from portraits of world-renowned scientists to some of our planet's most incredible natural landscapes, at his website and on his Instagram account.
A former U.S. Navy officer and entrepreneur, Michel founded two technology companies before dedicating himself fully to photography. His work has taken him across all seven continents, aboard a U-2 spy plane, and into some of the most extreme environments on Earth. His images are widely published, appearing in major publications, album covers, and even as Google screensavers.
“What I love about Chris and the images he’s able to create is that at the deepest level they are really intended for the long now — for capturing this moment within the broader context of past and future,” said Long Now Executive Director Rebecca Lendl.
“These timeless, historic images help lift up the heroes of our times, helping us better understand who and what we are all about, reflecting back to us new stories about ourselves as a species.”
Michel’s photography explores the intersection of humanity and time, capturing the fragility and resilience of civilization. His work spans the most remote corners of the world — from Antarctica to the deep sea to the stratosphere — revealing landscapes and individuals who embody the vastness of time and space. His images — of explorers, scientists, and technological artifacts — meditate on humanity’s place in history.
Christopher Michel’s life in photography has taken him to all seven continents and beyond. Photos by Christopher Michel.
Michel’s photography at Long Now will serve as a visual bridge between the present and the far future, reinforcing the Foundation’s mission to foster long-term responsibility. “Photography,” Michel notes, “is a way of compressing time — capturing a fleeting moment that, paradoxically, can endure for centuries. At Long Now, I hope to create images that don’t just document the present but invite us to think in terms of deep time.”
His residency will officially begin this year, with projects unfolding over the coming months. His work will be featured in Long Now’s public programming, exhibitions, and archives, offering a new visual language for the Foundation’s mission to expand human timescales and inspire long-term thinking.
In advance of his appointment as Long Now’s inaugural Artist-in-Residence, Long Now’s Jacob Kuppermann had the opportunity to catch up with Christopher Michel and discuss his journey and artistic perspective.
This interview has been edited for length and clarity.
Long Now: For those of us less familiar with your work: tell us about your journey both as an artist and a photographer — and how you ended up involved with Long Now.
Christopher Michel: In a way, it’s the most unlikely journey. Of all the things I could have imagined I would've done as a kid, I am as far away from any of that as I could have imagined. My path, in most traditional ways of looking at it, is quite non-linear. Maybe the points of connectivity seem unclear, but some people smarter than me have seen connections and have made observations about how they may be related.
When I was growing up, I was an outsider, and I was interested in computers. I was programming in the late seventies, which was pretty early. My first computer was a Sinclair ZX 80, and then I went to college at the University of Illinois and Top Gun had just come out. And I thought, “maybe I want to serve my country.” And so I flew for the Navy as a navigator and mission commander — kind of like Goose — and hunted Russian submarines. I had a great time, I was always interested in computers, not taking any photographs really, which is such a regret. Imagine — flying 200 feet above the water for eight hours at a time, hunting drug runners or Russian subs, doing amazing stuff with amazing people. It just never occurred to me to take any photos. I was just busy doing my job. And then I went to work in the Pentagon in the office of the Chief of Naval Operations for the head of the Navy Reserve — I went to work for the bosses of the Navy.
If you'd asked me what I wanted to do, I guess I would've said, I think I want to go into politics and maybe go to law school. I'd seen the movie The Paper Chase, and I love the idea of the Socratic method. And then the Navy said, well, we’re going to send you to the Kennedy School, which is their school around public service. But then I ran into somebody at the Pentagon who said, "You should really go to Harvard Business School."
And I hadn't thought about it — I was never interested in business. He said that it's a really good degree because you can do lots of things with it. So I quit my job in the Navy and literally a week later I was living in Boston for my first day at Harvard Business School. It was a big eye-opening experience because I had lived in a kind of isolated world in the Navy — I only knew Navy people.
This was also a little bit before the pervasiveness of the internet. This is 01997 or 01996. People didn't know as much as they know now, and certainly entrepreneurship was not a thing in the same way that it was after people like Mark Zuckerberg. If you'd asked me what I wanted to do at Harvard, I would've said something relating to defense or operations. But then, I ran into this guy Dan Bricklin, who created VisiCalc. VisiCalc was one of the first three applications that drove the adoption of the personal computer. When we bought computers like the TRS 80 and Apple II in 01979 or 01980 we bought them for the Colossal Cave Adventure game, for VisiCalc, and for WordStar.
He gave a talk to our class and he said, "When I look back at my life, I feel like I created something that made a difference in the world." And it really got my attention, that idea of building something that can outlast us that's meaningful, and can be done through entrepreneurship. Before that, that was an idea that was just not part of the culture I knew anything about. So I got excited to do that. When I left Harvard Business School, I was still in the reserves and I had the idea that the internet would be a great way to connect, enable, and empower service members, veterans, and their families. So I helped start a company called Military.com, and it was one of the first social media companies to get to scale in the United States. And its concept may sound like a very obvious idea today because we live in a world where we know about things like Facebook, but this was five years before Facebook was created.
I raised a lot of money, and then I got fired, and then I came back and it was a really difficult time because that was also during the dot-com bubble bursting.
But during that time period, two interesting other things happened. The first: my good friend Ann Dwane gave me a camera. When I was driving cross country to come out here to find my fortune in Silicon Valley, I started taking some photos and film pictures and I thought, hey, my pictures are pretty good — this is pretty fun. And then I bought another camera, and I started taking more pictures and I was really hooked.
The second is actually my first connection to Long Now. What happened was that a guy came to visit me when I was running Military.com that I didn't know anything about — I'm not even sure why he came to see me. And he was a guy named Stewart Brand. So Stewart shows up in my office — he must've been introduced to me by someone, but I just didn't really have the context. Maybe I looked him up and had heard of the Whole Earth Catalog.
Anyways, he just had a lot of questions for me. And, of course, what I didn't realize, 25 years ago, is that this is what Stewart does. Stewart — somehow, like a time traveler — finds his way to the point of creation of something that he thinks might be important. He's almost a Zelig character. He just appears in my office and is curious about social media and the military. He served in the military, himself, too.
So I meet him and then he leaves and I don't hear anything of him for a while. Then, we have an idea of a product called Kit Up at Military.com. And the idea of Kit Up was based on Kevin Kelly’s Cool Tools — military people love gear, and we thought, well, what if we did a weekly gear thing? And I met with Kevin and I said, “Hey Kevin, what do you think about me kind of taking your idea and adapting it for the military?”
Of course, Kevin's the least competitive person ever, and he says, “Great idea!” So we created Kit Up, which was a listing of, for example, here are the best boots for military people, or the best jacket or the best gloves — whatever it might be.
As a byproduct of running these companies, I got exposed to Kevin and Stewart, and then I became better friends with Kevin. I got invited to some walks, and I would see those guys socially. And then in 02014, The Interval was created. I went there and I got to know Zander and I started making a lot of photos — have you seen my gallery of Long Now photos?
Christopher Michel's photos of the 10,000-year clock in Texas capture its scale and human context.
I've definitely seen it — whenever I need a photo of a Long Now person, I say, “let me see if there's a good Chris photo.”
This is an interesting thing that can happen in our lives: unintended projects. I do make photos with the idea that these photos could be quite important. There's a kind of alchemy around these photos. They're important now, but they're just going to be more important over time.
So my pathway is: Navy, entrepreneur, investor for a little while, and then photographer.
If there was a theme connecting these, it’s that I'm curious. The thing that really excites me most of all is that I like new challenges. I like starting over. It feels good! Another theme is the act of creation. You may think photography is quite a bit different than creating internet products, but they have some similarities! A great portrait is a created thing that can last and live without my own keeping it alive. It has its own life force. And that's creation. Companies are like that. Some companies go away, some companies stay around. Military.com stays around today.
So I'm now a photographer and I'm going all over the world and I'm making all these photos and I'm leading trips. Zander and the rest of the team invite me to events. I go to the clock and I climb the Bay Bridge, and I visit Biosphere 2. I'm just photographing a lot of stuff and I love it. The thing I love about Long Now people is, if you like quirky, intellectual people, these are your people. You know what I mean? And they're all nice, wonderful humans. So it feels good to be there and to help them.
In 02022, Christopher Michel accompanied Long Now on a trip to Biosphere 2 in Arizona.
During the first Trump administration, during Covid, I was a volunteer assisting the U.S. National Academies with science communication. I was on the board of the Division of Earth and Life Studies, and I was on the President’s Circle.
Now, the National Academies — they were created by Abraham Lincoln to answer questions of science for the U.S. government, they have two primary functions. One is an honorific, there's three academies: sciences, engineering, and medicine. So if you're one of the best physicists, you get made a member of the National Academies. It's based on the Royal Society in England.
But moreover, what’s relevant to everyone is that the National Academies oversee the National Research Council, which provides independent, evidence-based advice on scientific and technical questions. When the government needs to understand something complex, like how much mercury should be allowed in drinking water, they often turn to the Academies. A panel of leading experts is assembled to review the research and produce a consensus report. These studies help ensure that policy decisions are guided by the best available science.
Over time, I had the opportunity to work closely with the people who support and guide that process. We spent time with scientists from across disciplines. Many of them are making quiet but profound contributions to society. Their names may not be well known, but their work touches everything from health to energy to climate.
In those conversations, a common feeling kept surfacing. We were lucky to know these people. And we wished more of the country did, too. There is no shortage of intelligence or integrity in the world of science. What we need is more visibility. More connection. More ways for people to see who these scientists are, what they care about, and how they think.
That is part of why I do the work I do. Helping to humanize science is not about celebrating intellect alone. It's about building trust. When people can see the care, the collaboration, and the honesty behind scientific work, they are more likely to trust its results. Not because they are told to, but because they understand the people behind it.
These scientists and people in medicine and engineering are working on behalf of society. A lot of scientists aren't there for financial gain or celebrity. They're doing the work with a purpose behind it. And we live in a culture today where we don't know who any of these people are. We know Fauci, we might know Carl Sagan, we know Einstein — but people run out of scientists to list after that.
It’s a flaw in the system. These are the new heroes that should be our role models — people that are giving back. The National Academies asked me to be their first artist-in-residence, and I've been doing it now for almost five years, and it's an unpaid job. I fly myself around the country and I make portraits of scientists, and we give away the portraits to organizations. And I've done 260 or so portraits now. If you go to Wikipedia and you look up many of the Nobel Laureates from the U.S., they're my photographs.
I would say science in that world has never been under greater threat than it is today. I don't know how much of a difference my portraits are making, but at least it's some effort that I can contribute. I do think that these scientists having great portraits helps people pay attention — we live in that kind of culture today. So that is something we can do to help elevate science and scientists and humanize science.
And simultaneously, I’m still here at Long Now with Kevin and Stewart, and when there's interesting people there, I make photos of the speakers. I've spent time during the leadership transition and gotten to know all those people. And we talked and asked, well, why don't we incorporate this into the organization?
In December 02024, Christopher Michel helped capture incoming Long Now Executive Director Rebecca Lendl and Long Now Board President Patrick Dowd at The Interval.
We share an interest around a lot of these themes, and we are also in the business of collecting interesting people that the world should know about, and many of them are their own kind of new heroes.
I was really struck by what you said about how a really successful company or any form of institution, but also, especially, a photograph, put into the right setting with the right infrastructure around it to keep it lasting, can really live on beyond you and beyond whoever you're depicting.
That feels in itself very Long Now. We don't often think about the photograph as a tool of long-term thinking, but in a way it really is.
Christopher Michel: Well, this transition that we're in right now at Long Now is important for a lot of reasons. One is that a lot of organizations don't withstand transitions well, but the second is, these are the founders. All of these people that we've been talking about, they're the founders and we know them. We are the generation that knows them.
We think they will be here forever, and it will always be this way, but that's not true. The truth is, it's only what we document today that will be remembered in the future. How do we want the future to think about our founders? How do we want to think about their ideas and how are they remembered? That applies to not just the older board members — that applies to Rebecca and Patrick and you and me and all of us. This is where people kind of understate the role of capturing these memories. My talk at the Long Now is about how memories are the currency of our lives. When you think about our lives, what is it that at your deathbed you're going to be thinking about? It'll be this collection of things that happen to you and some evaluation of do those things matter?
I think about this a lot. I have some training as a historian and a historical ecologist. When I was doing archival research, I read all these government reports and descriptions and travelers' journals, and I could kind of get it. But whenever you uncovered that one photograph that they took or the one good sketch they got, then suddenly it was as if a portal opened and you were 150 years into the past. I suddenly understood, for example, what people mean when they say that the San Francisco Bay was full of otters at that time, because that's so hard to grasp considering what the Bay is like now. The visual, even if it's just a drawing, but especially if it's a photograph, makes such a mark in our memory in a way that very few other things can.
And, perhaps tied to that: you have taken thousands upon thousands of photos. You've made so many of these. I've noticed that you say “make photos,” rather than “take photos.” Is that an intentional theoretical choice about your process?
Christopher Michel: Well, all photographers say that. What do you think the difference is?
Taking — or capturing — it's like the image is something that is out there and you are just grabbing it from the world, whereas “making” indicates that this is a very intentional artistic process, and there are choices being made throughout and an intentional work of construction happening.
Christopher Michel: You're basically right. What I tell my students is that photographers visualize the image that they want and then they go to create that image. You can take a really good photo if you're lucky. Stuff happens. You just see something, you just take it. But even in that case, I am trying to think about what I can do to make that photo better. I'm taking the time to do that. So that's the difference, really.
The portraits — those are fun. I'd rather be real with them, and that's what I loved about Sara [Imari Walker]. I mean, Sara brought her whole self to that photo shoot.
On the question of capturing scientists specifically: how do you go about that process? Some of these images are more standard portraits. Others, you have captured them in a context that looks more like the context that they work in.
Christopher Michel: I'm trying to shoot environmental portraits, so I often visit them at their lab or in their homes — and this is connected to what I’ve talked about before.
We've conflated celebrity with heroism. George Dyson said something to the effect of: “Some people are celebrities because they're interesting, and some people are interesting because they're celebrities.”
Christopher Michel's photography captures Long Now Talks speakers. Clockwise from top left: Sara Imari Walker, Benjamin Bratton, Kim Stanley Robinson, and Stephen Heintz.
I think that society would benefit from a deeper understanding of these people, these scientists, and what they're doing. Honestly, I think they're better role models. We love actors and we love sports stars, and those are wonderful professions. But, I don't know, shouldn't a Nobel laureate be at least as well known?
There’s something there also that relates to timescales. At Long Now, we have the Pace Layers concept. A lot of those celebrities, whether they're athletes or actors or musicians — they're all doing incredible things, but those are very fast things. Those are things that are easy to capture in a limited attention span. Whereas the work of a scientist, the work of an engineer, the work of someone working in medicine can be one of slow payoffs. You make a discovery in 02006, but it doesn't have a clear world-changing impact until 02025.
Christopher Michel: Every day in my job, I'm running into people that have absolutely changed the world. Katalin Karikó won the Nobel Prize, or Walter Alvarez — he’s the son of a Nobel laureate. He's the one who figured out it was an asteroid that killed the dinosaurs. Diane Havlir, at UCSF — she helped create the cocktail that saved the lives of people who have AIDS. Think about the long-term cascading effect of saving millions of HIV-positive lives.
I mean, is there a sports star that can say that? The impact there is transformational. Look at what George Church is doing — sometimes with Ryan Phelan. This is what they're doing, and they're changing every element of our world and society that we live in. In a way engineering the world that we live in today, helping us understand the world that we live in, but we don't observe it in the way that we observe the fastest pace layer.
We had a writer who wrote an incredible piece for us about changing ecological baselines called “Peering Into The Invisible Present.” The concept is that it's so hard to observe the rate at which ecological change happens in the present. It's so slow — it is hard to tell that change is happening at all. But if you were frozen when Military.com was founded in 01999, and then thawed in 02025, you would immediately notice all these things that were different. For those of us who live through it as it happens, it is harder to tell those changes are happening, whereas it's very easy to tell when LeBron James has done an incredible dunk.
Christopher Michel: One cool thing about having a gallery that's been around for 20 years is you look at those photos and you think, “Ah, the world looked a little different then.”
It looks kind of the same, too — similar and different. There's a certain archival practice to having all these there. Something I noticed is that many of your images are uploaded with Creative Commons licensing. What feels important about that to you?
Because for me, the way for these images to become in use and to become immortal is to have it kind of spread throughout the internet. I’m sure I’ve seen images, photographs that you made so many times before I even knew who you were, just because they're out there, they enter the world.
Christopher Michel: As a 57-year-old, I want to say thank you for saying that. That's the objective. We hope that the work that we're doing makes a difference, and it is cool that a lot of people do in fact recognize the photos. Hopefully we will make even more that people care about. What's so interesting is we really don't even know — you never know which of these are going to have a long half life.
Russia is proposing a rule that all foreigners in Moscow install a tracking app on their phones.
Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:
Residence location
Fingerprint
Face photograph
Real-time geo-location monitoring
This isn’t the first time we’ve seen this. Qatar did it in 2022 around the World Cup:
“After accepting the terms of these apps, moderators will have complete control of users’ devices,” he continued. “All personal content, the ability to edit it, share it, extract it as well as data from other apps on your device is in their hands. Moderators will even have the power to unlock users’ devices remotely.”
In November 2024, Badri and I applied for a Singapore visa to visit the country. To apply for a Singapore visa, you need to visit an authorized travel agent listed by the Singapore High Commission on their website. Unlike the Schengen visa (where only VFS can process applications), the Singapore visa has many authorized travel agents to choose from. I remember that the list mentioned as many as 25 authorized agents in Chennai. For my application, I randomly selected Ria International in Karol Bagh, New Delhi from the list.
Further, you need to apply not more than a month before your travel dates. As our travel dates were in December, we applied in the month of November.
For your reference, I submitted the following documents:
Passport
My photograph (35 mm x 45 mm)
Visa application form (Form 14A)
Cover letter to the Singapore High Commission, New Delhi
Proof of employment
Hotel booking
Flight ticket (reservations are sufficient)
Bank account statement for the last 6 months
I didn’t have my photograph in the specified dimensions, so the travel agent took my photo on the spot. The visa application fee was ₹2,567. I submitted my application on a Saturday and received a call from the travel agent on Tuesday informing me that they had received my visa from the Singapore High Commission.
The next day, I visited the travel agent’s office and picked up my passport and a black and white copy of my e-visa. Later, I downloaded a PDF of my visa from the website mentioned on it, and took a colored printout myself.
Singapore granted me a multiple-entry visa for 2 months, even though I had applied for a 4-day single-entry visa. We were planning to add more countries to this trip; therefore, a multiple-entry visa would be helpful in case we wanted to use Singapore Airport, as it has good connectivity. However, it turned out that flights from Kuala Lumpur were much cheaper than those from Singapore, so we didn’t enter Singapore again after leaving.
Badri also did the same process but entirely remotely—he posted the documents to the visa agency in Chennai, and got his e-visa in a few days followed by his original passport which was delivered by courier.
He got his photo taken in the same dimensions mentioned above, and printed with a matte finish as instructed. However, the visa agents asked why his photo looked so faded. We don’t know if they thought the matte finish was faded or what. To rectify this, Badri emailed them a digital copy of the photo (both the cropped version and the original) and they handled the reprinting on their end (which he never got to see).
Before entering Singapore, we had to fill in an arrival card - an online form asking for a few details about our trip - within 72 hours of our arrival.
I’ve just got a second hand Nissan LEAF. It’s not nearly as luxurious as the Genesis EV that I test drove [1]. It’s also just over 5 years old so it’s not as slick as the MG4 I test drove [2]. But the going rate for a LEAF of that age is $17,000 vs $35,000 or more for a new MG4 or $130,000+ for a Genesis. At this time the LEAF is the only EV in Australia that’s available on the second hand market in quantity. Apparently the cheapest new EV in Australia is a Great Wall one which is $32,000 and which had a wait list last time I checked, so $17,000 is a decent price if you want an electric car and aren’t interested in paying the price of a new car.
Starting the Car
One thing I don’t like about most recent cars (petrol as well as electric) is that they needlessly break traditions of car design. Inserting a key and turning it clockwise to start a car is a long standing tradition that shouldn’t be broken without a good reason. With the use of traditional keys you know that when a car has the key removed it can’t be operated, there’s no situation of the person with the key walking away and leaving the car driveable and there’s no possibility of the owner driving somewhere without the key and then being unable to start it. To start a LEAF you have to have the key fob device in range, hold down the brake pedal, and then press the power button. To turn on accessories you do the same but without holding down the brake pedal. They also have patterns of pushes, push twice to turn it on, push three times to turn it off. This is all a lot easier with a key where you can just rotate it as many clicks as needed.
The change of car design for the key means that no physical contact is needed to unlock the car. If someone stands by a car fiddling with the door lock it will get noticed which deters certain types of crime. If a potential thief can sit in a nearby car to try attack methods and only walk to the target vehicle once it’s unlocked it makes the crime a lot easier. Even if the electronic key is as secure as a physical key allowing attempts to unlock remotely weakens security. Reports on forums suggest that the electronic key is vulnerable to replay attacks. I guess I just have to hope that as car thieves typically get less than 10% of the value of a car it’s just not worth their effort to steal a $17,000 car. Unlocking doors remotely is a common feature that’s been around for a while but starting a car without a key being physically inserted is a new thing.
Other Features
The headlights turn on automatically when the car thinks that the level of ambient light warrants it. There is an option to override this to turn the lights on, but no option to force them off. So if the car is in the “on” state while you are parked and listening to the radio, the headlights will be on.
The LEAF has a bunch of luxury features which seem a bit ridiculous like seat warmers. It also has a heated steering wheel which has turned out to be a good option for me as I have problems with my hands getting cold. According to the My Nissan LEAF Forum the seat warmer uses a maximum of 50W per seat while the car heater uses a minimum of 250W [3]. So if there are one or two people in the car then significantly less power is used by just heating the seats and also keeping the car air cool reduces window fog.
The Bluetooth audio support works well. I’ve done hands free calls and used it for playing music from my phone. This is the first car I’ve owned with Bluetooth support. It also has line-in which might have had some use in 2019 but is becoming increasingly useless as phones with Bluetooth become more popular. It has support for two devices connecting via Bluetooth at the same time which could be handy if you wanted to watch movies on a laptop or tablet while waiting for someone.
The LEAF has some of the newer safety features: it tracks lane markers and notifies the driver via beeps and vibration if they stray from their lane. It also tries to read speed limit signs and display the last observed speed limit on the dash display. It also has a skid alert which in my experience goes off under hard acceleration when it’s not skidding but doesn’t go off if you lose grip when cornering. The features for detecting lane changes when close to other cars and for emergency braking when another car is partly in the lane (even if moving out of the lane) don’t seem well tuned for Australian driving; the common trend on Australian roads is lawful-evil, to use D&D terminology.
Range
My most recent driving was just over 2 hours driving with a distance of a bit over 100Km which took the battery from 62% to 14%. So it looks like I can drive a bit over 200Km at an average speed of 50Km/h. I have been unable to find out the battery size for my car, my model will have either a 40KWh or 62KWh battery. Google results say it should be printed on the B pillar (it’s not) and that it can be deduced from the VIN (it can’t). I’m guessing that my car is the cheaper option which is supposed to do 240Km when new which means that a bit over 200Km at an average speed of 50Km/h when 6yo is about what’s expected. If it has the larger battery designed to do 340Km then doing 200Km in real use would be rather disappointing.
Assuming the battery is 40KWh that means it’s 5Km/KWh or 10KW average for the duration. That means that the 250W or so used by the car heater should only make about a 2% difference to range, which is something that a human won’t usually notice. If I was to drive to another state I’d definitely avoid using the heater or air conditioner as an extra 4km could really matter when trying to find a place to charge in an area you aren’t familiar with. It’s also widely reported that the LEAF is less efficient at highway speeds which is an extra difficulty for that.
It seems that the LEAF just isn’t designed for interstate driving in Australia; it would be fine for driving between provinces of the Netherlands, as it’s difficult to drive for 200km without leaving that country. Driving 700km to another city in a car with 200km range would mean charging 3 times along the way, which is 2 hours of charging time when using fast chargers. This isn’t a problem at all as the average household in Australia has 1.8 cars and battery electric vehicles comprise only 6.3% of the market. So if a household had a LEAF and a Prius they could just use the Prius for interstate driving. A recent Prius could drive from Melbourne to Canberra or Adelaide without refuelling on the way.
If I was driving to another state a couple of times a year I could rent an old fashioned car to do that and still be saving money when compared to buying petrol all the time.
Running Cost
Currently I’m paying about $0.28 per KWh for electricity, it’s reported that the efficiency of charging a LEAF is as low as 83% with the best efficiency when fast charging. I don’t own the fast charge hardware and don’t plan to install it as that would require getting a replacement of the connection to my home from the street, a new switchboard, and other expenses. So I expect I’ll be getting 83% efficiency when charging which means 48KWh for 200KM or 96KWH for the equivalent of a $110 tank of petrol. At $0.28/KWh it will cost $26 for the same amount of driving as $110 of petrol. I also anticipate saving money on service as there’s no need for engine oil changes and all the other maintenance of a petrol engine and regenerative braking will reduce the incidence of brake pad replacement.
I expect to save over $1100 per annum by using electricity instead of petrol even if I pay the full rate. But if I charge my car in the middle of the day when there is oversupply and I don’t get paid for feeding electricity from my solar panels into the grid (as is common nowadays), it could be almost free to charge the car and I could save about $1500 on fuel.
Comfort
Electric cars are much quieter than cars with petrol or Diesel engines which is a major luxury feature. This car is also significantly newer than any other car I’ve driven much so it has features like Bluetooth audio which weren’t in other cars I’ve driven. When doing 100Km/h I can hear a lot of noise from the airflow, part of that would be due to the LEAF not having the extreme streamlining features that are associated with Teslas (such as retracting door handles) and part of that would be due to the car being older and the door seals not being as good as they were when new. It’s still a very quiet car with a very smooth ride. It would be nice if they used the quality of seals and soundproofing that VW uses in the Passat but I guess the car would be heavier and have a shorter range if they did that.
This car has less space for the driver than any other car I’ve driven (with the possible exception of a 1989 Ford Laser AKA Mazda 323). The front seats have less space than the Prius. Also the batteries seem to be under the front seats so there’s a bulge in the floor going slightly in front of the front seats when they are moved back which gives less space for the front passenger to move their legs and less space for the driver when sitting in a parked car. There are a selection of electric cars from MG, BYD, and Great Wall that have more space in the front seats, if those cars were on the second hand market I might have made a different choice but a second hand LEAF is the only option for a cheap electric car in Australia now.
The heated steering wheel and heated seats took a bit of getting used to but I have come to appreciate the steering wheel and the heated seats are a good way of extending the range of the car.
Misc Notes
The LEAF is a fun car to drive and being quiet is a luxury feature, it’s no different to other EVs in this regard. It isn’t nearly as fast as a Tesla, but is faster than most cars actually drive on the road.
When I was looking into buying a LEAF from one of the car sales sites I was looking at models less than 5 years old. But the ZE1 series went from 2017 to 2023 so there’s probably not much difference between a 2019 model and a 2021 model, but there is a significant price difference. I didn’t deliberately choose a 2019 car, it was what a relative was selling at a time when I needed a new car. But knowing what I know now I’d probably look at that age of LEAF if choosing from the car sales sites.
Problems
When I turn the car off the side mirrors fold in but when I turn it on they usually don’t automatically unfold if I have anything connected to the cigarette lighter power port. This is a well known problem and documented on forums. This is something that Nissan really should have tested before release because phone chargers that connect to the car cigarette lighter port have been common for at least 6 years before my car was manufactured and at least 4 years before the ZE1 model was released.
The built-in USB port doesn’t supply enough power to match the power use of a Galaxy Note 9 running Google Maps and playing music through Bluetooth. On its own this isn’t a big deal, but combined with the mirror issue when using a charger in the cigarette lighter port it’s a problem.
The cover over the charging ports doesn’t seem to latch securely enough; I had it come open when doing 100Km/h on a freeway. This wasn’t a big deal, but as the cover opens in a suicide-door manner it could have broken off at a higher speed.
The word is that LEAF service in Australia is not done well. Why do you need regular service of an electric car anyway? For petrol and Diesel cars it’s engine oil replacement that makes it necessary to have regular service. Surely you can just drive it until either the brakes squeak or the tires seem worn.
I have been having problems charging, sometimes it will charge from ~20% to 100% in under 24 hours, sometimes in 14+ hours it only gets to 30%.
Conclusion
This is a good car and the going price on them is low. I generally recommend them as long as you aren’t really big and aren’t too worried about the poor security.
It’s a fun car to drive even with a few annoying things like the mirrors not automatically extending on start.
The older ones like this are cheap enough that they should be able to cover the entire purchase cost in 10 years through the savings from not buying petrol even if you don’t drive a lot. With a petrol car I use about 13 tanks of petrol a year so my driving is about half the average for Australia. Some people could cover the purchase price of a second hand LEAF in under 5 years.
Trying to send email. Email is hard. Configuration is hard. I don't remember how I send email properly. Trying to use git send-email for ages, and I think I am getting email bounces from random lists. SPF failure. Oh no.
Author: Hillary Lyon “I’d do it in a flash,” Jason declared, tightening the lid of the cocktail shaker. “Clone you, I mean. And how about you? What would you do?” In his hands, the shaker was a percussion instrument. The rhythm was enticing; it made Kerra want to dance. She gave him a teasing, crooked […]
Our anonymous submitter, whom we'll call Craig, worked for GlobalCon. GlobalCon relied on an offshore team on the other side of the world for adding/removing users from the system, support calls, ticket tracking, and other client services. One day at work, an urgent escalated ticket from Martin, the offshore support team lead, fell into Craig's queue. Seated before his cubicle workstation, Craig opened the ticket right away:
The new GlobalCon support website is not working. Appears to have been taken over by ChatGPT. The entire support team is blocked by this.
Instead of feeling any sense of urgency, Craig snorted out loud from perverse amusement.
"What was that now?" The voice of Nellie, his coworker, wafted over the cubicle wall that separated them.
"Urgent ticket from the offshore team," Craig replied.
"What is it this time?" Nellie couldn't suppress her glee.
"They're dead in the water because the new support page was, quote, taken over by ChatGPT."
Nellie laughed out loud.
"Hey! I know humor is important to surviving this job." A level, more mature voice piped up behind Craig from the cube across from his. It belonged to Dana, his manager. "But it really is urgent if they're all blocked. Do your best to help, escalate to me if you get stuck."
"OK, thanks. I got this," Craig assured her.
He was already 99.999% certain that no part of their web domain had gone down or been conquered by a belligerent AI, or else he would've heard of it by now. To make sure, Craig opened support.globalcon.com in a browser tab: sure enough, it worked. Martin had supplied no further detail, no logs or screenshots or videos, and no steps to reproduce, which was sadly typical of most of these escalations. At a loss, Craig took a screenshot of the webpage, opened the ticket, and posted the following: Everything's fine on this end. If it's still not working for you, let's do a screenshare.
Granted, a screensharing session was less than ideal given the 12-hour time difference. Craig hoped that whatever nefarious shenanigans ChatGPT had allegedly committed were resolved by now.
The next day, Craig received an update. Still not working. The entire team is still blocked. We're too busy to do a screenshare, please resolve ASAP.
Craig checked the website again with both laptop and phone. He had other people visit the website for him, trying different operating systems and web browsers. Every combination worked. Two things mystified him: how was the entire offshore team having this issue, and how were they "too busy" for anything if they were all dead in the water? At a loss, Craig attached an updated screenshot to the ticket and typed out the best CYA response he could muster. The new support website is up and has never experienced any issues. With no further proof or steps to reproduce this, I don't know what to tell you. I think a screensharing session would be the best thing at this point.
The next day, Martin parroted his last message almost word for word, except this time he assented to a screensharing session, suggesting the next morning for himself.
It was deep into the evening when Craig set up his work laptop on his kitchen counter and started a call and session for Martin to join. "OK. Can you show me what you guys are trying to do?"
To his surprise, he watched Martin open up Microsoft Teams first thing. From there, Martin accessed a chat to the entire offshore support team from the CPO of GlobalCon. The message proudly introduced the new support website and outlined the steps for accessing it. One of those steps was to visit support.globalcon.com.
The web address was rendered as blue outlined text, a hyperlink. Craig observed Martin clicking the link. A web browser opened up. Lo and behold, the page that finally appeared was www.chatgpt.com.
Craig blinked with surprise. "Hang on! I'm gonna take over for a second."
Upon taking control of the session, Craig switched back to Teams and accessed the link's details. The link text was correct, but the link destination was ChatGPT. It seemed like a copy/paste error that the CPO had tried to fix, not realizing that they'd needed to do more than simply update the link text.
"This looks like a bad link," Craig said. "It got sent to your entire team. And all of you have been trying to access the support site with this link?"
"Correct," Martin replied.
Craig was glad he couldn't be seen frowning and shaking his head. "Lemme show you what I've been doing. Then you can show everyone else, OK?"
After surrendering control of the session, Craig patiently walked Martin through the steps of opening a web browser, typing support.globalcon.com into the address bar, and hitting Return. The site opened without any issue. From there, Craig taught Martin how to create a bookmark for it.
"Just click on that from now on, and it'll always take you to the right place," Craig said. "In the future, before you click on any hyperlink, make sure you hover your mouse over it to see where it actually goes. Links can be labeled one thing when they actually take you somewhere else. That's how phishing works."
"Oh," Martin said. "Thanks!"
The call ended on a positive note, but left Craig marveling at the irony of lecturing the tech support lead on Internet 101 in the dead of night.
Author: Julian Miles, Staff Writer The bright lights look the same. Sitting myself down on the community server bench, I lean back until my spine hits the backrest. My gear starts charging. Diagnostics start scrolling down the inner bars of both eyes. The trick is not to try and read them. You’ll only give yourself […]
It's a holiday in the US today, so we're taking a long weekend. We flip back to a classic story of a company wanting to fill 15 different positions by hiring only one person. It's okay, Martin handles the database. Original - Remy
A curious email arrived in Phil's Inbox. "Windows Support Engineer required. Must have experience of the following:" and then a long list of Microsoft products.
Phil frowned. The location was convenient and the salary was fine; just the list of software seemed somewhat intimidating. Nevertheless, he replied to the agency saying that he was interested in applying for the position.
A few days later, Phil met Jason, the guy from the recruitment agency, in a hotel foyer. "It's a young, dynamic company," the recruiter explained. "They're growing really fast. They've got tons of funding and their BI Analysis Suite is positioning them to be a leading player in their field."
Phil nodded. "Ummm, I'm a bit worried about this list of products," he said, referring to the job description. "I've never dealt with Microsoft Proxy Server 1.0, and I haven't dealt with Windows 95 OSR2 for a long while."
"Don't worry," Jason assured, "The Director is more an idea man. He just made a list of everything he's ever heard of. You'll just be supporting Windows Server 2003 and their flagship application."
Phil winced. He was a vanilla network administrator – supporting a custom app wasn't quite what he was looking for, but he desperately wanted to get out of his current job.
A few days later, Phil arrived for his interview. The company had rented smart offices on a new business park on the edge of town. He was ushered into the conference room, where he was joined by The Director and The Manager.
"So", said The Manager. "You've seen our brochure?"
"Yeah", said Phil, glancing at the glossy brochure in front of him with bright, Barbie-pink lettering all over it.
"You've seen a demo version of our application – what do you think?"
"Well, I think that it's great!", said Phil. He'd done his research – there were over 115 companies offering something very similar, and theirs wasn't anything special. "I particularly like the icons."
"Wonderful!" The Director cheered while firing up PowerPoint. "These are our servers. We rent some rack space in a data center 100 miles away." Phil looked at the projected picture. It showed a rack of a dozen servers.
"They certainly look nice." said Phil. They did look nice – brand new with green lights.
"Now, we also rent space in another data center on the other side of the country," The Manager added.
"This one is in a former cold-war bunker!" he said proudly. "It's very secure!" Phil looked up at another photo of some more servers.
"What we want the successful applicant to do is to take care of the servers on a day to day basis, but we also need to move those servers to the other data center", said The Director. "Without any interruption of service."
"Also, we need someone to set up the IT for the entire office. You know, email, file & print, internet access – that kind of thing. We've got a dozen salespeople starting next week, they'll all need email."
"And we need it to be secure."
"And we need it to be documented."
Phil scribbled notes as best he could while the interviewing duo tag-teamed him with questions.
"You'll also provide second line support to end users of the application."
"And day-to-day IT support to our own staff. Any questions?"
Phil looked up. "Ah… which back-end database does the application use?" he asked, expecting the answer would be SQL Server or perhaps Oracle, but The Director's reply surprised him.
"Oh, we wrote our own database from scratch. Martin wrote it." Phil realized his mouth was open, and shut it. The Director saw his expression, and explained. "You see, off the shelf databases have several disadvantages – the data gets fragmented, they're not quick enough, and so on. But don't have to worry about that – Martin takes care of the database. Do you have any more questions?"
Phil frowned. "So, to summarize: you want a data center guy to take care of your servers. You want someone to migrate the application from one data center to another, without any outage. You want a network administrator to set up, document and maintain an entire network from scratch. You want someone to provide internal support to the staff. And you want a second line support person to support the our flagship application."
"Exactly", beamed The Director paternally. "We want one person who can do all those things. Can you do that?"
Phil took a deep breath. "I don't know," he replied, and that was the honest answer.
"Right", The Manager said. "Well, if you have any questions, just give either of us a call, okay?"
Moments later, Phil was standing outside, clutching the garish brochure with the pink letters. His head was spinning. Could he do all that stuff? Did he want to? Was Martin a genius or a madman to reinvent the wheel with the celebrated database?
In the end, Phil was not offered the job and decided it might be best to stick it out at his old job for a while longer. After all, compared to Martin, maybe his job wasn't so bad after all.
One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.
A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.
It would be hard for U.S. users to avoid the Chinese VPNs. The ownership of many appeared deliberately opaque, with several concealing their structure behind layers of offshore shell companies. TTP was able to determine the Chinese ownership of the 20 VPN apps being offered to Apple’s U.S. users by piecing together corporate documents from around the world. None of those apps clearly disclosed their Chinese ownership.
Author: Lydia Cline He had always had a quiet appreciation for blue. Not loudly, he would never be as conformist as to declare a love for, like, the number one colour for boys and men. No – he was loud in his love for green – the thinking man’s blue. And yet, as he stared […]
Nathan Gardels – editor of Noema magazine – offers in an issue a glimpse of the latest philosopher with a theory of history, or historiography. One that I'll briefly critique soon, as it relates much to today's topic. But first...
In a previous issue, Gardels offered valuable and wise insights about America’s rising cultural divide, leading to what seems to be a rancorous illiberal democracy.
Any glance at the recent electoral stats shows that while race & gender remain important issues, they did not affect outcomes as much as a deepening polar divide between America’s social castes, especially the less-educated vs. more-educated.
Although he does not refer directly to Marx, he is talking about a schism that my parents understood... between an advanced proletariat and an ignorant lumpenproletariat.
Hey, this is not another of my finger-wagging lectures, urging you all to at least understand some basic patterns that the WWII generation knew very well, when they designed the modern world.
Still, you could start with Nathan's essay...
...though alas, in focusing on that divide, I'm afraid Nathan accepts an insidious premise. Recall that there is a third party to this neo-Marxian class struggle, that so many describe as simply polar.
== Start by stepping way back ==
There’s a big context, rooted in basic biology. Nearly all species have their social patterns warped by male reproductive strategies, mostly by males applying power against competing males.
(Regrettable? Sure. Then let's overrule Nature by becoming better. But that starts by looking at and understanding the hand that evolution dealt us.)
Among humans, this manifested for much more than 6000 years as feudal dominance by local gangs, then aristocracies, and then kings intent upon one central goal -- to ensure that their sons would inherit power.
Looking across all that time, till the near-present, I invite you to find any exceptions among societies with agriculture. That is, other than Periclean Athens and (maybe) da Vinci's Florence. This pattern - dominating nearly all continents and 99% of cultures across those 60 centuries - is a dismal litany of malgovernance called 'history'. (And now it dominates the myths conveyed by Hollywood.)
Alas, large-scale history is never (and I mean never) discussed these days, even though variants of feudalism make up the entire backdrop -- the default human condition -- against which our recent Enlightenment has been a miraculous - but always threatened - experimental alternative.
The secret sauce of the Enlightenment, described by Adam Smith and established (at first crudely) by the U.S. Founders, consists of flattening the caste-order. Breaking up power into rival elites -- siccing them against each other in fair competition, and basing success far less on inheritance than other traits.
That, plus the empowerment of new players... an educated meritocracy in science, commerce, civil service and even the military. And gradually helping the children of the poor and former slaves to participate.
This achievement did augment with each generation – way too slowly, but incrementally – till the World War II Greatest Generation’s GI Bill and massive universities and then desegregation took it skyward, making America truly the titan of all ages and eras.
Karl Marx - whose past-oriented appraisals of class conflict were brilliant - proved to be a bitter, unimaginative dope when it came to projecting forward the rise of an educated middle class...
…which was the great innovation of the Roosevelteans, inviting the working classes into a growing and thriving middle class...
... an unexpected move that consigned Marx to the dustbin for 80 years...
... till his recent resurrection all around the globe, for reasons given below.
== There are three classes tussling here, not two ==
Which brings us to where Nathan Gardels’s missive is just plain wrong, alas. Accepting a line of propaganda that is now universally pervasive, he asserts that two – and only two – social classes are involved in a vast, socially antagonistic and polar struggle.
Are the lower middle classes (lumpenproletariat) currently at war against 'snooty fact elites'? Sure, they are! But so many post-mortems of the recent U.S. election blame the fact-professionals themselves, for behaving in patronizing ways toward working stiffs.
Meanwhile, such commentaries leave out entirely any mention of a 3rd set of players...
... the oligarchs, hedge lords, inheritance brats, sheiks and “ex”-commissars who have united in common cause. Those who stand most to benefit from dissonance within the bourgeoisie!
Elites who have been the chief beneficiaries of the last 40 years of 'supply side' and other tax grifts. Whose wealth disparities long ago surpassed those preceding the French Revolution. Many of whom are building lavish ‘prepper bunkers.' And who now see just one power center blocking their path to complete restoration of the default human system – feudal rule by inherited privilege.
That obstacle to feudal restoration? The fact professionals, whose use of science, plus rule-of-law and universities – plus uplift of poor children - keeps the social flatness prescription of Adam Smith alive.
And hence, those elites lavishly subsidize a world campaign to rile up lumpenprol resentment against science, law, medicine, civil servants... and yes, now the FBI and Intel and military officer corps.
A campaign that's been so successful that the core fact of this recent election – the way all of the adults in the first Trump Administration denounced him – is portrayed as a feature by today’s Republicans, rather than a fault. And yes, that is why none of the new Trump Appointees will ever, ever be adults-in-the-room.
== The ultimate, ironic revival of Marx, by those who should fear him most ==
Seriously. You can't see this incitement campaign in every evening's tirades, on Fox? Or spuming across social media, where ‘drinking the tears of know-it-alls’ is the common MAGA victory howl?
A hate campaign against snobby professionals that is vastly more intensive than any snide references to race or gender?
I beg you to control your gorge and actually watch Fox etc. Try actually counting the minutes spent exploiting the natural American SoA reflex (Suspicion of Authority) that I discuss in Vivid Tomorrows. A reflex which could become dangerous to oligarchs, if ever it turned on them!
And hence it must be diverted into rage and all-out war vs. all fact-using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.
To be clear, there are some professionals who have behaved stupidly, looking down their noses at the lower middle class.
Just as there are poor folks who appreciate their own university-educated kids, instead of resenting them.
And yes, there are scions of inherited wealth or billionaires (we know more than a few!) who are smart and decent enough to side with an Enlightenment that's been very good to them.
Alas, the agitprop campaign that I described here has been brilliantly successful, including massively popular cultural works extolling feudalism as the natural human forms of governance. (e.g. Tolkien, Dune, Star Wars, Game of Thrones... and do you seriously need more examples in order to realize that it's deliberate?)
They aren’t wrong! Feudalism is the ‘natural’ form of human governance.
In fact, its near universality may be a top theory to explain the Fermi Paradox!
… A trap/filter that prevents any race from rising to the stars.
== Would I rather not have been right? ==
One of you pointed out that "Paul Krugman's post today echoes Dr B's warnings about MAGA vs Science":
"But why do our new rulers want to destroy science in America? Sadly, the answer is obvious: Science has a tendency to tell you things you may not want to hear. .... And one thing we know about MAGA types is that they are determined to hold on to their prejudices. If science conflicts with those prejudices, they don’t want to know, and they don’t want anyone else to know either."
Krugman is the smartest current acolyte of Hari Seldon. Except maybe for Robert Reich. And still, they don't see the big picture.
== Stop giving the first-estate a free pass ==
And so, I conclude.
Whenever you find yourself discussing class war between the lower proletariats and snooty bourgeoisie, remember that the nomenclature – so strange and archaic-sounding, today – was quite familiar to our parents and grandparents.
Moreover, it included a third caste! The almost perpetual winners, across 600 decades. The bane on fair competition that was diagnosed by both Adam Smith and Karl Marx. And one that's deeply suicidal, as today's moguls - masturbating to the chants of flatterers - seem determined to repeat every mistake that led their predecessors to tumbrels and guillotines.
With some exceptions – those few who are truly noble of mind and heart – they are right now busily resurrecting every Marxian scenario from the grave…
… or from torpor where they had been cast by the Roosevelteans.
And the rich fools are doing so by fomenting longstanding cultural grudges for – or against – modernity.
The same modernity that gave them everything they have and that laid all of their golden eggs.
If anything proves the inherent stupidity of that caste – (most of them) - it is their ill-education about Marx! And what he will mean to new generations, if the Enlightenment cannot be recharged and restored enough to put old Karl back to sleep.
Author: Emily Kinsey “Jessie! Get over here, I think I found something!” Annoyed, Jessie said, “You always think you found something.” “It smells good,” I offered, hoping to entice him. It worked, because Jessie only ever cares about his stomach. He discarded his half-gnawed jerky and hobbled over to inspect my findings. “What’d you think […]
Underqualified
Mike S. is suffering a job hunt. "I could handle uD83D and uDC77 well enough, but I am a little short of uD83C and the all-important uDFFE requirement."
Frank forecasts frustration. "The weather app I'm using seems to be a bit confused on my location as I'm on vacation right now." It would be a simple matter for the app to simply identify each location, if it can't meaningfully choose only one.
Marc Würth is making me hungry. Says Marc, "I was looking through my Evernote notes for "transactional" (email service). It didn't find anything. Evernote, though, tried to be helpful and thought I was looking for some basil (German "Basilikum")."
"That is not the King," Brendan commented. "I posted this on Discord, and my friend responded with 'They have succeeded in alignment. Their AI is truly gender blind.'" Not only gender-blind but apparently also existence-blind as well. I think the Bard might have something quotable here as well, but it escapes me. Comment section is open.
Author: Stephen C. Curro Veema peered through the glass pod at their latest subject. The human was young, perhaps eighteen years by his species’ standards. Her four eyes noted physical traits and the style of clothing. “Flannel shirt. Denim pants. Heavy boots. This one was hiking?” “Camping,” Weez replied. “The trap caught his backpack, too. […]
The U.S. government today unsealed criminal charges against 16 individuals accused of operating and selling DanaBot, a prolific strain of information-stealing malware that has been sold on Russian cybercrime forums since 2018. The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware.
DanaBot’s features, as promoted on its support site. Image: welivesecurity.com.
Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud.
Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform.
The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. “JimmBee,” and Artem Aleksandrovich Kalinkin, 34, a.k.a. “Onix”, both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is “Maffiozi.”
According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot — emerging in January 2021 — was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia.
“Unindicted co-conspirators would use the Espionage Variant to compromise computers around the world and steal sensitive diplomatic communications, credentials, and other data from these targeted victims,” reads a grand jury indictment dated Sept. 20, 2022. “This stolen data included financial transactions by diplomatic staff, correspondence concerning day-to-day diplomatic activity, as well as summaries of a particular country’s interactions with the United States.”
The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.
“In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware,” the criminal complaint reads. “In other cases, the infections seemed to be inadvertent – one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake.”
Image: welivesecurity.com
A statement from the DOJ says that as part of today’s operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team CYMRU, and ZScaler.
It’s not unheard of for financially-oriented malicious software to be repurposed for espionage. A variant of the ZeuS Trojan, which was used in countless online banking attacks against companies in the United States and Europe between 2007 and at least 2015, was for a time diverted to espionage tasks by its author.
As detailed in this 2015 story, the author of the ZeuS trojan created a custom version of the malware to serve purely as a spying machine, which scoured infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents.
The public charging of the 16 DanaBot defendants comes a day after Microsoft joined a slew of tech companies in disrupting the IT infrastructure for another malware-as-a-service offering — Lumma Stealer, which is likewise offered to affiliates under tiered subscription prices ranging from $250 to $1,000 per month. Separately, Microsoft filed a civil lawsuit to seize control over 2,300 domain names used by Lumma Stealer and its affiliates.
This article gives a good rundown of the security risks of Windows Recall, and the repurposed copyright protection tool that Signal used to block the AI feature from scraping Signal data.
Author: Rebecca Hamlin Green I honestly didn’t know where else to turn or if you’re even accepting these requests yourself. I hope you hear me out at least. The day she came, she was perfect, she really was. I almost couldn’t believe it. Everything I thought I knew was, somehow, irrelevant and profound at the […]
Mark sends us a very simple Java function which has the job of parsing an integer from a string. Now, you might say, "But Java has a built in for that, Integer.parseInt," and have I got good news for you: they actually used it. It's just everything else they did wrong.
This function is really the story of variable i, the most useless variable ever. It's doing its best, but there's just nothing for it to do here.
We start by setting i to zero. Then we attempt to parse the integer, and do nothing with the result. If it fails, we set i to zero again, just for fun, and then return i. Why not just return 0? Because then what would poor i get to do?
Assuming we didn't throw an exception, we parse the input again, storing its result in i, and then return i. Again, we treat i like a child who wants to help paint the living room: we give it a dry brush and a section of wall we're not planning to paint and let it go to town. Nothing it does matters, but it feels like a participant.
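The original snippet isn't reproduced in this copy of the article, but based on that description it presumably looked roughly like the following sketch (the parameter name and exception-handling details here are assumptions, not the actual source):

// Hypothetical reconstruction of the described makeInteger; names are guesses.
public static int makeInteger(String s) {
    int i = 0;                     // i starts at zero
    try {
        Integer.parseInt(s);       // parse once and throw away the result
    } catch (NumberFormatException nfe) {
        i = 0;                     // reassign zero "just for fun"
        return i;
    }
    i = Integer.parseInt(s);       // parse the same string a second time
    return i;
}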
Now, Mark went ahead and refactored this function basically right away, into a more terse and clear version:
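That refactored version isn't shown here either, but a minimal sketch of such a terse rewrite (not necessarily Mark's exact code) would be:

// Parse once; fall back to zero if the string isn't a valid integer.
public static int makeInteger(String s) {
    try {
        return Integer.parseInt(s);
    } catch (NumberFormatException nfe) {
        return 0;
    }
}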
He went about his development work, and then a few days later came across makeInteger, reverted back to its original version. For a moment, he wanted to be mad at someone for reverting his change, but no- this was in an entirely different class. With that information, Mark went and did a search for makeInteger in the code, only to find 39 copies of this function, with minor variations.
There are an unknown number of copies of the function where the name is slightly different than makeInteger, but a search for Integer.parseInt implies that there may be many more.
Technology and innovation have transformed every part of society, including our electoral experiences. Campaigns are spending and doing more than at any other time in history. Ever-growing war chests fuel billions of voter contacts every cycle. Campaigns now have better ways of scaling outreach methods and offer volunteers and donors more efficient ways to contribute time and money. Campaign staff have adapted to vast changes in media and social media landscapes, and use data analytics to forecast voter turnout and behavior.
Yet despite these unprecedented investments in mobilizing voters, overall trust in electoral health, democratic institutions, voter satisfaction, and electoral engagement has significantly declined. What might we be missing?
In software development, the concept of user experience (UX) is fundamental to the design of any product or service. It’s a way to think holistically about how a user interacts with technology. It ensures that products and services are built with the users’ actual needs, behaviors, and expectations in mind, as opposed to what developers think users want. UX enables informed decisions based on how the user will interact with the system, leading to improved design, more effective solutions, and increased user satisfaction. Good UX design results in easy, relevant, useful, positive experiences. Bad UX design leads to unhappy users.
This is not how we normally think of elections. Campaigns measure success through short-term outputs—voter contacts, fundraising totals, issue polls, ad impressions—and, ultimately, election results. Rarely do they evaluate how individuals experience this as a singular, messy, democratic process. Each campaign, PAC, nonprofit, and volunteer group may be focused on their own goal, but the voter experiences it all at once. By the time they’re in line to vote, they’ve been hit with a flood of outreach—spammy texts from unfamiliar candidates, organizers with no local ties, clunky voter registration sites, conflicting information, and confusing messages, even from campaigns they support. Political teams can point to data that justifies this barrage, but the effectiveness of voter contact has been steadily declining since 2008. Intuitively, we know this approach has long-term costs. To address this, let’s evaluate the UX of an election cycle from the point of view of the end user, the everyday citizen.
Specifically, how might we define the UX of an election cycle: the voter experience (VX)? A VX lens could help us see the full impact of the electoral cycle from the perspective that matters most: the voters’.
For example, what if we thought about elections in terms of questions like these?
How do voters experience an election cycle, from start to finish?
How do voters perceive their interactions with political campaigns?
What aspects of the election cycle do voters enjoy? What do they dislike? Do citizens currently feel fulfilled by voting?
If voters “tune out” of politics, what part of the process has made them want to not pay attention?
What experiences decrease the number of eligible citizens who register and vote?
Are we able to measure the cumulative impacts of political content interactions over the course of multiple election cycles?
Can polls or focus groups help researchers learn about longitudinal sentiment from citizens as they experience multiple election cycles?
If so, what would we want to learn in order to bolster democratic participation and trust in institutions?
Thinking in terms of VX can help answer these questions. Moreover, researching and designing around VX could help identify additional metrics, beyond traditional turnout and engagement numbers, that better reflect the collective impact of campaigning: of all those voter contact and persuasion efforts combined.
This isn’t a radically new idea, and earlier efforts to embed UX design into electoral work yielded promising early benefits. In 2020, a coalition of political tech builders created a Volunteer Experience program. The group held design sprints for political tech tools, such as canvassing apps and phone banking sites. Their goal was to apply UX principles to improve the volunteer user flow, enhance data hygiene, and improve volunteer retention. If a few sprints can improve the phone banking experience, imagine the transformative possibilities of taking this lens to the VX as a whole.
If we want democracy to thrive long-term, we need to think beyond short-term wins and table stakes. This isn’t about replacing grassroots organizing or civic action with digital tools. Rather, it’s about learning from UX research methodology to build lasting, meaningful engagement that involves both technology and community organizing. Often, it is indeed local, on-the-ground organizers who have been sounding the alarm about the long-term effects of prioritizing short-term tactics. A VX approach may provide additional data to bolster their arguments.
Learnings from a VX analysis of election cycles could also guide the design of new programs that not only mobilize voters (to contribute, to campaign for their candidates, and to vote), but also ensure that the entire process of voting, post-election follow-up, and broader civic participation is as accessible, intuitive, and fulfilling as possible. Better voter UX will lead to more politically engaged citizens and higher voter turnout.
VX methodology may help combine real-time citizen feedback with centralized decision-making. Moving beyond election cycles, focusing on the citizen UX could accelerate possibilities for citizens to provide real-time feedback, review the performance of elected officials and government, and receive help-desk-style support with the same level of ease as other everyday “products.” By understanding how people engage with civic life over time, we can better design systems for citizens that strengthen participation, trust, and accountability at every level.
Our hope is that this approach, and the new data and metrics uncovered by it, will support shifts that help restore civic participation and strengthen trust in institutions. With citizens oriented as the central users of our democratic systems, we can build new best practices for fulfilling civic infrastructure that foster a more effective and inclusive democracy.
The time for this is now. Despite hard-fought victories and lessons learned from failures, many people working in politics privately acknowledge a hard truth: our current approach isn’t working. Every two years, people build campaigns, mobilize voters, and drive engagement, but they are held back by what they don’t understand about the long-term impact of their efforts. VX thinking can help solve that.
I run my own mail server. I have run it since about 1995, initially on a 28k8 modem connection but the connection improved as technology became cheaper and now I’m running it on a VM on a Hetzner server which is also running domains for some small businesses. I make a small amount of money running mail services for those companies but generally not enough to make it profitable. From a strictly financial basis I might be better off just using a big service, but I like having control over my own email. If email doesn’t arrive I can read the logs to find out why.
I repeatedly have issues with big services not accepting mail. The most recent is the MS services claiming that my IP has a bad ratio of good mail to spam and blocking me, so I had to tunnel that through a different IP address. It seems that the way things are going is that if you run a small server, companies like MS can block you even though your amount of spam is low, but if you run a large scale service that is horrible for sending spam, then you don't get blocked.
Most users just use one of the major email services (Gmail or Microsoft) and find that no-one blocks them, because those providers are too big to block, and things mostly work. Until, of course, the company decides to cancel their account.
What we need is for each independent jurisdiction to have its own email infrastructure; that means controlling DNS servers for their domains, commercial and government mail services on those domains, and running the servers for those services on hardware located in the jurisdiction, run by people based in that jurisdiction and citizens of it. I say independent jurisdiction because there are groups like the EU which have sufficient harmony of laws to not require different services. With the current EU arrangements I don’t think it’s possible for the German government to block French people from accessing email or vice versa.
While Australia and New Zealand have a long history of cooperation there’s still the possibility of a lying asshole like Scott Morrison trying something on so New Zealanders shouldn’t feel safe using services run in Australia. Note that Scott Morrison misled his own parliamentary colleagues about what he was doing and got himself assigned as a secret minister [2] demonstrating that even conservatives can’t trust someone like him. With the ongoing human rights abuses by the Morrison government it’s easy to imagine New Zealand based organisations that protect human rights being treated by the Australian government in the way that the ICC was treated by the US government.
The Problem with Partial Solutions
Now it would be very easy for the ICC to host their own mail servers and they probably will do just that in the near future. I’m sure that there are many companies offering to set them up accounts in a hurry to deal with this (probably including some of the Dutch companies I’ve worked for). Let’s imagine for the sake of discussion that the ICC has their own private server, the US government could compel Google and MS to block the IP addresses of that server and then at least 1/3 of the EU population won’t get mail from them. If the ICC used email addresses hosted on someone else’s server then Google and MS could be compelled to block the addresses in question for the same result. The ICC could have changing email addresses to get around block lists and there could be a game of cat and mouse between the ICC and the US government but that would just be annoying for everyone.
The EU needs to have services hosted and run in their jurisdiction that are used by the vast majority of the people in the country. The more people who are using services outside the control of hostile governments the lesser the impact of bad IT policies by those hostile governments.
One possible model to consider is the Postbank model. Postbank is a bank run in the Netherlands from post offices which provides services to people deemed unprofitable for the big banks. If the post offices were associated with a mail service you could have it government subsidised providing free service for citizens and using government ID if the user forgets their password. You could also have it provide a cheap service for non-citizen residents.
Other Problems
What will the US government do next? Will they demand that Apple and Google do a remote-wipe on all phones run by ICC employees? Are they currently tracking all ICC employees by Android and iPhone services?
Huawei’s decision to develop their own phone OS was a reasonable one, but there’s no need to go that far. Other governments could set up their own equivalent to Google Play services for Android and have their own localised Android build. Even a small country like Australia could get this going for services such as calendaring. But the app store needs a bigger market. There’s no reason why Android has to tie the app store to the services for calendaring etc. So you could have a per country system for calendaring and a per region system for selling apps.
The invasion of Amazon services such as Alexa is also a major problem for digital sovereignty. We need government controls on this sort of thing; maybe have high tariffs on the import of all hardware that can only work with a single cloud service. Have 100+% tariffs on every phone, home automation system, or networked device that is either tied to a single cloud service or which can’t work in a usable manner on other cloud services.
Frank inherited some code that reads URLs from a file, and puts them into a collection. This is a delightfully simple task. What could go wrong?
static String[] readFile(String filename) {
    String record = null;
    Vector vURLs = new Vector();
    int recCnt = 0;
    try {
        FileReader fr = new FileReader(filename);
        BufferedReader br = new BufferedReader(fr);
        record = new String();
        while ((record = br.readLine()) != null) {
            vURLs.add(new String(record));
            //System.out.println(recCnt + ": " + vURLs.get(recCnt));
            recCnt++;
        }
    } catch (IOException e) {
        // catch possible io errors from readLine()
        System.out.println("IOException error reading " + filename + " in readURLs()!\n");
        e.printStackTrace();
    }
    System.out.println("Reading URLs ...\n");
    int arrCnt = 0;
    String[] sURLs = new String[vURLs.size()];
    Enumeration eURLs = vURLs.elements();
    for (Enumeration e = vURLs.elements(); e.hasMoreElements(); ) {
        sURLs[arrCnt] = (String) e.nextElement();
        System.out.println(arrCnt + ": " + sURLs[arrCnt]);
        arrCnt++;
    }
    if (recCnt != arrCnt++) {
        System.out.println("WARNING: The number of URLs in the input file does not match the number of URLs in the array!\n\n");
    }
    return sURLs;
} // end of readFile()
So, we start by using a FileReader and a BufferedReader, which is the basic pattern any Java tutorial on file handling will tell you to do.
What I see here is that the developer responsible didn't fully understand how strings work in Java. They initialize record to a new String() only to immediately discard that reference in their while loop. They also copy the record by doing a new String which is utterly unnecessary.
As they load the Vector of strings, they also increment a recCnt variable, which is superfluous since the collection can tell you how many elements are in it.
Once the Vector is populated, they need to copy all this data into a String[]. Instead of using the toArray function, which is built in and does that, they iterate across the Vector and put each element into the array.
As they build the array, they increment an arrCnt variable. Then, they do a check: if (recCnt != arrCnt++). Look at that line. Look at the post-increment on arrCnt, despite never using arrCnt again. Why is that there? Just for fun, apparently. Why is this check even there?
The only way it's possible for the counts to not match is if somehow an exception was thrown after vURLs.add(new String(record)); but before recCnt++, which doesn't seem likely. Certainly, if it happens, there's something worse going on.
Now, I'm going to be generous and assume that this code predates Java 8- it just looks old. But it's worth noting that in Java 8, the BufferedReader class got a lines() function which returns a Stream<String> that can be converted directly to an array via toArray, making all of this code superfluous. But then, so much of this code is just superfluous anyway.
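For comparison, here is a hedged sketch of that Java 8+ version (illustrative only, not the article's code); even pre-Java 8, the manual copy loop could have been a single vURLs.toArray(new String[0]) call:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// A compact modern equivalent of readFile: lines() streams the file, and
// toArray replaces the Vector, the counters, and the copy loop in one step.
static String[] readFile(String filename) throws IOException {
    try (BufferedReader br = new BufferedReader(new FileReader(filename))) {
        return br.lines().toArray(String[]::new);
    }
}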
Anyway, for a fun game, start making the last use of every variable be a post-increment before it goes out of scope. See how many code reviews you can sneak it through!
Author: David Barber Inhabitants of Earth, (read a translation of the first signal from the stars) Our clients wish to bring to your attention a copyright infringement with regards to your use of replicator molecules. As the dominant species on your world, we hereby inform you to cease and desist using DNA except under licence […]
I already knew about the declining response rate for polls and surveys. The percentage of survey responses generated by AI bots is also increasing.
Solutions are hard:
1. Make surveys less boring.
We need to move past bland, grid-filled surveys and start designing experiences people actually want to complete. That means mobile-first layouts, shorter runtimes, and maybe even a dash of storytelling. TikTok or dating app style surveys wouldn’t be a bad idea or is that just me being too much Gen Z?
2. Bot detection.
There’s a growing toolkit of ways to spot AI-generated responses—using things like response entropy, writing style patterns or even metadata like keystroke timing. Platforms should start integrating these detection tools more widely. Ideally, you introduce an element that only humans can do, e.g., you have to pick up your price somewhere in-person. Btw, note that these bots can easily be designed to find ways around the most common detection tactics such as Captcha’s, timed responses and postcode and IP recognition. Believe me, way less code than you suspect is needed to do this.
3. Pay people more.
If you’re only offering 50 cents for 10 minutes of mental effort, don’t be surprised when your respondent pool consists of AI agents and sleep-deprived gig workers. Smarter, dynamic incentives—especially for underrepresented groups—can make a big difference. Perhaps pay-differentiation (based on simple demand/supply) makes sense?
4. Rethink the whole model.
Surveys aren’t the only way to understand people. We can also learn from digital traces, behavioral data, or administrative records. Think of it as moving from a single snapshot to a fuller, blended picture. Yes, it’s messier—but it’s also more real.
KrebsOnSecurity last week was hit by a near record distributed denial-of-service (DDoS) attack that clocked in at more than 6.3 terabits of data per second (a terabit is one trillion bits of data). The brief attack appears to have been a test run for a massive new Internet of Things (IoT) botnet capable of launching crippling digital assaults that few web destinations can withstand. Read on for more about the botnet, the attack, and the apparent creator of this global menace.
For reference, the 6.3 Tbps attack last week was ten times the size of the assault launched against this site in 2016 by the Mirai IoT botnet, which held KrebsOnSecurity offline for nearly four days. The 2016 assault was so large that Akamai – which was providing pro-bono DDoS protection for KrebsOnSecurity at the time — asked me to leave their service because the attack was causing problems for their paying customers.
Since the Mirai attack, KrebsOnSecurity.com has been behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content. Google Security Engineer Damian Menscher told KrebsOnSecurity the May 12 attack was the largest Google has ever handled. In terms of sheer size, it is second only to a very similar attack that Cloudflare mitigated and wrote about in April.
After comparing notes with Cloudflare, Menscher said the botnet that launched both attacks bears the fingerprints of Aisuru, a digital siege machine that first surfaced less than a year ago. Menscher said the attack on KrebsOnSecurity lasted less than a minute, hurling large UDP data packets at random ports at a rate of approximately 585 million data packets per second.
“It was the type of attack normally designed to overwhelm network links,” Menscher said, referring to the throughput connections between and among various Internet service providers (ISPs). “For most companies, this size of attack would kill them.”
A graph depicting the 6.5 Tbps attack mitigated by Cloudflare in April 2025. Image: Cloudflare.
The Aisuru botnet comprises a globally-dispersed collection of hacked IoT devices, including routers, digital video recorders and other systems that are commandeered via default passwords or software vulnerabilities. As documented by researchers at QiAnXin XLab, the botnet was first identified in an August 2024 attack on a large gaming platform.
Aisuru reportedly went quiet after that exposure, only to reappear in November with even more firepower and software exploits. In a January 2025 report, XLab found the new and improved Aisuru (a.k.a. “Airashi“) had incorporated a previously unknown zero-day vulnerability in Cambium Networks cnPilot routers.
NOT FORKING AROUND
The people behind the Aisuru botnet have been peddling access to their DDoS machine in public Telegram chat channels that are closely monitored by multiple security firms. In August 2024, the botnet was rented out in subscription tiers ranging from $150 per day to $600 per week, offering attacks of up to two terabits per second.
“You may not attack any measurement walls, healthcare facilities, schools or government sites,” read a notice posted on Telegram by the Aisuru botnet owners in August 2024.
Interested parties were told to contact the Telegram handle “@yfork” to purchase a subscription. The account @yfork previously used the nickname “Forky,” an identity that has been posting to public DDoS-focused Telegram channels since 2021.
According to the FBI, Forky’s DDoS-for-hire domains have been seized in multiple law enforcement operations over the years. Last year, Forky said on Telegram he was selling the domain stresser[.]best, which saw its servers seized by the FBI in 2022 as part of an ongoing international law enforcement effort aimed at diminishing the supply of and demand for DDoS-for-hire services.
“The operator of this service, who calls himself ‘Forky,’ operates a Telegram channel to advertise features and communicate with current and prospective DDoS customers,” reads an FBI seizure warrant (PDF) issued for stresser[.]best. The FBI warrant stated that on the same day the seizures were announced, Forky posted a link to a story on this blog that detailed the domain seizure operation, adding the comment, “We are buying our new domains right now.”
A screenshot from the FBI’s seizure warrant for Forky’s DDoS-for-hire domains shows Forky announcing the resurrection of their service at new domains.
Approximately ten hours later, Forky posted again, including a screenshot of the stresser[.]best user dashboard, instructing customers to use their saved passwords for the old website on the new one.
A review of Forky’s posts to public Telegram channels — as indexed by the cyber intelligence firms Unit 221B and Flashpoint — reveals a 21-year-old individual who claims to reside in Brazil [full disclosure: Flashpoint is currently an advertiser on this blog].
Since late 2022, Forky’s posts have frequently promoted a DDoS mitigation company and ISP that he operates called botshield[.]io. The Botshield website is connected to a business entity registered in the United Kingdom called Botshield LTD, which lists a 21-year-old woman from Sao Paulo, Brazil as the director. Internet routing records indicate Botshield (AS213613) currently controls several hundred Internet addresses that were allocated to the company earlier this year.
Domaintools.com reports that botshield[.]io was registered in July 2022 to a Kaike Southier Leite in Sao Paulo. A LinkedIn profile by the same name says this individual is a network specialist from Brazil who works in “the planning and implementation of robust network infrastructures, with a focus on security, DDoS mitigation, colocation and cloud server services.”
MEET FORKY
Image: Jaclyn Vernace / Shutterstock.com.
In his posts to public Telegram chat channels, Forky has hardly attempted to conceal his whereabouts or identity. In countless chat conversations indexed by Unit 221B, Forky could be seen talking about everyday life in Brazil, often remarking on the extremely low or high prices in Brazil for a range of goods, from computer and networking gear to narcotics and food.
Reached via Telegram, Forky claimed he was “not involved in this type of illegal actions for years now,” and that the project had been taken over by other unspecified developers. Forky initially told KrebsOnSecurity he had been out of the botnet scene for years, only to concede this wasn’t true when presented with public posts on Telegram from late last year that clearly showed otherwise.
Forky denied being involved in the attack on KrebsOnSecurity, but acknowledged that he helped to develop and market the Aisuru botnet. Forky claims he is now merely a staff member for the Aisuru botnet team, and that he stopped running the botnet roughly two months ago after starting a family. Forky also said the woman named as director of Botshield is related to him.
Forky offered equivocal, evasive responses to a number of questions about the Aisuru botnet and his business endeavors. But on one point he was crystal clear:
“I have zero fear about you, the FBI, or Interpol,” Forky said, asserting that he is now almost entirely focused on their hosting business — Botshield.
Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.
DomainTools finds the same Sao Paulo street address in the registration records for botshield[.]io was used to register several other domains, including cant-mitigate[.]us. The email address in the WHOIS records for that domain is forkcontato@gmail.com, which DomainTools says was used to register the domain for the now-defunct DDoS-for-hire service stresser[.]us, one of the domains seized in the FBI’s 2023 crackdown.
On May 8, 2023, the U.S. Department of Justice announced the seizure of stresser[.]us, along with a dozen other domains offering DDoS services. The DOJ said ten of the 13 domains were reincarnations of services that were seized during a prior sweep in December, which targeted 48 top stresser services (also known as “booters”).
Forky claimed he could find out who attacked my site with Aisuru. But when pressed a day later on the question, Forky said he’d come up empty-handed.
“I tried to ask around, all the big guys are not retarded enough to attack you,” Forky explained in an interview on Telegram. “I didn’t have anything to do with it. But you are welcome to write the story and try to put the blame on me.”
THE GHOST OF MIRAI
The 6.3 Tbps attack last week caused no visible disruption to this site, in part because it was so brief — lasting approximately 45 seconds. DDoS attacks of such magnitude and brevity typically are produced when botnet operators wish to test or demonstrate their firepower for the benefit of potential buyers. Indeed, Google’s Menscher said it is likely that both the May 12 attack and the slightly larger 6.5 Tbps attack against Cloudflare last month were simply tests of the same botnet’s capabilities.
In many ways, the threat posed by the Aisuru/Airashi botnet is reminiscent of Mirai, an innovative IoT malware strain that emerged in the summer of 2016 and successfully out-competed virtually all other IoT malware strains in existence at the time.
As first revealed by KrebsOnSecurity in January 2017, the Mirai authors were two U.S. men who co-ran a DDoS mitigation service — even as they were selling far more lucrative DDoS-for-hire services using the most powerful botnet on the planet.
Less than a week after the Mirai botnet was used in a days-long DDoS against KrebsOnSecurity, the Mirai authors published the source code to their botnet so that they would not be the only ones in possession of it in the event of their arrest by federal investigators.
Ironically, the leaking of the Mirai source is precisely what led to the eventual unmasking and arrest of the Mirai authors, who went on to serve probation sentences that required them to consult with FBI investigators on DDoS investigations. But that leak also rapidly led to the creation of dozens of Mirai botnet clones, many of which were harnessed to fuel their own powerful DDoS-for-hire services.
Menscher told KrebsOnSecurity that as counterintuitive as it may sound, the Internet as a whole would probably be better off if the source code for Aisuru became public knowledge. After all, he said, the people behind Aisuru are in constant competition with other IoT botnet operators who are all striving to commandeer a finite number of vulnerable IoT devices globally.
Such a development would almost certainly cause a proliferation of Aisuru botnet clones, he said, but at least then the overall firepower from each individual botnet would be greatly diminished — or at least within range of the mitigation capabilities of most DDoS protection providers.
Barring a source code leak, Menscher said, it would be nice if someone published the full list of software exploits being used by the Aisuru operators to grow their botnet so quickly.
“Part of the reason Mirai was so dangerous was that it effectively took out competing botnets,” he said. “This attack somehow managed to compromise all these boxes that nobody else knows about. Ideally, we’d want to see that fragmented out, so that no [individual botnet operator] controls too much.”
Combining Java with lower-level bit manipulations is asking for trouble- not because the language is inadequate to the task, but because so many of the developers who work in Java are so used to working at a high level they might not quite "get" what they need to do.
Victor inherited one such project, which used bitmasks and bitwise operations a great deal, based on the network protocol it implemented. Here's how the developers responsible created their bitmasks:
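The snippet itself didn't survive in this copy of the article, but based on the discussion that follows it was presumably something along these lines (a reconstruction with assumed details, not the original code):

// Hypothetical reconstruction: a hex "constant" built by parsing a string,
// and not even declared final.
public static long FFFFFFFF = Long.parseLong("FFFFFFFF", 16);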
So, the first thing that's important to note is that Java does support hex literals, so 0xFFFFFFFF is a perfectly valid literal. So we don't need to create a string and parse it. But we also don't need to make a constant simply named FFFFFFFF, which is just the old twenty = 20 constant pattern: technically you've made a constant but you haven't actually made the magic number go away.
Of course, this also isn't actually a constant, so it's entirely possible that FFFFFFFF could hold a value which isn't 0xFFFFFFFF.
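For contrast, the conventional version would be a real compile-time constant built from a hex literal (the constant name below is made up):

// All 32 low bits set, no string parsing involved.
private static final long ALL_BITS_MASK = 0xFFFFFFFFL;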
Author: Fawkes Defries Stuck out in the black sand, lodged between trunks of thin stone, Kayt lit life to her cigarette and drew the clear smoke in. Her silicon eyes fluttered between the deactivated droid she’d excavated from the Rubble and her sister’s body lying opposite. Naeva had been deep in the rot dead for […]
A DoorDash driver stole over $2.5 million over several months:
The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office.
Interesting flaw in the software design. He probably would have gotten away with it if he’d kept the numbers small. It’s only when the amount missing is too big to ignore that the investigations start.
Weirdly, this is the second time the NSA has declassified the document. John Young got a copy in 2019. This one has fewer redactions. And nothing that was provided in 2019 was redacted here.
If you find anything interesting in the document, please tell us about it in the comments.
Kate inherited a system where Java code generates JavaScript (by good old fashioned string concatenation) and embeds it into an output template. The Java code was written by someone who didn't fully understand Java, but JavaScript was also a language they didn't understand, and the resulting unholy mess was buggy and difficult to maintain.
While trying to debug the JavaScript, Kate had to dig through the generated code, which led to this little representative line:
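The line itself isn't reproduced here, so the snippet below is an invented stand-in that only mimics its general shape: a semicolon-delimited, dash-riddled element ID handed to byId (the real ID was far longer and stranger):

// Invented stand-in for the generated line; only the shape is faithful.
byId('12;fileadmin-templates-main-fileadmin-content-page').isLocked = true;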
The byId function is an alias to the browser's document.getElementById function. The ID on display here is clearly generated by the Java code, resulting in an absolutely cursed ID for an element in the page. The semicolons are field separators, which means you can parse the ID to get other information about it. I have no idea what the 12 means, but it clearly means something. Then there's that long kebab-looking string. It seems like maybe some sort of hierarchy information? But maybe not, because fileadmin appears twice? Why are there so many dashes? If I got an answer to that question, would I survive it? Would I be able to navigate the world if I understood the dark secret of those dashes? Or would I have to give myself over to our Dark Lords and dedicate my life to bringing about the end of all things?
Like all good representative lines, this one hints at darker, deeper evils in the codebase- the code that generates (or parses) this ID must be especially cursed.
The only element which needs to have its isLocked attribute set to true is the developer responsible for this: they must be locked away before they harm the rest of us.
Author: Julian Miles, Staff Writer Kinswaller reads the report with a mounting feeling of doom: another failure, this time with casualties on both sides. The appended note from the monitoring A.I. cements the feeling. ‘Have recommended Field Combat Intervention. Combat zone and planetary data was requested. It has been supplied. In response, an Operative has […]