Planet Russell


365 Tomorrows: Black Hole Ethics

Author: David Barber

Over the centuries it had become a tradition for the Immortal Emperor to wash away His guilt in ceremonies at the Schwarzschild radius. Since nothing escaped the event horizon of a black hole, awful secrets could be whispered there, cruelties, mistakes and bad karma consigned to the singularity, and history begun afresh. […]

The post Black Hole Ethics appeared first on 365tomorrows.

Planet Debian: Russell Coker: Solving Spam and Phishing for Corporations

Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I’m currently working in the IT department of a medium sized company, and because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) and I fix all the Linux compatibility issues on it, I can fix most problems in a small fraction of the time it would take me on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it’s clear that a good deal of centralisation is appropriate.

Among people doing technical computer work such as programming, a large portion are computer hobbyists who like to fiddle with computers. But if the support system is run well, even they will appreciate having computers just work most of the time, and having someone immediately recognise the problem for a large portion of the failures, like the NVidia driver issues that I have documented so that first line support can implement workarounds without the need for a lengthy investigation.

A big problem with email in the modern Internet is the prevalence of Phishing scams. The current corporate approach to this is to send out test Phishing email to people and then force computer security training on everyone who clicks on them. One problem with this is that attackers only need to fool one person on one occasion and when you have hundreds of people doing something on rare occasions that’s not part of their core work they will periodically get it wrong. When every test Phishing run finds several people who need extra training it seems obvious to me that this isn’t a solution that’s working well. I will concede that the majority of people who click on the test Phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it’s still clear that this isn’t a great solution.

Let’s imagine for the sake of discussion that everyone in a company was 100% accurate at identifying Phishing email and other scam email. If that was the case, would the problem be solved? I believe that even in that hypothetical case it would not be a solved problem, due to the wasted time and concentration. People can spend minutes determining if a single email is legitimate. On many occasions I have had relatives and clients forward me email because they are unsure if it’s valid; it’s great that they seek expert advice when they are unsure about things, but it would be better if they didn’t have to go to that effort. What we ideally want to do is centralise the anti-Phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or Phishing message is sent to 600 people in a company you don’t want 600 people to individually consider it, you want one person to recognise it and delete/block all 600. If 600 people each spend one minute considering the matter then that’s 10 work hours wasted!

The Rationale for Human Filtering

For personal email human filtering usually isn’t viable because people want privacy. But corporate email isn’t private, it’s expected that the company can read it under certain circumstances (in most jurisdictions) and having email open in public areas of the office where colleagues might see it is expected. You can visit gmail.com on your lunch break to read personal email but every company policy (and common sense) says to not have actually private correspondence on company systems.

The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company, instead of 500 people each spending a couple of minutes working out whether it’s legit, you have one person who’s good at recognising spam (because it’s their job) who clicks on a “remove mail from this sender from all mailboxes” button, 500 messages are deleted, and the sender is blocked.
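
To make this concrete, here is a minimal Python sketch of what such a button might do on a server that stores one Maildir per user. The directory layout and the blocklist file are assumptions for illustration, not a description of any particular mail system:

#!/usr/bin/python3
"""Sketch: delete a recognised spam sender's mail from every mailbox and
block future mail from them.  The Maildir layout and the blocklist file
are hypothetical."""

import email.utils
import mailbox
from pathlib import Path

MAILDIR_ROOT = Path("/var/mail/maildirs")       # assumed: one Maildir per user
BLOCKLIST = Path("/etc/mail/blocked-senders")   # assumed: the MTA reads this file

def remove_sender_everywhere(sender: str) -> int:
    """Remove all mail from one sender across all user mailboxes."""
    sender = sender.lower()
    removed = 0
    for user_dir in MAILDIR_ROOT.iterdir():
        md = mailbox.Maildir(str(user_dir), create=False)
        for key in list(md.keys()):
            _, addr = email.utils.parseaddr(md[key].get("From", ""))
            if addr.lower() == sender:
                md.remove(key)
                removed += 1
    # Block the sender for the future.
    with BLOCKLIST.open("a") as f:
        f.write(sender + "\n")
    return removed

if __name__ == "__main__":
    print(remove_sender_everywhere("scammer@example.net"), "messages removed")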

Delaying email would be a concern. But it’s standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention, so human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company, email to the senior people would probably never get noticeably delayed. While people like me would get their mail delayed on occasion, people doing technical work generally don’t have notifications turned on for email, because it’s a distraction and a fast response isn’t needed. There are a few senders where a fast response is required, mostly corporations sending a “click this link within 10 minutes to confirm your password change” email. Setting up rules for all such senders that are relevant to work wouldn’t be difficult to do.

How to Solve This

Spam and Phishing became serious problems over 20 years ago, and we have had 20 years of evolution of email filtering which still hasn’t solved the problem. The vast majority of email addresses in use are run by major managed service providers, and they haven’t managed to filter out spam/phishing mail effectively, so I think we should assume that it’s not going to be solved by filtering. There is talk about what “AI” technology might do for filtering spam/phishing, but that same technology can produce better crafted hostile email to avoid filters.

An additional complication for corporate email filtering is that some criteria that are used to filter personal email don’t apply to corporate mail. If someone sends email to me personally about millions of dollars then it’s obviously not legit. If someone sends email to a company then it could be legit. Companies routinely have people emailing potential clients about how their products can save millions of dollars and make purchases over a million dollars. This is not a problem that’s impossible to solve, it’s just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure their mail server to not accept direct external mail for any employee’s address. Then people could email files to colleagues etc without any restriction, but spam and phishing wouldn’t be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department) and a team of people who would read those mailboxes and forward mail to the right people if it seemed legit. Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account, where it could be filtered by someone with a suitable security level without requiring any special configuration of the mail server. So the person who reads the it mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs).
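
As a rough sketch of that routing logic in Python (everything here is illustrative: the department map would really come from a directory, and the DMARC result from an Authentication-Results header added by the mail server):

#!/usr/bin/python3
"""Sketch of the screening-account routing described above.  All names,
data sources and the dmarc_pass() stub are assumptions for illustration."""

import email.utils
from email.message import Message

ALLOWED_SENDERS = {"noreply@dell.com"}    # per-address approvals
ALLOWED_DOMAINS = {"dell.com"}            # domain-wide approvals
DEPARTMENT = {"russell.coker": "it"}      # really from a directory lookup

def dmarc_pass(msg: Message) -> bool:
    # Stand-in for checking the Authentication-Results header that the
    # mail server is assumed to add after DMARC verification.
    return "dmarc=pass" in msg.get("Authentication-Results", "")

def route(msg: Message, recipient_local: str) -> str:
    """Return the local address a message should be delivered to."""
    _, sender = email.utils.parseaddr(msg.get("From", ""))
    sender = sender.lower()
    domain = sender.rsplit("@", 1)[-1]
    if dmarc_pass(msg) and (sender in ALLOWED_SENDERS or domain in ALLOWED_DOMAINS):
        return recipient_local            # known good sender, straight to the user
    # Otherwise deliver to a folder of the department screening account,
    # e.g. the folder "russell.coker" of the "it" mailbox, for human review.
    dept = DEPARTMENT.get(recipient_local, "reception")
    return f"{dept}+{recipient_local}"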

For a FOSS implementation of such things, the server side (including extracting account data from a directory to determine which department a user is in) would be about a day’s work. An option would then be to modify a webmail program to have extra functionality for approving senders, sending change requests to the server to automatically direct future mail from the same sender. As an aside, I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email and to add certain automated messages to the allow-list.
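
The directory lookup itself is trivial. Here is a sketch using the Python ldap3 module, where the server name, bind credentials, DNs and attribute name are all placeholders for whatever the real directory uses:

#!/usr/bin/python3
"""Sketch: determine a user's department from an LDAP directory.
Requires the ldap3 package; every name below is a placeholder."""

from ldap3 import Server, Connection

def department_of(mail_address: str) -> str:
    conn = Connection(Server("ldap.example.com"),
                      "cn=mailfilter,dc=example,dc=com", "secret",
                      auto_bind=True)
    conn.search("ou=people,dc=example,dc=com",
                f"(mail={mail_address})", attributes=["ou"])
    # Fall back to the reception screening account for unknown users.
    return str(conn.entries[0].ou) if conn.entries else "reception"

if __name__ == "__main__":
    print(department_of("russell.coker@example.com"))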

The Change

One of the first things to do is to configure the system to add every recipient of an outbound message to the allow list for receiving a reply. Having a script go through the sent-mail folders of all accounts and add the recipients to the allow lists would be easy and would catch the common cases.
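
A first cut of that script might look like the following sketch, which assumes Maildir storage with a .Sent folder per user and a one-address-per-line allow list:

#!/usr/bin/python3
"""Sketch: seed the allow list with everyone we previously sent mail to.
The Maildir layout and the allow-list location are assumptions."""

import email.utils
import mailbox
from pathlib import Path

MAILDIR_ROOT = Path("/var/mail/maildirs")
ALLOW_LIST = Path("/etc/mail/allowed-senders")

def harvest_recipients() -> set:
    """Collect every To/Cc address from every user's sent-mail folder."""
    addresses = set()
    for user_dir in MAILDIR_ROOT.iterdir():
        if not (user_dir / ".Sent").is_dir():
            continue
        sent = mailbox.Maildir(str(user_dir / ".Sent"), create=False)
        for msg in sent:
            values = msg.get_all("To", []) + msg.get_all("Cc", [])
            for _, addr in email.utils.getaddresses(values):
                if addr:
                    addresses.add(addr.lower())
    return addresses

if __name__ == "__main__":
    with ALLOW_LIST.open("a") as f:
        for addr in sorted(harvest_recipients()):
            f.write(addr + "\n")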

But even with processing the sent mail folders, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain wide additions of all the sites that send password confirmation messages. You would need rules to direct inbound mail from the old addresses to the new style, and then a huge amount of mail would need to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user then that’s 100 hours of work, about 12 work days. If you had everyone from the IT department, reception, and executive assistants working on it that would be viable. After about a week there wouldn’t be much work involved in maintaining it. Then after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email then with 600 employees that’s 10 hours of wasted time per day. That’s effectively wasting one employee’s work! I’m sure that’s the low end of the range; 5 minutes average per day doesn’t seem unreasonable, especially when people are unsure about phishing email and send it to Slack so multiple employees spend time analysing it. So hostile email could effectively be wasting the work of 5 employees, while avoiding that would take a fraction of the time of a few people, adding up to less than an hour of total work per day.

Then there’s the training time for phishing mail. Instead of having every employee spend half an hour doing email security training every few months (that’s 300 hours or 7.5 working weeks every time you do it) you just train the few experts.

In addition to saving time there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.

Will They Do It?

They probably won’t do it any time soon. I don’t think it’s expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it’s probably regarded as too difficult to change anything and the costs aren’t obvious. For 30 years I have been unsuccessfully suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time.

Krebs on Security: Fintech Giant Finastra Investigating Data Breach

The financial technology firm Finastra is investigating the alleged large-scale theft of information from its internal file transfer platform, KrebsOnSecurity has learned. Finastra, which provides software and services to 45 of the world’s top 50 banks, notified customers of the security incident after a cybercriminal began selling more than 400 gigabytes of data purportedly stolen from the company.

London-based Finastra has offices in 42 countries and reported $1.9 billion in revenues last year. The company employs more than 7,000 people and serves approximately 8,100 financial institutions around the world. A major part of Finastra’s day-to-day business involves processing huge volumes of digital files containing instructions for wire and bank transfers on behalf of its clients.

On November 8, 2024, Finastra notified financial institution customers that on Nov. 7 its security team detected suspicious activity on Finastra’s internally hosted file transfer platform. Finastra also told customers that someone had begun selling large volumes of files allegedly stolen from its systems.

“On November 8, a threat actor communicated on the dark web claiming to have data exfiltrated from this platform,” reads Finastra’s disclosure, a copy of which was shared by a source at one of the customer firms.

“There is no direct impact on customer operations, our customers’ systems, or Finastra’s ability to serve our customers currently,” the notice continued. “We have implemented an alternative secure file sharing platform to ensure continuity, and investigations are ongoing.”

But its notice to customers does indicate the intruder managed to extract or “exfiltrate” an unspecified volume of customer data.

“The threat actor did not deploy malware or tamper with any customer files within the environment,” the notice reads. “Furthermore, no files other than the exfiltrated files were viewed or accessed. We remain focused on determining the scope and nature of the data contained within the exfiltrated files.”

In a written statement in response to questions about the incident, Finastra said it has been “actively and transparently responding to our customers’ questions and keeping them informed about what we do and do not yet know about the data that was posted.” The company also shared an updated communication to its clients, which said while it was still investigating the root cause, “initial evidence points to credentials that were compromised.”

“Additionally, we have been sharing Indicators of Compromise (IOCs) and our CISO has been speaking directly with our customers’ security teams to provide updates on the investigation and our eDiscovery process,” the statement continues. Here is the rest of what they shared:

“In terms of eDiscovery, we are analyzing the data to determine what specific customers were affected, while simultaneously assessing and communicating which of our products are not dependent on the specific version of the SFTP platform that was compromised. The impacted SFTP platform is not used by all customers and is not the default platform used by Finastra or its customers to exchange data files associated with a broad suite of our products, so we are working as quickly as possible to rule out affected customers. However, as you can imagine, this is a time-intensive process because we have many large customers that leverage different Finastra products in different parts of their business. We are prioritizing accuracy and transparency in our communications.

Importantly, for any customers who are deemed to be affected, we will be reaching out and working with them directly.”

On Nov. 8, a cybercriminal using the nickname “abyss0” posted on the English-language cybercrime community BreachForums that they’d stolen files belonging to some of Finastra’s largest banking clients. The data auction did not specify a starting or “buy it now” price, but said interested buyers should reach out to them on Telegram.

abyss0’s Nov. 7 sales thread on BreachForums included many screenshots showing the file directory listings for various Finastra customers. Image: Ke-la.com.

According to screenshots collected by the cyber intelligence platform Ke-la.com, abyss0 first attempted to sell the data allegedly stolen from Finastra on October 31, but that earlier sales thread did not name the victim company. However, it did reference many of the same banks called out as Finastra customers in the Nov. 8 post on BreachForums.

The original October 31 post from abyss0, where they advertise the sale of data from several large banks that are customers of a large financial software company. Image: Ke-la.com.

The October sales thread also included a starting price: $20,000. By Nov. 3, that price had been reduced to $10,000. A review of abyss0’s posts to BreachForums reveals this user has offered to sell databases stolen in several dozen other breaches advertised over the past six months.

The apparent timeline of this breach suggests abyss0 gained access to Finastra’s file sharing system at least a week before the company says it first detected suspicious activity, and that the Nov. 7 activity cited by Finastra may have been the intruder returning to exfiltrate more data.

Maybe abyss0 found a buyer who paid for their early retirement. We may never know, because this person has effectively vanished. The Telegram account that abyss0 listed in their sales thread appears to have been suspended or deleted. Likewise, abyss0’s account on BreachForums no longer exists, and all of their sales threads have since disappeared.

It seems improbable that both Telegram and BreachForums would have given this user the boot at the same time. The simplest explanation is that something spooked abyss0 enough for them to abandon a number of pending sales opportunities, in addition to a well-manicured cybercrime persona.

In March 2020, Finastra suffered a ransomware attack that sidelined a number of the company’s core businesses for days. According to reporting from Bloomberg, Finastra was able to recover from that incident without paying a ransom.

This is a developing story. Updates will be noted with timestamps. If you have any additional information about this incident, please reach out to krebsonsecurity @ gmail.com or at protonmail.com.

Planet Debian: Arnaud Rebillout: Installing an older Ansible version via pipx

Latest Ansible requires Python 3.8 on the remote hosts

... and therefore, hosts running Debian Buster are now unsupported.

Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:

$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:

ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]

Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.

How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.

Pipx to the rescue

TL;DR

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython    # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.

Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to set up Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.

First thing to know: pipx install ansible won't cut it, as it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands.

The output should look something like this:

$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done! ✨ 🌟 ✨

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.

Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.

Now, let's open a new terminal and check if we're good:

$ which ansible
/home/me/.local/bin/ansible

$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.

What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.

Injecting Python dependencies needed by collections

Quickly enough, I realized something odd: apparently the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:

# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
  "msg": "An unhandled exception occurred while running the lookup plugin 'dig'.
  Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig
  lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:

$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done! ✨ 🌟 ✨

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.

Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there's more quirks and rough edges, drop me an email so that I can update this blog post.

Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/


Planet Debian: Aurelien Jarno: AI crawlers should be smarter

It would be fantastic if all those AI companies dedicated some time to making their web crawlers smarter (what about using AI?). Nowadays most of them still stupidly follow every link on a Git frontend.

Hint: Changing the display options does not provide more training data!

Planet Debian: Melissa Wen: Display/KMS Meeting at XDC 2024: Detailed Report

XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.

Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.

Short on Time?

  1. Catch the lightning talk summarizing the meeting here (you can even speed it up 2x).
  2. For a quick written summary, scroll down to the TL;DR section.

TL;DR

This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting):

  • Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
  • Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling.
  • HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes.
  • Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
  • Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.

Bringing together Linux display developers at XDC 2024

While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.

Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)

Link: https://indico.freedesktop.org/event/6/contributions/383/

Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed talk follow-ups with new ideas and ongoing community efforts.

The final agenda covered five topics in the scheduled order:

  1. How to share drivers between V4L2 and DRM for bridge-like components (new topic);
  2. Real-time Scheduling (problems encountered after the Display Next hackfest);
  3. HDR/Color Management (ofc);
  4. Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
  5. (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).

Unpacking the Topics

Similar to the hackfest, the meeting agenda evolved over the conference. During the 3 hours of meeting, I coordinated the room and discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers to provide a detailed report of the many topics discussed.

From his notes, let’s dive into the key discussions!

How to share drivers between V4L2 and KMS for bridge-like components.

Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.

  • Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
  • Potential Solutions:
    1. Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
    2. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
    3. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.

Real-Time Scheduling Challenges

We discussed real-time scheduling during this year’s Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while progressing on this front.

  • Context: Non-blocking page-flips can, on rare occasions, take a long time and, for that reason, get a sigkill if the thread doing the atomic commit is under real-time scheduling.
  • Action items:
    • Explore alternative backtraces during the busy wait (e.g., ftrace).
    • Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).

HDR/Color Management

This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in-person in conferences and meetings over the last years.

Here’s a breakdown of the key points raised at this meeting:

  • Talk: Color operations for Linux color pipeline on AMD devices: On the previous day, Alex Hung (AMD) presented the implementation of this API on the AMD display driver.
  • NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs.
  • Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now.
  • Upstream Patches: The relevant upstream patches can be found here. [As was humorously noted, this series is eagerly awaiting your “Acked-by” (approval).]
  • Compositor Side: The compositor developers have also made significant progress.
    • KDE has already implemented and validated the API through an experimental implementation in Kwin.
    • Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. AP: work on porting gamescope to KMS generic API
    • Weston has also begun exploring implementation, and we might see something from them by the end of the year.
  • Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s effort, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place.

Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!

Display Mux

During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.

  • Key Considerations:
    • HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here.
    • Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured?
    • Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
  • Action points:
    • Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
    • Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.

Towards Better Commit Failure Feedback

In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.

  • Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.

To address this issue, we discussed several potential improvements:

  • Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
  • Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
  • Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution.

By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.

A Big Thank You!

Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.

This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.

Stay tuned for future updates!

365 Tomorrows: The Minbar of Saladin

Author: Majoki

“It was the most beautiful thing ever crafted.” “I’m sure it was, Akharini. But how can we steal it if it was destroyed almost seventy years ago?” Akharini stared at Nur. Though the hour was late and time was short, he wanted to tell him so much about the minbar of Saladin, of […]

The post The Minbar of Saladin appeared first on 365tomorrows.

Worse Than Failure: CodeSOD: Recursive Search

Sometimes, there's code so bad you simply know it's unused and never called. Bernard sends us one such method, in Java:

  /**
   * Finds a <code>GroupEntity</code> by group number.
   *
   * @param  group the group number.
   * @return the <code>GroupEntity</code> object.
   */
  public static GroupEntity find(String group) {
    return GroupEntity.find(group);
  }

This is a static method on the GroupEntity class called find, which calls a static method on the GroupEntity class called find, which calls a static method on the GroupEntity class called find and it goes on and on my friend.

Clearly, this is a mistake. Bernard didn't supply much more context, so perhaps the String was supposed to be turned into some other type, and there's an overload which would break the recursion. Regardless, there was an antediluvian ticket on the backlog requesting a feature to allow finding groups via a search input, which no one had yet worked on.

I'm sure they'll get around to it, once the first call finishes.


Cryptogram: Why Italy Sells So Much Spyware

Interesting analysis:

Although much attention is given to sophisticated, zero-click spyware developed by companies like Israel’s NSO Group, the Italian spyware marketplace has been able to operate relatively under the radar by specializing in cheaper tools. According to an Italian Ministry of Justice document, as of December 2022 law enforcement in the country could rent spyware for €150 a day, regardless of which vendor they used, and without the large acquisition costs which would normally be prohibitive.

As a result, thousands of spyware operations have been carried out by Italian authorities in recent years, according to a report from Riccardo Coluccini, a respected Italian journalist who specializes in covering spyware and hacking.

Italian spyware is cheaper and easier to use, which makes it more widely used. And Italian companies have been in this market for a long time.

,

Planet Debian: Dirk Eddelbuettel: RcppArmadillo 14.2.0-1 on CRAN: New Upstream Minor

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1191 other packages on CRAN, downloaded 37.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 603 times according to Google Scholar.

Conrad released a minor version 14.2.0 a few days ago after we spent about two weeks with several runs of reverse-dependency checks covering corner cases. After a short delay at CRAN (due to a false positive on a test, a package whose tests also failed under the previous version, and some concern over new deprecation warnings when using the headers directly, as e.g. the mlpack R package does) we are now on CRAN. I noticed a missing feature under large ‘64bit word’ (for large floating-point matrices) and added an exporter for icube going to double to support the 64-bit integer range (as we already did, of course, for vectors and matrices). Changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.0-1 (2024-11-16)

  • Upgraded to Armadillo release 14.2.0 (Smooth Caffeine)

    • Faster handling of symmetric matrices by inv() and rcond()

    • Faster handling of hermitian matrices by inv(), rcond(), cond(), pinv(), rank()

    • Added solve_opts::force_sym option to solve() to force the use of the symmetric solver

    • More efficient handling of compound expressions by solve()

  • Added exporter specialisation for icube for the ARMA_64BIT_WORD case

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David Brin: So, what lessons did we learn? And what does the future hold?

Amid the all the hand-wringing, or wailing jeremiads, or triumphant op-eds out there, I’ll offer in this election post-mortem some perspectives that you’ll not see elsewhere. 

      But, before that - some flash bits.

 

First: a few have suggested Biden resign to make Kamala president for a month or so. Other than shifting Trump v.2.0’s number from 47 to 48, it would only bias 2028 unnecessarily, by locking her in as heir. Nah.

 

Second. I reiterate, there is one thing that Joe Biden could do – right now – that would upset the DC apple cart, and (presumably) be very much not to the Trump-Putin party’s liking. Last week I laid out how Biden might still – even now - affect the USA and world. And human destiny.

Third flash bit … how about some prediction cred? I predicted that Donald Trump had learned to never again appoint professionals or adults to office. Nearly all grownups from Trump v.1 (over 250 of them) eventually denounced him. (That one fact alone should have decided the election.) Sure enough, his announced cabinet appointments are almost all unqualified maniacs. But there’s a triple purpose to that – which I’ll describe at the end.

 

But that’s not what you came here for, today. You’ve been wallowing in election post-mortems, agonizing over how it could have come to this. 

 

So, after that wallow in futility and clichés, would you like some fresh views that actually bear some relation to reality? Some may disturb you.

 

 

== So… um… W.T.H. just happened? ==

 

 As VoteVets.org (Veterans dedicated to democracy and the rule of law) put it Wednesday: “Moving forward and regaining the initiative requires us to confront the results of this election with open eyes.”

 

Okay, for a start, it does no good to wail stuff like: “Americans chose a fascist dictatorship because trans kids are icky. And we hate the idea of a black woman being president.”

 

Um, not even close. Nor was Kamala Harris a ‘bad candidate’ (she was actually very good!) Nor was it because she ‘only had 107 days.’ Seriously? The campaign lasted forever!

 

Indeed, all over the globe (for the first time, ever), every governing party in a democracy lost vote share. So… maybe the Stupid-Ray Beamed By Aliens theory should be trotted back out? No, never mind that.

 

WTH actually happened?

 

Well, the demographics are clear. Major historically-Democratic groups simply did not show up, or else actively went to the GOP. While some Black men defected for reasons such as machismo, they were mostly loyal, in the end.


But Hispanics far more crucially (and of both sexes) stayed home or switched sides. And will you drop the racist slander, claiming that they’re all sexist? The new president of Mexico is a woman, fer gosh sakes. 

         

As for the trans thing, it was just one of so many hatred dog whistles. Useful to Fox, and badly countered by the left. But it’s a side dish, compared to the Hispanic defection.


Plus the fact that even white women split more evenly than expected.

 

Then what happened? 

 

TWO WORDS kind of sum it up! Two different one-word chants! Used repeatedly. One by each side.

 

For one side, that word was abortion.

 

Sure, the incredibly aggressive fascist putsch against Roe-v.-Wade and women’s Right-To-Choose was obscene and deserved to be deeply motivating. 

        Only then the Harris campaign transformed it from being a political gift by the Supreme Court into a liability. From being just one useful word into a million

Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion. Abortion! And… Abortion!  (ad infinitum)

 

Dig it. All of the folks for whom that word was a deciding issue were already Kamala’s! Repeating it relentlessly and always - like an incantatory talisman - only made clear that hers would be a campaign exactly like Hillary Clinton’s -- led by and run by and aimed toward upper middle class white ladies. 

 

(And please? I voted for both Harris & Clinton and worked hard for them, in 2016, and again in 2024. We’re discussing tactics, here! And getting angry when failed tactics are criticized is a sure sign of a bad general.) 

 

Try asking outside that bell jar. After a while, each repetition (“abortion!!”) became tedious and hectoring to many. 


Especially to Hispanics, who -- may I remind you -- are mostly Catholics?

Who are capable of diverging from church doctrine… but did they need to be reminded of that cognitive dissonance hourly?

 

…Hispanic citizens who also proved very receptive to the other side’s talisman word. 

 

 ‘Immigration.’ 

 

This talisman worked to draw in support from fresh directions. Poll after poll show Hispanics want tighter borders! Yet, urban liberals refused to listen. Pompously finger-wagging that both Jesus and Deuteronomy preach kindness (they do!) toward the flood of refugees who are fleeing Trump ally elites in Honduras and Guatemala, they endlessly lectured and preached that the flood should ONLY be answered with kindness… and nothing but kindness.

 

… which is easy for you rich liberals to say. But Hispanic voters don’t want job competition. And your disapproval – calling them immoral when they shouted “No!” – helped to cement their shift.

 

 

== Immigration as a weapon vs. the West: It isn’t just America. ==

 

Did you ever wonder why right wing populism has surged in Europe?  Quasi-Nazism burgeoned there – and here – for one reason above all. Because Putin & pals have been driving refugees across western borders for 30 years, knowing that it’ll result – perfectly – in a rightward swerve of politics. 

 

You know this! It happened here. The tactic has now won Vladimir Putin the greatest victory of his life… that very likely saved his life! 

 

But you, yes you, have been unable to see it and draw two correct conclusions

 

First: you can’t have everything you want, not all at once. Politics requires prioritization. And hence when Obama and Biden built more border walls than Trump ever did, they ought to have bragged about it! And you should have bragged, too.

        Again, you cannot do all the good things on your list without POWER! And now, sanctimoniously refusing to prioritize has given total power to…

 

Second: and here’s a potential sweet spot for you: Want to solve the immigration crisis in the best way, to benefit both us and the poor refugees? 

 

Go after the Honduran/Guatemalan/Nicaraguan/Cuban/Venezuelan elites who are persecuting their own citizens and driving them – coordinated by Moscow - to flee to the U.S! 

 

Um, duh? Joe could still do that in his remaining time. He could! 

     But a blind spot is a blind spot…

     … and even now, YOU probably could not parse or paraphrase what I just said. About the possible win-win sweet spot. Go ahead and try. Bet you can’t even come close.

 

How much simpler to dismiss Brin as racist. And thus illustrate my point.

 

 

== More lessons learned… or not learned? ==

 

Polls showed that ECONOMICS were far more on people’s minds than abortion. In fact, in almost every meaningful category, the USA has, right now, the best economy in the whole world and one of the very best since WWII.


Oh sure, saying that was difficult for Democrats. It comes across as lectury, pedantic and tone deaf to those working class folks who have no rising 401K, but do have high grocery bills. Or to young families staring at skyrocketing housing prices. Meanwhile, everyone is so accustomed to a great labor market that unemployment is a forgotten issue.

 

But does that mean to give up?

In fact, Kamala tried to get across this difficult perception gap by promising to end gouging at supermarkets and pharmacies and bragging about lowered insulin costs. But all of that seems to admit we failed, till now. So, maybe accompany all that with ads showing all the bridges and other infrastructure being rebuilt, at last, and asking “Do you know someone working at fixing America this way? Ask THEM about it!”

 

I found the rebirth of US manufacturing - the biggest boom since WWII – to be especially effective.  

 

As for housing costs, I never saw one attempt to blame it on real culprits - swarms of inheritance brats and their trusts who are snapping up houses right and left in cash purchases, free to ignore mortgage rates. I mean seriously?

 

Okay, I admit it’s hard to sell cynical working stiffs glued to Fox on the Good Economy. I won’t carp too much on that. Instead…

 

Of course, there’s so much anger around and someone is gonna receive it. So notice that the core Foxite campaign – pushed VASTLY more often than any message of racism or sexism – is to blame college folks, inciting the hatred of non-college folks against them.

 

As I’ll say again, Kamala could have started changing this by pointing over there (as FDR did) at the real class enemies. The oligarchs who benefited from 40+ years of supply side and suck like lampreys from the hardworking U.S. middle class… both college and non-college.

 

 

 

== The insult that they deeply resent… repeated over and over again ==

 

Not one Democratic pol has ever pointed out that racism and sexism, while important ingredients in parts of the Red polity, are not their core agenda!  

 

Indeed, count how many of your friends and/or favorite pundits are ascribing this recent calamity to ‘embedded American racism and sexism’!

 

Sure, those despicable traits exist and matter a lot. And it’s easy for me to downgrade them when my life is in no danger because of a busted tail-light. 

 

Still, can you recognize an unhelpful mantra, when it is repeated way too much, crowding out all other thought?

 

As commentator Jamie Metzl put it: “There will be some people who awoke this morning telling themselves that the story of this election is primarily one of racism and misogyny. They are wrong. 

 

“Make no mistake, our country still harbors unacceptable levels of both, but that is not the story of this election. That is not who we are as a nation. We are the same nation that elected Barack Obama twice and would have likely have elected Nikki Haley, had she been the Republican candidate. Very many women and minorities voted for Trump. We need to look deeper.”

 

Indeed, howling “You’re racist!” at our red neighbors was likely counterproductive. They turn and point at black faces on Fox… and at the now-boring normality of inter-racial marriages… and more recently at ‘normal gays’… and reply:

 

“I don’t FEEL racist!  I LIKE the good ones!” 

 

 None of that means racism is over! But except for a nasty-Nazi wing, they have largely shifted on a lot of things. What it does mean is that a vast majority of Republicans feel insulated from their racism. 

 

It means that shrieking the R word over and over can be futile. It only makes your neighbors dig in and call you the race-obsessed oppressor.

 

 

==  The actual enemy ==

 

I mean, have you ever actually watched FOX and tallied their openly racist or even sexist crap… versus who it is they actually assail most often, and openly? Care for a side bet on this?

 

I’ve said it before and will pound this till you listen. While they downplay their own racism and sexism, what almost every MAGA Nuremberg Rally does feature is endless – and utterly open – attacks upon nerds.

 

The fact professions. From journalism to science to civil service to law to medicine to the FBI and the US military officer corps. 

        And NO Democrat has ever addressed that, head on! 

        Ever, ever, ever and ever.

        Instead of ever pointing this out, they assume that defending the fact professions will sound like smug bragging. 

 

But they’re wrong. And there are reasons why the all-out assault on all fact-professions is the core agenda underlying everything Republican/MAGA/Putinist.

        And someday – as I mention below – this will at last be admitted openly. 

       Alas, too late, as beginning on day one of Trump v2 there will commence an all-out terror campaign against that officer corps, against the FBI and especially the United States Civil Service. And science.

       And when that happens, point proudly and tell your children: “I helped do that.”

 

 

== The giddy joy in Moscow ==

 

Oh, there are MAGAs who write to me – on social media etc. - taunting and gloating about all this. To which I reply: 

 

“Enjoy your victory. Your pleasure is as a firefly glow, next to the giddy ecstasy in the Kremlin.”

A few comments furthering that.

 

a.     Jill Stein deserves the Order of Lenin. She likely already has one in her secret Moscow luxury dacha.

 

b.    Recall the Four Scenarios I projected, last Sunday? As I predicted in scenario 4: if Trump wins convincingly, he will surround himself with loyalists and this time no ‘adults in the room.’ No Kelly, Mattis, Esper, Milley, or even hacks with a brain, like Barr or Tillerson or Pence.

  

c.     What no one else has mentioned - or will - is how this cuts all puppet strings to Trump. Nothing Putin has on him… no pee tape, or snuff film, or video with Epstein kids… will matter anymore. Nor will blackmail files on Cruz or many others. All – even Lindsey Graham – will have their “I could shoot someone on 5th Avenue” moment. And when that happens…



 

        …there is a very real chance that Trump will feel liberated to tell Vlad to fuck himself. Or even take revenge on Putin for decades of puppetting control threats. I have repeatedly asked folks to learn from the wisdom of Hollywood! From the last ten minutes of CABARET. From Angela Lansbury’s soliloquy in the last half hour of THE MANCHURIAN CANDIDATE. From the oligarchs’ last resort in NETWORK. 

 

But no. I am dreaming. Putin will retain control. Even if blackmail ceases to work, there’s flattery, which is far more important to DT than anything else in the world. And liberals insanely ignore that.

 

 

== How will America respond to this Confederate conquest? ==

 

 

One of you, in our lively comments section below, said: 

“Trump is who we are and we are not the great people we used to be.”



 

Malarkey. As I described here today, midway through this phase of the ever-recurring Civil War, it seems the Union side’s generals keep firing in mistaken directions. But do not look at this as “America is irredeemable.” 

 

View it as “America has once again been conquered by the entirely separate Confederacy, the recurring romantic-feudalist cult that now has its Russian and other foreign allies. Actual America is now occupied by that other entity."

 

But recall that it consists of almost all of the inhabitants of this land who know stuff. And if our mad cousins push too hard, mass resignations by the competent will only be the beginning.

 

And no – even though we are the ones who know everything cyber, bio, medical, nano, nuclear and all that – it won’t come to any of that.

 

Watch instead – if they go after the civil service and officer corps – for the words GENERAL STRIKE. And let’s see how they do without (among 10,000 other taken-for-granted things) their ten-day weather reports. Especially when the parts of North America that will be the very worst-hit by climate Acts of God will be the Southeast.

 


== Why are there zero 'adults' in the newest DT administration? ==


      I promised to explain why Trump's announced cabinet appointments are almost all unqualified maniacs. There’s a triple purpose:


1. This will maximize flattery (what Trump lives for), but this time he's only chosen appointees who are blackmailed and controllable. In some cases, Russian assets now appointed atop U.S. intel and defense agencies.


2.  Unlike all the 'adults' in Trump v.1, this time every single person named will join in the coming all-out war against the FBI, military officer corps and U.S. Civil Service.


3. Unlike all the 'adults' in Trump v.1, none of these will ever denounce him in tell-all books.



== Side bits ==

 

Tariffs? Oh, dear oligarchs, try some wisdom from a surprising portion of Ferris Bueller’s Day Off! 


John Cramer points out that Joe Biden, as part of the Great Big Infrastructure Rebuild, boosted access of poor and rural areas to high speed Internet... "There is evidence that better access to the many disinformation sites shifted many rural counties from pink to deep red."


Also Cramer: "Botched Trumpian responses made Covid far worse." (And the best way for you to begin using wager demands would be to demand cash bets over DEATH RATES for the vaccinated vs. un-vaccinated.) "When COVID hit, Trump arranged to sign the big relief checks. Under Biden (who didn't sign the checks) this tapered too soon. Strapped voters remembered the 'good old days' when Trump sent checks and the grocery prices were lower." Hm, that seems a reach but...


Above all I reiterate, there is one thing that Joe Biden could do – right now – that would upset the DC apple cart, and (presumably) be very much not to the Trump-Putin party’s liking. Last week I laid out how Biden might still – even now - affect the USA and world. And human destiny.

 


 

== So, what lessons did we learn?  And what does the future hold?  ==

 

Geez, you’re asking me? My predictive score is way above average, but I truly thought a critical mass of Americans would reject an orange-painted, makeup-slathered raving-loony carnival barker. I was wrong about that…

 

… but it only shows how stoopid so many millions of sincerely generous and college-educated Americans are, for assuming they know who the gone-treasonously-mad right is oppressing.

 

Wake up, educated gals & guys and gays and every other variant under the sun. It’s not your diversity they are coming after. Nor the client races and genders you defend, many of whom just said ‘fuck off!’ to being protected by you.

 

The oligarchs and their minions have one enemy they aim to destroy.

 

It’s you.


Planet Debian: C.J. Collier: Managing HPE SAS Controllers

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.

hpacucli is the wrong command. Use ssacli instead.

$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
  882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
  57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
  476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}" 
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)

pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1

pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25 
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
  | sudo dd of=/etc/apt/sources.list.d/hpe.list status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature  (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST    HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238
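
One more that always slips out of cache: blinking a drive's locate LED so remote hands can find the right bay. I believe this is the correct syntax, but check ssacli help against your firmware version:

$ sudo ssacli ctrl slot=3 pd 2I:2:7 modify led=on
$ sudo ssacli ctrl slot=3 pd 2I:2:7 modify led=off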

Planet DebianPhilipp Kern: debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.

This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.

As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine or to a lot of machines at once that should be a nice compromise of usability vs. security.

The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.

Someone™ should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work, an ecdsa key - and make a backup copy of the resulting on-disk key file, copying it around to other devices (or OSes) that require access to the key.
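
For reference, generating such a key is a one-liner once a token is attached (a sketch assuming OpenSSH 8.2 or newer; the file names are only examples):

$ ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
$ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
$ ssh-keygen -t ed25519-sk -O resident -f ~/.ssh/id_ed25519_sk_res

The second form is for U2F-only tokens; the third stores a resident key on a FIDO2 token, which can later be loaded on another machine with ssh-keygen -K.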

Cryptogram Most of 2023’s Top Exploited Vulnerabilities Were Zero-Days

Zero-day vulnerabilities are more commonly used, according to the Five Eyes:

Key Findings

In 2023, malicious cyber actors exploited more zero-day vulnerabilities to compromise enterprise networks compared to 2022, allowing them to conduct cyber operations against higher-priority targets. In 2023, the majority of the most frequently exploited vulnerabilities were initially exploited as a zero-day, which is an increase from 2022, when less than half of the top exploited vulnerabilities were exploited as a zero-day.

Malicious cyber actors continue to have the most success exploiting vulnerabilities within two years after public disclosure of the vulnerability. The utility of these vulnerabilities declines over time as more systems are patched or replaced. Malicious cyber actors find less utility from zero-day exploits when international cybersecurity efforts reduce the lifespan of zero-day vulnerabilities.

Worse Than FailureCodeSOD: Objectified

Simon recently found himself working alongside a "very senior" developer- who had a whopping 5 years of experience. This developer was also aggrieved that in recent years, Object Oriented programming had developed a bad reputation. "Functional this, functional that, people really just don't understand how clean and clear objects make your code."

For example, here are a few Java objects which they wrote to power a web scraping tool:

class UrlHolder {

    private String url;

    public UrlHolder(String url) {
        this.url = url;
    }
}

class UrlDownloader {

    private UrlHolder url;
    public String downloadPage;

    public UrlDownLoader(String url) {
        this.url = new UrlHolder(Url);
    }
}

class UrlLinkExtractor {

   private UrlDownloader url;

   public UrlLinkExtractor(UrlDownloader url) {
        this.url = url;
   }

   public String[] extract() {
       String page = Url.downloadPage;
       ...
   }
}

UrlHolder is just a wrapper around string, but also makes that string private and provides no accessors. Anything shoved into an instance of that may as well be thrown into oblivion.

UrlDownloader wraps a UrlHolder, again, as a private member with no accessors. It also has a random public string called downloadPage.

UrlLinkExtractor wraps a UrlDownloader, and at least UrlLinkExtractor has a function- extract- which presumably pulls links out of the downloaded page. It uses UrlDownloader#downloadPage- the public string property. It doesn't use the UrlHolder, because of course it couldn't. The entire goal of this code is to pass a string to the extract function.
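
For contrast, the whole job these three classes perform- turning page text into an array of links- fits in one small class. A hypothetical sketch (the class name and regex are mine, not the senior developer's):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class LinkExtractor {
    // One function from page text to links; no wrapper chain required.
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]+)\"");

    public static String[] extract(String page) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(page);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links.toArray(new String[0]);
    }
}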

I guess I don't understand object oriented programming. I thought I did, but after reading this code, I don't.


365 TomorrowsOn the Slagpiles of Mars

Author: Julian Miles, Staff Writer The wind’s picking up, keening between the stacks. If it gets any stronger, we’ll have to retreat. “Alpha Seven, your favourite scout’s offline.” “How long?” “Nearly ten minutes.” Switching myself to wide-hail, I call out. “Team Seven, Scully’s dropped out. Who had last contact?” There’s rapid chatter back and forth. […]

The post On the Slagpiles of Mars appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Delilah Green Doesn't Care

Review: Delilah Green Doesn't Care, by Ashley Herring Blake

Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374

Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect.

Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes.

Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like.

Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her.

I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down.

It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into.

This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules.

This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate.

The encounter between Delilah and Claire in the bar near the start of the book is the cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other.

This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience.

I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth.

Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other.

Followed by Astrid Parker Doesn't Fail.

Rating: 9 out of 10

,

Planet DebianRuss Allbery: Review: Dark Deeds

Review: Dark Deeds, by Michelle Diener

Series: Class 5 #2
Publisher: Eclipse
Copyright: January 2016
ISBN: 0-6454658-4-4
Format: Kindle
Pages: 340

Dark Deeds is the second book of the self-published Class 5 science fiction romance series. It is a sequel to Dark Horse and will spoil the plot of that book, but it follows the romance series convention of switching to a new protagonist in the same universe and telling a loosely-connected story.

Fiona, like Rose in the previous book, was kidnapped by the Tecran in one of their Class 5 ships, although that's not entirely obvious at the start of the story. The book opens with her working as a slave on a Garmman trading ship while its captain works up the nerve to have her killed. She's spared this fate when the ship is raided by Krik pirates. Some brave fast-talking, and a touch of honor among thieves, lets her survive the raid and be rescued by a pursuing Grih battleship, with a useful electronic gadget as a bonus.

The author uses the nickname "Fee" for Fiona throughout this book and it was like nails on a chalkboard every time. I had to complain about that before getting into the review.

If you've read Dark Horse, you know the formula: lone kidnapped human woman, major violations of the laws against mistreatment of sentient beings that have the Grih furious on her behalf, hunky Grih starship captain who looks like a space elf, all the Grih are fascinated by her musical voice, she makes friends with a secret AI... Diener found a formula that worked well enough that she tried it again, and it would not surprise me if the formula repeated through the series. You should not go into this book expecting to be surprised.

That said, the formula did work the first time, and it largely does work again. I thoroughly enjoyed Dark Horse and wanted more, and this is more, delivered on cue. There are worse things, particularly if you're a Kindle Unlimited reader (I am not) and are therefore getting new installments for free. The Tecran fascination with kidnapping human women is explained sufficiently in Fiona's case, but I am mildly curious how Diener will keep justifying it through the rest of the series. (Maybe the formula will change, but I doubt it.)

To give Diener credit, this is not a straight repeat of the first book. Fiona is similar to Rose but not identical; Rose had an unshakable ethical calm, and Fiona is more of a scrapper. The Grih are not stupid and, given the amount of chaos Rose unleashed in the previous book, treat the sudden appearance of another human woman with a great deal more caution and suspicion. Unfortunately, this also means far less of my favorite plot element of the first book: the Grih being constantly scandalized and furious at behavior the protagonist finds sadly unsurprising.

Instead, this book has quite a bit more action. Dark Horse was mostly character interactions and tense negotiations, with most of the action saved for the end. Dark Deeds replaces a lot of the character work with political plots and infiltrating secret military bases and enemy ships. The AI (named Eazi this time) doesn't show up until well into the book and isn't as much of a presence as Sazo. Instead, there's a lot more of Fiona being drafted into other people's fights, which is entertaining enough while it's happening but which wasn't as delightful or memorable as Rose's story.

The writing continues to be serviceable but not great. It's a bit cliched and a bit awkward.

Also, Diener uses paragraph breaks for emphasis.

It's hard to stop noticing it once you see it.

Thankfully, once the story gets going and there's more dialogue, she tones that down, or perhaps I stopped noticing. It's that kind of book (and that kind of series): it's a bit rough to get started, but then there's always something happening, the characters involve a whole lot of wish-fulfillment but are still people I like reading about, and it's the sort of unapologetic "good guys win" type of light science fiction that is just the thing when one simply wants to be entertained. Once I get into the book, it's easy to overlook its shortcomings.

I spent Dark Horse knowing roughly what would happen but wondering about the details. I spent Dark Deeds fairly sure of the details and wondering when they would happen. This wasn't as fun of an experience, but the details were still enjoyable and I don't regret reading it. I am hoping that the next book will be more of a twist, or will have a character more like Rose (or at least a character with a better nickname). Sort of recommended if you liked Dark Horse and really want more of the same.

Followed by Dark Minds, which I have already purchased.

Rating: 6 out of 10

365 TomorrowsThe Spark

Author: Alastair Millar The Government Men arrived in the early morning, before Papa had even left for work. Mama, crying, sat in the kitchen listening to the voices in the living room; it was only her restraining hand that prevented her daughter Cassie, home for college vacation, from storming in to join the discussion. “You […]

The post The Spark appeared first on 365tomorrows.

,

365 TomorrowsTerminal Lucidity

Author: Don Nigroni Yesterday on Christmas Day, I was at my filthy rich, albeit eccentric, uncle’s house. And that’s when and where everything went awry. After dinner, he took me aside to his library to enjoy a cigar and a tawny port. “We know our current materialistic paradigm is pure garbage, yet we still cling […]

The post Terminal Lucidity appeared first on 365tomorrows.

,

365 TomorrowsBlood In The Water

Author: Nell Carlson The girl died. Normally, that would have been the end of it. Thousands of people died every day and millions had died in The Culling and nothing especially unusual happened afterwards. But the girl had died on the black river at the same time millions of people had been praying in remembrance […]

The post Blood In The Water appeared first on 365tomorrows.

Worse Than FailureError'd: Tangled Up In Blue

...Screens of Death. Photos of failures in kiosk-mode always strike me as akin to the wizard being exposed behind his curtain. Yeah, that shiny thing is after all just some Windows PC on a stick. Here are a few that aren't particularly recent, but they're real.

Jared S. augurs ill: "Seen in downtown Mountain View, CA: In Silicon Valley AI has taken over. There is no past, there is no future, and strangely, even the present is totally buggered. However, you're free to restore the present if you wish."


Windows crashed Maurizio De Cecco's party and he is vexé. "Some OS just doesn’t belong in the Parisian nightlife," he grumbled. But neither does pulled pork barbecue and yet there it is.


Máté cut Windows down cold. "Looks like the glaciers are not the only thing frozen at Matterhorn Glacier Paradise..."


Thomas found an installer trying to apply updates "in the Northwestern University's visitor welcome center, right smack in the middle of a nine-screen video display. I can only imagine why they might have iTunes or iCloud installed on their massive embedded display." I certainly can't.


Finally, Charles T. found a fast-food failure and was left entirely wordless. And hungry.



Krebs on SecurityAn Interview With the Target & Home Depot Hacker

In December 2023, KrebsOnSecurity revealed the real-life identity of Rescator, the nickname used by a Russian cybercriminal who sold more than 100 million payment cards stolen from Target and Home Depot between 2013 and 2014. Moscow resident Mikhail Shefel, who confirmed using the Rescator identity in a recent interview, also admitted reaching out because he is broke and seeking publicity for several new money making schemes.

Mikhail “Mike” Shefel’s former Facebook profile. Shefel has since legally changed his last name to Lenin.

Mr. Shefel, who recently changed his legal surname to Lenin, was the star of last year’s story, Ten Years Later, New Clues in the Target Breach. That investigation detailed how the 38-year-old Shefel adopted the nickname Rescator while working as vice president of payments at ChronoPay, a Russian financial company that paid spammers to advertise fake antivirus scams, male enhancement drugs and knockoff pharmaceuticals.

Mr. Shefel did not respond to requests for comment in advance of that December 2023 profile. Nor did he respond to reporting here in January 2024 that he ran an IT company with a 34-year-old Russian man named Aleksandr Ermakov, who was sanctioned by authorities in Australia, the U.K. and U.S. for stealing data on nearly 10 million customers of the Australian health insurance giant Medibank.

But not long after KrebsOnSecurity reported in April that Shefel/Rescator also was behind the theft of Social Security and tax information from a majority of South Carolina residents in 2012, Mr. Shefel began contacting this author with the pretense of setting the record straight on his alleged criminal hacking activities.

In a series of live video chats and text messages, Mr. Shefel confirmed he indeed went by the Rescator identity for several years, and that he did operate a slew of websites between 2013 and 2015 that sold payment card data stolen from Target, Home Depot and a number of other nationwide retail chains.

Shefel claims the true mastermind behind the Target and other retail breaches was Dmitri Golubov, an infamous Ukrainian hacker known as the co-founder of Carderplanet, among the earliest Russian-language cybercrime forums focused on payment card fraud. Mr. Golubov could not be reached for comment, and Shefel says he no longer has the laptop containing evidence to support that claim.

Shefel asserts he and his team were responsible for developing the card-stealing malware that Golubov’s hackers installed on Target and Home Depot payment terminals, and that at the time he was technical director of a long-running Russian cybercrime community called Lampeduza.

“My nickname was MikeMike, and I worked with Dmitri Golubov and made technologies for him,” Shefel said. “I’m also godfather of his second son.”

Dmitri Golubov, circa 2005. Image: U.S. Postal Investigative Service.

A week after breaking the story about the 2013 data breach at Target, KrebsOnSecurity published Who’s Selling Cards from Target?, which identified a Ukrainian man who went by the nickname Helkern as Rescator’s original identity. But Shefel claims Helkern was subordinate to Golubov, and that he was responsible for introducing the two men more than a decade ago.

“Helkern was my friend, I [set up a] meeting with Golubov and him in 2013,” Shefel said. “That was in Odessa, Ukraine. I was often in that city, and [it’s where] I met my second wife.”

Shefel claims he made several hundred thousand dollars selling cards stolen by Golubov’s Ukraine-based hacking crew, but that not long after Russia annexed Crimea in 2014 Golubov cut him out of the business and replaced Shefel’s malware coding team with programmers in Ukraine.

Golubov was arrested in Ukraine in 2005 as part of a joint investigation with multiple U.S. federal law enforcement agencies, but his political connections in the country ensured his case went nowhere. Golubov later earned immunity from prosecution by becoming an elected politician and founding the Internet Party of Ukraine, which called for free internet for all, the creation of country-wide “hacker schools” and the “computerization of the entire economy.”

Mr. Shefel says he stopped selling stolen payment cards after being pushed out of the business, and invested his earnings in a now-defunct Russian search engine called tf[.]org. He also apparently ran a business called click2dad[.]net that paid people to click on ads for Russian government employment opportunities.

When those enterprises fizzled out, Shefel reverted to selling malware coding services for hire under the nickname “Getsend“; this claim checks out, as Getsend for many years advertised the same Telegram handle that Shefel used in our recent chats and video calls.

Shefel acknowledged that his outreach was motivated by a desire to publicize several new business ventures. None of those will be mentioned here because Shefel is already using my December 2023 profile of him to advertise what appears to be a pyramid scheme, and to remind others within the Russian hacker community of his skills and accomplishments.

Shefel says he is now flat broke, and that he currently has little to show for a storied hacking career. The Moscow native said he recently heard from his ex-wife, who had read last year’s story about him and was suddenly wondering where he’d hidden all of his earnings.

More urgently, Shefel needs money to stay out of prison. In February, he and Ermakov were arrested on charges of operating a short-lived ransomware affiliate program in 2021 called Sugar (a.k.a. Sugar Locker), which targeted single computers and end-users instead of corporations. Shefel is due to face those charges in a Moscow court on Friday, Nov. 15, 2024. Ermakov was recently found guilty and given two years probation.

Shefel claims his Sugar ransomware affiliate program was a bust, and never generated any profits. Russia is known for not prosecuting criminal hackers within its borders who scrupulously avoid attacking Russian businesses and consumers. When asked why he now faces prosecution over Sugar, Shefel said he’s certain the investigation was instigated by Pyotr “Peter” Vrublevsky — the son of his former boss at ChronoPay.

ChronoPay founder and CEO Pavel Vrublevsky was the key subject of my 2014 book Spam Nation, which described his role as head of one of Russia’s most notorious criminal spam operations.

Vrublevsky Sr. recently declared bankruptcy, and is currently in prison on fraud charges. Russian authorities allege Vrublevsky operated several fraudulent SMS-based payment schemes. They also accused Vrublevsky of facilitating money laundering for Hydra, the largest Russian darknet market at the time. Hydra trafficked in illegal drugs and financial services, including cryptocurrency tumbling for money laundering, exchange services between cryptocurrency and Russian rubles, and the sale of falsified documents and hacking services.

However, in 2022 KrebsOnSecurity reported on a more likely reason for Vrublevsky’s latest criminal charges: He’d been extensively documenting the nicknames, real names and criminal exploits of Russian hackers who worked with the protection of corrupt officials in the Russian Federal Security Service (FSB), and operating a Telegram channel that threatened to expose alleged nefarious dealings by Russian financial executives.

Shefel believes Vrublevsky’s son Peter paid corrupt cops to levy criminal charges against him after reporting the youth to Moscow police, allegedly for walking around in public with a loaded firearm. Shefel says the Russian authorities told the younger Vrublevsky that he had lodged the firearms complaint.

In July 2024, the Russian news outlet Izvestia published a lengthy investigation into Peter Vrublevsky, alleging that the younger son took up his father’s mantle and was responsible for advertising Sprut, a Russian-language narcotics bazaar that sprang to life after the Hydra darknet market was shut down by international law enforcement agencies in 2022.

Izvestia reports that Peter Vrublevsky was the advertising mastermind behind this 3D ad campaign and others promoting the Russian online narcotics bazaar Sprut.

Izvestia reports that Peter Vrublevsky is currently living in Switzerland, where he reportedly fled in 2022 after being “arrested in absentia” in Russia on charges of running a violent group that could be hired via Telegram to conduct a range of physical attacks in real life, including firebombings and muggings.

Shefel claims his former partner Golubov was involved in the development and dissemination of early ransomware strains, including Cryptolocker, and that Golubov remains active in the cybercrime community.

Meanwhile, Mr. Shefel portrays himself as someone who is barely scraping by with the few odd coding jobs that come his way each month. Incredibly, the day after our initial interview via Telegram, Shefel proposed going into business together.

By way of example, he suggested maybe a company centered around recovering lost passwords for cryptocurrency accounts, or perhaps a series of online retail stores that sold cheap Chinese goods at a steep markup in the United States.

“Hi, how are you?” he inquired. “Maybe we can open business?”

,

Sociological ImagesWho’s Not Cool With AC?

This past summer was hot, hotter than it used to be, and this is causing a lot of new challenges for work, infrastructure, our social lives, and our health. Air conditioning was back in style and even a new public policy, with more cities working to require that landlords provide it as a basic part of a habitable apartment.

Of course the stakes are much higher than just a new AC unit. Sociologists have long known that unequal heat exposure is a serious challenge to our collective health and social wellbeing. Eric Klinenberg’s famous study of the 1995 Chicago heatwave, for example, found that social isolation was a key factor in explaining why people were vulnerable to heat sickness and even death, because they didn’t have places to go or people to check in on them to stay cool. Recent work has linked excessive heat to deaths among people who are incarcerated and learning loss in schools. Heat risks are unevenly distributed in our society, and so addressing the risks of a warmer planet is going to require expanded access to building cooling and air conditioning.

The challenge is that the status of air conditioning is changing. Heat has long been considered a necessity for safe, healthy living – often part of the basic, legal requirements for habitable homes on the rental market across the country. But states are much more inconsistent about whether they require air conditioning, which is often marketed to the general public as a “luxury good.” Look at any vintage ad for AC and you’ll find wealthy, well-dressed homeowners splurging on a new system that lets you wear a suit inside.

Do people today actually support aid to help others access cooling? In a new study recently published in Socius, I investigated this with an original survey experiment. In a sample of 1200 respondents drawn from Prolific, I asked about support for government utility assistance programs for people with lower incomes. The questions had a key difference: some respondents got a question about utility assistance in general, some got a question specifically about home heating, and some got a question specifically about home air conditioning.

Support for the heating question was the strongest on average, in line with the theory that we see heating as a necessity. Air conditioning received the lowest support, however, significantly different from both heat and general utility assistance in the sample. To make sure these results held, I went back to Prolific and sampled more Black and Hispanic respondents to repeat the experiment. The strongest results in these tests came from white respondents.

Why might this be the case? We have long known that attitudes about social welfare programs of all kinds are tied up with race. Research finds these differences because of stereotypical thinking – some people are deeply concerned that others who receive aid need to “deserve” it by working hard and only using aid on necessities, not luxuries. We also know that these beliefs are often linked to racial stereotypes. Previous work on food stamps, disaster relief, guaranteed income, and other social aid programs often finds these social forces at work.

These results show that stereotypical thinking about who “deserves” help may be an important public policy hurdle as we work on adapting to climate change. As policymakers face an increasing need for adequate cooling to address public health issues, they will need to account for the fact that the public may still be thinking of air conditioning as a luxury or comfort good. Making policy to survive climate change requires updating our thinking about the status of goods necessary to weather the crisis.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow his work at his website, or on BlueSky.

(View original at https://thesocietypages.org/socimages)

Cryptogram Subverting LLM Coders

Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection“:

Abstract: Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To address this critical security challenge, we introduce CODEBREAKER, a pioneering LLM-assisted backdoor attack framework on code completion models. Unlike recent attacks that embed malicious payloads in detectable or irrelevant sections of the code (e.g., comments), CODEBREAKER leverages LLMs (e.g., GPT-4) for sophisticated payload transformation (without affecting functionalities), ensuring that both the poisoned data for fine-tuning and generated code can evade strong vulnerability detection. CODEBREAKER stands out with its comprehensive coverage of vulnerabilities, making it the first to provide such an extensive set for evaluation. Our extensive experimental evaluations and user studies underline the strong attack performance of CODEBREAKER across various settings, validating its superiority over existing approaches. By integrating malicious payloads directly into the source code with minimal transformation, CODEBREAKER challenges current security measures, underscoring the critical need for more robust defenses for code completion.

Clever attack, and yet another illustration of why trusted AI is essential.

Cryptogram AIs Discovering Vulnerabilities

I’ve been writing about the possibility of AIs automatically discovering code vulnerabilities since at least 2018. This is an ongoing area of research: AIs doing source code scanning, AIs finding zero-days in the wild, and everything in between. The AIs aren’t very good at it yet, but they’re getting better.

Here’s some anecdotal data from this summer:

Since July 2024, ZeroPath is taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing (SAST) tools were ill-equipped to find. This post provides a technical deep-dive into our research methodology and a living summary of the bugs found in popular open-source tools.

Expect lots of developments in this area over the next few years.

This is what I said in a recent interview:

Let’s stick with software. Imagine that we have an AI that finds software vulnerabilities. Yes, the attackers can use those AIs to break into systems. But the defenders can use the same AIs to find software vulnerabilities and then patch them. This capability, once it exists, will probably be built into the standard suite of software development tools. We can imagine a future where all the easily findable vulnerabilities (not all the vulnerabilities; there are lots of theoretical results about that) are removed in software before shipping.

When that day comes, all legacy code would be vulnerable. But all new code would be secure. And, eventually, those software vulnerabilities will be a thing of the past. In my head, some future programmer shakes their head and says, “Remember the early decades of this century when software was full of vulnerabilities? That’s before the AIs found them all. Wow, that was a crazy time.” We’re not there yet. We’re not even remotely there yet. But it’s a reasonable extrapolation.

EDITED TO ADD: And Google’s LLM just discovered an exploitable zero-day.

Cryptogram Good Essay on the History of Bad Password Policies

Stuart Schechter makes some good points on the history of bad password policies:

Morris and Thompson’s work brought much-needed data to highlight a problem that lots of people suspected was bad, but that had not been studied scientifically. Their work was a big step forward, if not for two mistakes that would impede future progress in improving passwords for decades.

First was Morris and Thompson’s confidence that their solution, a password policy, would fix the underlying problem of weak passwords. They incorrectly assumed that if they prevented the specific categories of weakness that they had noted, the result would be something strong. After implementing a requirement that passwords have multiple character sets or more total characters, they wrote:

These improvements make it exceedingly difficult to find any individual password. The user is warned of the risks and if he cooperates, he is very safe indeed.

As should be obvious now, a user who chooses “p@ssword” to comply with policies such as those proposed by Morris and Thompson is not very safe indeed. Morris and Thompson assumed their intervention would be effective without testing its efficacy, considering its unintended consequences, or even defining a metric of success to test against. Not only did their hunch turn out to be wrong, but their second mistake prevented anyone from proving them wrong.

That second mistake was convincing sysadmins to hash passwords, so there was no way to evaluate how secure anyone’s password actually was. And it wasn’t until hackers started stealing and publishing large troves of actual passwords that we got the data: people are terrible at generating secure passwords, even with rules.

Planet DebianReproducible Builds: Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announces it has lost its founding member.

Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. Many of our earliest status reports were written by him and many of our key tools in use today are based on his design.

Lunar was a resolute opponent of surveillance and censorship, and he possessed an unwavering energy that fueled his work on Reproducible Builds and Tor. Without Lunar’s far-sightedness, drive and commitment to enabling teams around him, Reproducible Builds and free software security would not be in the position it is in today. His contributions will not be forgotten, and his high standards and drive will continue to serve as an inspiration to us as well as for the other high-impact projects he was involved in.

Lunar’s creativity, insight and kindness were often noted. He will be greatly missed.


Other tributes:

Planet DebianStefano Zacchiroli: In memory of Lunar


I've had the incredible fortune to share the geek path of Lunar through life on multiple occasions. First, in Debian, beginning some 15+ years ago, where we were fellow developers and participated in many DebConf editions together.

Then, on the deontology committee of Nos Oignons, a non-profit organization initiated by Lunar to operate Tor relays in France. This was with the goal of diversifying relay operators and increasing access to censorship-resistance technology for everyone in the world. It was something truly innovative and unheard of at the time in France.

Later, as a member of the steering committee of Reproducible Builds, a project that Lunar brought to widespread geek popularity with a seminal "Birds of a Feather" session at DebConf13 (and then many other talks with fellow members of the project in the years to come). A decade later, Reproducible Builds is having a major impact throughout the software industry, primarily due to growing fears about the security of the software supply chain.

Finally, we had the opportunity to recruit Lunar a couple of years ago at Software Heritage, where he insisted on working for as long as he was able to, as part of a team he loved, and that loved him back. In addition to his numerous technical contributions to the initiative, he also facilitated our first ever multi-day team seminar. The event was so successful that it has been confirmed as a long-awaited yearly recurrence by all team members.

I fondly remember one of the last conversations I had with Lunar, a few months ago, when he told me how proud he was not only of having started Nos Oignons and contributed to the ignition of Reproducible Builds, but specifically about the fact that both initiatives were now thriving without being dependent on him. He was likely thinking about a future world without him, but also realizing how impactful his activism had been on the past and present world.

Lunar changed the world for the better and left behind a trail of love and fond memories.

Che la terra ti sia lieve, compagno.

--- Zack

365 TomorrowsDown Under

Author: Beck Dacus Each time the floor shuddered, all our chains rang like windchimes. The shackles around my ankles were linked to the wrists of the “inmate” behind me, on and on in a long line of us marching forward. As I stumbled I pulled on that man’s wrists, nearly bringing him down as well. […]

The post Down Under appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Secondary Waits

ArSo works at a small company. It's the kind of place that has one software developer, and ArSo isn't it. But ArSo is curious about programming, and has enough of a technical background that small tasks should be achievable. After some conversations with management, an arrangement was made: Kurt, their developer, would identify a few tasks that were suitable for a beginner, and would then take some time to mentor ArSo through completing them.

It sounded great, especially because Kurt was going to provide sample code which would give ArSo a head start on getting things done. What better way to learn than by watching a professional at work?

DateTime datTmp;

File.Copy(strFileOld, strFileNew);
// 2 seconds delay
datTmp = DateTime.Now;
while (datTmp.Second == DateTime.Now.Second);
datTmp = DateTime.Now;
while (datTmp.Second == DateTime.Now.Second);
File.Delete(strFileOld);

This code copies a file from an old path to a new path, and then deletes the old path after a two second delay. Why is there a delay? I don't know. Why is the delay written like this? I can't possibly explain that.

Check the time at the start of the loop. When the seconds component of that time stops matching the seconds component of the current time, we assume one second has passed. This is, of course, inaccurate- if I check the time at 0:00:00.9999, a lot less than a second will pass. Each of these loops delays at most one second, and possibly far less.
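
For the record, the idiomatic version of "copy, wait two seconds, delete" looks something like this sketch (assuming the pause is needed at all; the strFile* variables come from the original snippet):

using System;
using System.IO;
using System.Threading;

File.Copy(strFileOld, strFileNew);
// An actual two-second delay, rather than two busy-loops that can
// each finish almost immediately.
Thread.Sleep(TimeSpan.FromSeconds(2));
File.Delete(strFileOld);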

In any case, ArSo has some serious questions about Kurt's mentorship, and writes:

Now I don't know if I should ask for more coding tasks.

Honestly, I think you should ask for more. Like, I think you should just take Kurt's job. You may be a beginner, but honestly, you're likely going to do a better job than this.


,

Cryptogram New iOS Security Feature Makes It Harder for Police to Unlock Seized Phones

Everybody is reporting about a new security iPhone security feature with iOS 18: if the phone hasn’t been used for a few days, it automatically goes into its “Before First Unlock” state and has to be rebooted.

This is a really good security feature. But various police departments don’t like it, because it makes it harder for them to unlock suspects’ phones.

Planet DebianRussell Coker: Modern Sleep

Julius wrote an insightful blog post about the “modern sleep” issue with Windows [1]. Basically Microsoft decided that the right way to run laptops is to never entirely sleep, which uses more battery but gives better options for waking up and doing things. I agree with Microsoft in concept; this is a problem that can be solved. A phone can run for 24+ hours without ever fully sleeping, and while a laptop has a more power hungry CPU and peripherals, it also has a much larger battery, so it should be able to do the same. Some of the reviews for Snapdragon Windows laptops claim up to 22 hours of actual work without charging! So having suspend not really stop the system should be fine.

The ability of a phone to never fully sleep is a change in the quality of the usage experience: it means that you can access it and immediately have it respond, and that all manner of services can be checked for new updates which may require a notification to the user. The XMPP protocol (AKA Jabber) was invented in 1999, before laptops were common, and Instant Message systems were common long before then. But using Jabber or another IM system on a desktop was a very different experience to using it on a laptop, and using it on a phone is different again. “Modern sleep” allows laptops to act like phones in regard to such messaging services. Currently I have Matrix IM clients running on my Android phone and Linux laptop; if I get a notification that needs a lengthy reply, I get out my laptop to respond. If I had an ARM-based laptop that never fully shut down I would have much less need for Matrix on a phone.

Making “modern sleep” popular will lead to more development of OS software to work with it. For Linux this will hopefully mean that regular Linux distributions (as opposed to Android which while running a Linux kernel is very different to Debian etc) get better support for such things and therefore become more usable on phones. Debian on a Librem 5 or PinePhonePro isn’t very usable due to battery life issues.

A laptop with an LTE card can be used for full mobile phone functionality. With “modern sleep” this is a viable option. I am tempted to make a laptop with an LTE card and bluetooth headset a replacement for my phone. Some people will say “what if someone tries to call you when it’s not convenient to have your laptop with you”; my response is “what if people learn to not expect me to answer the phone at any time, as they managed to in the 90s”. Seriously, SMS or Matrix me if you want an instant response, and if you want a long chat, schedule it via SMS or Matrix.

Dell has some useful advice about how to use their laptops (and probably most laptops from recent times) in this regard [2]. You can’t close the lid before unplugging the power cable; you have to unplug first and then close. You shouldn’t put a laptop in a sealed bag for travel either. This is a terrible situation: you can put a tablet in a bag and don’t need to take any special precautions when unplugging, and laptops should work the same. The end result of what Microsoft, Dell, Intel, and others are doing will be good but they are making some silly design choices along the way! I blame Intel mostly for selling laptop CPUs with TDPs >40W!

For an amusing take on this Linus Tech Tips has a video about being forced to use MacBooks by Microsoft’s implementation of Modern Sleep [3].

I’ll try out some ARM laptops in the near future and blog about how well they work on Debian.

365 TomorrowsThe Trail

Author: Mark Renney The changeover hasn’t ever been subtle, but long ago, centuries ago, it wasn’t so difficult, so intense and all consuming. I think it’s fair to say that, back then, I rode roughshod, moving quickly from host to host. I would like to say I selected indiscriminately, but it wouldn’t be true. I […]

The post The Trail appeared first on 365tomorrows.

Worse Than FailureCodeSOD: The First 10,000

Alicia recently inherited a whole suite of home-grown enterprise applications. Like a lot of these kinds of systems, it needs to do batch processing. She went tracking down a mysterious IllegalStateException only to find this query causing the problem:

select * from data_import where id > 10000

The query itself is fine, but the code calling it checks to see if this query returned any rows- if it did, the code throws the IllegalStateException.

First, of course, this should be a COUNT(*) query- no need to actually return rows here. But also… what? Why do we fail if there are any transactions with an ID greater than 10000? Why on Earth would we care?

Well, the next query it runs is this:

update data_import set id=id+10000

Oh. Oh no. Oh nooooo. Are they… are they using the ID to also represent some state information about the status of the record? It sure seems like it!

The program then starts INSERTing data, using a counter which starts at 1. Once all the new data is added, the program then does:

delete from data_import where id > 10000

All this is done within a single method, with no transactions and no error handling. And yes, this is by design. You see, if anything goes wrong during the inserts, then the old records don't get deleted, so we can see that processing failed and correct it. And since the IDs are sequential and always start at 1, we can easily find which row caused the problem. Who needs logging or any sort of exception handling- just check your IDs.

The underlying reason this started failing was that the inbound data began trying to add more than 10,000 rows, which meant the INSERTs started failing (since we already had rows there for this). Alicia wanted to fix this and clean up the process, but too many things depended on it working in this broken fashion. Instead, her boss implemented a quick and easy fix: they changed "10000" to "100000".
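
For contrast, the conventional approach keeps batch state out of the IDs entirely: stage the new rows in their own table, then swap them in inside a single transaction. A sketch (the staging table and column names are hypothetical):

BEGIN;
DELETE FROM data_import;
INSERT INTO data_import (id, payload)
    SELECT id, payload FROM data_import_staging;
COMMIT;
-- If anything fails before COMMIT, the swap rolls back and
-- data_import still holds the previous batch, no id arithmetic needed.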


,

Planet DebianLouis-Philippe Véronneau: Montreal's Debian & Stuff - November 2024

Our Debian User Group met on November 2nd after a somewhat longer summer hiatus than normal. It was lovely to see a bunch of people again and to be able to dedicate a whole day to hacking :)

Here is what we did:

lavamind:

  • reproduced puppetdb FTBFS #1084038 and reported the issue upstream
  • uploaded a new upstream version for pgpainless (1.6.8-1)
  • uploaded a new revision for ruby-moneta (1.6.0-3)
  • sent an inquiry to the backports team about #1081696

pollo:

  • reviewed & merged many lintian merge requests, clearing out most of the queue
  • uploaded a new lintian release (1.120.0)
  • worked on unblocking the revival of lintian.debian.org (many thanks to anarcat and pkern)
  • apparently (kindly) told people to rtfm at least 4 times :)

anarcat:

LeLutin:

  • opened an RFS on the ruby team mailing list for the new upstream version of ruby-necromancer
  • worked on packaging the new upstream version of ruby-pathspec

tvaz:

  • did AM (Application Manager) work

tassia:

  • explored the Debian Jr. project (website, wiki, mailing list, salsa repositories)
  • played a few games for Nico's entertainment :-)
  • built and tested a Debian Jr. live image

Pictures

This time around, we went back to Foulab. Thanks for hosting us!

As always, the hacklab was full of interesting stuff and I took a few (bad) pictures for this blog post:

  • Two old video cameras and a 'My First Sony' tape recorder
  • An ALP HT-286 machine with a very large 'turbo' button
  • A New Hampshire 'IPROUTE' vanity license plate

Krebs on SecurityMicrosoft Patch Tuesday, November 2024 Edition

Microsoft today released updates to plug at least 89 security holes in its Windows operating systems and other software. November’s patch batch includes fixes for two zero-day vulnerabilities that are already being exploited by attackers, as well as two other flaws that were publicly disclosed prior to today.

The zero-day flaw tracked as CVE-2024-49039 is a bug in the Windows Task Scheduler that allows an attacker to increase their privileges on a Windows machine. Microsoft credits Google’s Threat Analysis Group with reporting the flaw.

The second bug fixed this month that is already seeing in-the-wild exploitation is CVE-2024-43451, a spoofing flaw that could reveal Net-NTLMv2 hashes, which are used for authentication in Windows environments.

Satnam Narang, senior staff research engineer at Tenable, says the danger with stolen NTLM hashes is that they enable so-called “pass-the-hash” attacks, which let an attacker masquerade as a legitimate user without ever having to log in or know the user’s password. Narang notes that CVE-2024-43451 is the third NTLM zero-day so far this year.

“Attackers continue to be adamant about discovering and exploiting zero-day vulnerabilities that can disclose NTLMv2 hashes, as they can be used to authenticate to systems and potentially move laterally within a network to access other systems,” Narang said.

The two other publicly disclosed weaknesses Microsoft patched this month are CVE-2024-49019, an elevation of privilege flaw in Active Directory Certificate Services (AD CS); and CVE-2024-49040, a spoofing vulnerability in Microsoft Exchange Server.

Ben McCarthy, lead cybersecurity engineer at Immersive Labs, called special attention to CVE-2024-43639, a remote code execution vulnerability in Windows Kerberos, the authentication protocol that is heavily used in Windows domain networks.

“This is one of the most threatening CVEs from this patch release,” McCarthy said. “Windows domains are used in the majority of enterprise networks, and by taking advantage of a cryptographic protocol vulnerability, an attacker can perform privileged acts on a remote machine within the network, potentially giving them eventual access to the domain controller, which is the goal for many attackers when attacking a domain.”

McCarthy also pointed to CVE-2024-43498, a remote code execution flaw in .NET and Visual Studio that could be used to install malware. This bug has earned a CVSS severity rating of 9.8 (10 is the worst).

Finally, at least 29 of the updates released today tackle memory-related security issues involving SQL Server, each of which earned a threat score of 8.8. Any one of these bugs could be used to install malware if an authenticated user connects to a malicious or hacked SQL database server.

For a more detailed breakdown of today’s patches from Microsoft, check out the SANS Internet Storm Center’s list. For administrators in charge of managing larger Windows environments, it pays to keep an eye on Askwoody.com, which frequently points out when specific Microsoft updates are creating problems for a number of users.

As always, if you experience any problems applying any of these updates, consider dropping a note about it in the comments; chances are excellent that someone else reading here has experienced the same issue, and maybe even has found a solution.

Planet DebianPaul Tagliamonte: Complex for Whom?

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures need to be redone because they’re too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose to either solve it yourself or leave it for those downstream of you to solve on their own.

That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact, very likely that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I’ve been informed that Fred Brooks coined a term for what I call “lower bound complexity” – “Essential Complexity”, in the paper “No Silver Bullet—Essence and Accident in Software Engineering”, which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it that someone needs a credible promotion packet, a new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, is going to be hard to maintain – why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity is being discussed, and who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and maintenance, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only thing that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond Engineering. Decisions regarding “what tools are we able to use” – be they existing contracts with cloud providers, CIO mandated SaaS products, a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t, and make it everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. With a large enough organization (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal, since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency – as they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic Complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower bound complexity, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else – maybe outside your organization, or in a non-engineering function – it must grow back. Sometimes, the opposite is the case, such as when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of I don’t understand the problems being solved, or does it mean something along the lines of this problem should be solved elsewhere? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision elsewhere? Who’s taking this complexity on – or, more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How specifically? What are you not seeing?

What can change?

What should change?

Cryptogram Mapping License Plate Scanners in the US

DeFlock is a crowd-sourced project to map license plate scanners.

It only records the fixed scanners, of course. The mobile scanners on cars are not mapped.

Planet DebianSven Hoexter: fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root to add more flux kustomizations to their kubernetes cluster. Here all of that is living in a monorepo, and since we're all human, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.

Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you'll have to put additional work into the validation and pipe things through flux envsubst.
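If you do use them, a rough sketch of the extra step could look like the following - this assumes the substitution variables are exported in the environment and that your flux CLI version ships the envsubst subcommand:

kustomize build . | flux envsubst --strict > /dev/null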

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we've a cluster folder with subfolders per cluster, we just run a for loop over all of them:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    popd
done

Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml

Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition which lacked some other referenced objects, but was still happily included in the root kustomization by kustomize create and flux, which of course did not work.

Thus we started to catch that as well in our growing for loop:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f|wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done

    popd
done

Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub action workflows. Hope that's enough to convey the idea of what to check for.

Planet DebianJames Bromberger: My own little server

In 2004, I was living in London, and decided it was time I had my own little virtual private server somewhere online. As a Debian developer since the start of 2000, it had to be Debian, and it still is… This was before “cloud” as we know it today. Virtual Private Servers (VPS) was a … Continue reading "My own little server"

Worse Than FailureRepresentative Line: How is an Array like a Banana?

Some time ago, poor Keith found himself working on an antique Classic ASP codebase. Classic ASP uses VBScript, which is like VisualBasic 6.0, but worse in most ways. That's not to say that VBScript code is automatically bad, but the language certainly doesn't help you write clean code.

In any case, the previous developer needed to make an 8-element array to store some data. Traditionally, in VBScript, you might declare it like so:

Dim params(7)

That's the easy, obvious way a normal developer might do it. (Remember that VBScript's Dim takes an upper bound rather than a count, so params(7) yields indices 0 through 7 - eight elements.)

Keith's co-worker did this instead:

Dim params : params = Split(",,,,,,,", ",")

Yes, this creates an array using the Split function on a string of only commas. 7, to be exact. Which, when split, creates 8 empty substrings.
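If you want to convince yourself that this really matches the plain declaration, here's a quick sketch you could run under the Windows Script Host:

Dim a : a = Split(",,,,,,,", ",")
WScript.Echo UBound(a)  ' prints 7 - the same upper bound Dim a(7) would give you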

We make fun of stringly typed data a lot here, but this is an entirely new level of stringly typed initialization.

We can only hope that this code has finally been retired, but given that it was still in use well past the end-of-life for Classic ASP, it may continue to lurk out there, waiting for another hapless developer to stumble into its grasp.


365 TomorrowsCrossover

Author: Majoki Most folks can pretty easily picture an amount doubling, and even envisioning something ten or a hundred times its current size or intensity. But our imaginations often fail miserably when faced with exponential growth. Unfortunately, this inability (or unwillingness) to comprehend (or confront) rapid proportional change threatens our long-term viability as a species. […]

The post Crossover appeared first on 365tomorrows.

Cryptogram Criminals Exploiting FBI Emergency Data Requests

I’ve been writing about the problem with lawful-access backdoors in encryption for decades now: that as soon as you create a mechanism for law enforcement to bypass encryption, the bad guys will use it too.

Turns out the same thing is true for non-technical backdoors:

The advisory said that the cybercriminals were successful in masquerading as law enforcement by using compromised police accounts to send emails to companies requesting user data. In some cases, the requests cited false threats, like claims of human trafficking and, in one case, that an individual would “suffer greatly or die” unless the company in question returns the requested information.

The FBI said the compromised access to law enforcement accounts allowed the hackers to generate legitimate-looking subpoenas that resulted in companies turning over usernames, emails, phone numbers, and other private information about their users.

LongNowSara Imari Walker

Sara Imari Walker

Sara Imari Walker leads one of the largest international theory groups in origins of life and astrobiology. The key areas of research for Walker and her team are new approaches to the problem of understanding universal features of life – those that might allow a general theory for solving the matter-to-life transition, detecting alien life, and designing synthetic life. Applying assembly theory, a physics framework based on molecular complexity that Walker and her team have expanded, opens a new path to identifying the threshold where life arises from non-life, and to detecting and understanding the evolution of life on our planet and in the universe.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, October 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In October, 20 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 6.0h (out of 7.0h assigned and 7.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 15.0h (out of 87.0h assigned and 13.0h from previous period), thus carrying over 85.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 10.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 4.0h (out of 0.0h assigned and 4.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 29.0h (out of 26.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 60.0h (out of 23.5h assigned and 36.5h from previous period).
  • Guilhem Moulin did 7.5h (out of 19.75h assigned and 0.25h from previous period), thus carrying over 12.5h to the next month.
  • Lee Garrett did 15.25h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 44.75h to the next month.
  • Lucas Kanashiro did 10.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 9.75h (out of 24.0h assigned), thus carrying over 14.25h to the next month.
  • Santiago Ruano Rincón did 23.5h (out of 25.0h assigned), thus carrying over 1.5h to the next month.
  • Sean Whitton did 6.25h (out of 1.0h assigned and 5.25h from previous period).
  • Stefano Rivera did 1.0h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.0h to the next month.
  • Sylvain Beucler did 9.5h (out of 16.0h assigned and 44.0h from previous period), thus carrying over 50.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 10.5h (out of 12.0h assigned), thus carrying over 1.5h to the next month.

Evolution of the situation

In October, we have released 35 DLAs.

Some notable updates prepared in October include denial of service vulnerability fixes in nss, regression fixes in apache2, multiple fixes in php7.4, and new upstream releases of firefox-esr, openjdk-17, and openjdk-11.

Additional contributions were made for the stable Debian 12 bookworm release by several LTS contributors. Arturo Borrero Gonzalez prepared a parallel update of nss, Bastien Roucariès prepared a parallel update of apache2, and Santiago Ruano Rincón prepared updates of activemq for both LTS and Debian stable.

LTS contributor Bastien Roucariès undertook a code audit of the cacti package and in the process discovered three new issues in node-dompurify, which were reported upstream and resulted in the assignment of three new CVEs.

As always, the LTS team continues to work towards improving the overall sustainability of the free software base upon which Debian LTS is built. We thank our many committed sponsors for their ongoing support.

Thanks to our sponsors

Sponsors that joined recently are in bold.


Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows use of std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double-entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)

  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log=level documentation entry was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianGunnar Wolf: Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews of Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand it, with varying intensities – sometimes they demand research to be published in an OA venue, sometimes a mandate will only “prefer” it. Lately, some journals and funder bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate when various universities adopted such mandates.

Now, what happens after a university or funder body adopts such a mandate? Many individual academics cheer, as it is the “right thing to do.” However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or “feet dragging” of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren’t more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores a hypothesis based on Karl Marx’s productive worker theory and Pierre Bourdieu’s ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given not only the academics’ sharing ethos but also private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products – a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

Planet DebianVincent Bernat: Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';

  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code,3
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}
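A quick usage sketch, assuming the flake above is saved as flake.nix in the current directory:

nix build
./result/bin/caddy list-modules | grep powerdns  # check the plugin was compiled in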

Update (2024-11)

This flake won’t work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.


  1. This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules. ↩︎

  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different. ↩︎

  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail. ↩︎

Worse Than FailureCodeSOD: Pay for this Later

Ross needed to write software to integrate with a credit card payment gateway. The one his company chose was relatively small, and only served a handful of countries- but it covered the markets they cared about and the transaction fees were cheap. They used XML for data interchange, and while they had no published schema document, they did have some handy-dandy sample code which let you parse their XML messages.

$response = curl_exec($ch);
$authecode = fetch_data($response, '<authCode>', '</authCode>');
$responsecode = fetch_data($response, '<responsecode>', '</responsecode>');
$retrunamount = fetch_data($response, '<returnamount>', '</returnamount>');
$trxnnumber = fetch_data($response, '<trxnnumber>', '</trxnnumber>');
$trxnstatus = fetch_data($response, '<trxnstatus>', '</trxnstatus>');
$trxnresponsemessage = fetch_data($response, '<trxnresponsemessage>', '</trxnresponsemessage>');

Well, this looks… worrying. At first glance, I wonder if we're going to have to kneel before Z̸̭͖͔͂̀ā̸̡͖͕͊l̴̜͕͋͌̕g̸͉̳͂͊ȯ̷͙͂̐. What exactly does fetch_data actually do?

function fetch_data($string, $start_tag, $end_tag)
{

  $position = stripos($string, $start_tag);
  $str = substr($string, $position);
  $str_second = substr($str, strlen($start_tag));
  $second_positon = stripos($str_second, $end_tag);
  $str_third = substr($str_second, 0, $second_positon);
  $fetch_data = trim($str_third);
  return $fetch_data;
}

Phew, no regular expressions, just… lots of substrings. This parses the XML document with no sense of the document's structure- it literally just searches for specific tags, grabs whatever is between them, and calls it done. Nested tags? Attributes? Self-closing tags? Forget about it. Since it doesn't enforce that your open and closing tags match, it also lets you grab arbitrary (and invalid) document fragments- fetch_data($response, "<fooTag>", "<barTag>"), for example.

And it's not like this needs to be implemented from scratch- PHP has built-in XML parsing classes. We could argue that by limiting ourselves to a subset of XML (which I can only hope this document does) and doing basic string parsing, we've built a much simpler approach, but I suspect that after doing a big pile of linear searches through the document, we're not really going to see any performance benefits from this version- and maintenance is going to be a nightmare, as it's so fragile and won't work for many very valid XML documents.

It's always amazing when TRWTF is neither PHP nor XML but… whatever this is.


David BrinJoe & Mark do these now! My own post-mortem can wait.

Here I offer two time-critical suggestions, below.

So skip past my blowhard prelude!


Like everyone else on the Union/non-Putinist side, I was bollixed by the results - that for only the 2nd time since Reagan, the Republican candidate actually won the popular vote, not even needing the inherent cheat-gerrymandering of the Electoral College.

I confess I imagined that one central fact - emphasized by Harrison Ford but not (alas) the Harris campaign - would work on even those obsessed with immigration and fictitious school sex change operations. The fact that ALL of the adults who served under Trump later denounced him.*

Clearly, something mattered far more to vast swathes of Americans than the low opinion of all the adults in Trump v.1.0 toward their jibbering boss. And no, it was NOT racism/misogyny. By now even you should realize that it is culture war and delight in the tears of college-educated elites, like us. Like those 250+ adults from Trump v1.0.

Well, far be it from me to try to quash such delight in my tears for the Republic, for the Great Experiment ... and for Ukraine. Here they are guys. Drink up. But save some of the tears to bottle and send to Vlad.


                     * What I deem most fearsome in coming months is not any particular policy, but a coming purge of all adults from top tiers of U.S. government.


Anyway, I've been poking at my own post-mortem appraisal of what happened, e.g. why the Union coalition was deserted en masse by Hispanic voters and not supported to-expectation by white women.  I'll soon get to that posting, or several. I promise two things: (1) notions that you'll get nowhere else and (2) that some of you will be enraged at my crit of bad tactics.

But that can wait. Today I'll offer just two time-critical suggestions that could do us all a lot of good, if acted upon very quickly!

They won't be, of course. Still, maybe some AIs somewhere/sometime will note that I offered these. And maybe they will model "that coulda worked."

It's likely the best I can hope for. And yet... here goes...


== Joe, at long last and right now, offer the clemency-for-truth deal! ==


Item #1: I've long asked for it. But now would be perfect. 

Joe Biden could offer amnesty/clemency and even pardons, in exchange for revelations vital to the Republic.  


"If you are a person of influence in the USA, and you've been under the thumb of foreign or domestic blackmailers, this is your one chance. **

"Step up and tell all! I promise I'll do everything in my power to lessen your legal penalties, in exchange for truths that could greatly benefit your country. Perhaps even shattering a cabal whose tentacles - some claim - have pervaded this town and the nation.

"I can't prevent pain and public disdain over whatever originally got you into your mess, or things done to please your blackmailers. But I can promise three things: some legal safety, plus privately-funded and bonded security, if requested...

"...plus also public praise for being among the first to step up and show some guts! For the sake of the nation... and your own redemption."


Sure, this would raise howls! Even though there's precedent in Nelson Mandela's Truth & Reconciliation process and similar programs in Argentina and Chile.

 Moreover, several Congress members have already attested publicly that such blackmail rings exist, pervading their own party!


"Why haven't I done this sooner? Because inevitably, it'd be seen as a political stunt. In our tense era, I'd be accused of election meddling.  Only now, admitting that the nation has decisively chosen Donald Trump and his party to govern, I can do this outside of politics, in order to give him a truly clean slate! 

"Let him - let us all - start fresh in January, knowing that the nation had this one chance to flush out the worst illness... aided by those who are eager to breathe free of threats and guilt, at long last....

"... remembering that all 'Heaven rejoices when a sinner repents and comes to the light.'"


Whatever your beliefs, I defy you to name any drawbacks. And let's be clear. Joe could do this. He could do it tomorrow. And the worst thing that he risks would be that nothing happens.

Even in that case, amid some mockery, he would still have raised a vitally needed topic. And at-best?

At best, there could be a whole lot of disinfection. At a time when it is exactly what's badly needed.


== What some billionaire could do ==

Another proposal I have made before, in Polemical Judo. This one seems worth doing, even in the present case, when Donald Trump has 26 more electoral votes than he needs - and hence has nothing to fear from defections, before the "Electoral College" votes, next month.

Why did I say "Electoral College" in quotes? Because in fact it has never, ever been a 'college' of elected delegates, originally meant to deliberate solemnly and choose the president of the United States after thorough discussion.  But that might change!

As I've said before - one zillionaire could change this.  Rent a mountaintop luxury hotel. Hire retired Secret Service agents for security and some highly-vetted chefs-de-cuisine. Maybe a string quartet of non-English speakers. For two weeks, the only others who may walk through the doors and stroll the grounds are registered electors.

They can come - or not - if they want. Dine and stroll and no one has any obligation to speak or listen. Or else - completely up to them - they might decide to convene the first actual Electoral College in the history of the Republic. Is there any -- and I mean any -- reason why this would not be legally and morally completely kosher?

Yes, I know. It will guarantee that the following election will see the parties vet their elector slates even more carefully for utter-utter loyalty. As if that isn't already true. 

So? In any case, the cost would be chickenfeed and the sheer coolness factor could be... well... diverting from our troubles.


== Other suggestions? ==

You know I got a million of em. And (alas!) so many were already in Polemical Judo.

And already ignored. Because  the ideas are unconventional and cross many party clichés. Whatever. Poor Brin.

But these are two that will either be acted upon NEXT WEEK or else (99.999% odds) not at all.

So, next posting I'll dive into that post-mortem of the election. And yes, there will be perspectives you never heard or saw anywhere else. (Care to bet on that?) And some may make you go huh. And some may make you angry.

Good. Like Captain Kirk... you need your pain.


=====

=====


** I made my case about blackmail years ago here: Political Blackmail: The Hidden Danger to Public Servants.  And despite Madison Cawthorn and several other high Republicans testifying openly that it is utterly true - honey pots and 'orgies' and sophisticated KGB kompromat - apparently nothing has been done. Nor - apparently - will it be.

Still, there is a third thing I was gonna recommend here...

...that Biden promise sanctuary and a big financial prize for any KGB officer who defects, bringing over the blackmail files! Just making the offer, publicly, might make many people on this planet very, very nervous... and likely result in some orchestrated performances of group window diving in Moscow.

Well-well. One can fantasize it as a coming episode of the Mission Impossible series, at least. Call my agent!


*** Several of you spoke of the threat to personal physical safety for the first few to step forward... until the wave of revelation turns the tables and sends blackmailers fleeing for their lives. While it's true that Joe B will no longer be in a position to offer US Government guarantees, allied governments can! Plus new identities etc. Anyway, isn't this fundamentally about heroism? Asking it - in exchange for redemption - from those who might leap at a chance for a path out of treason-hell?





365 TomorrowsThe End

Author: Julian Miles, Staff Writer Sources always emphasised the utility of wind-up devices after any sort of catastrophe. I used to be sceptical, but having now spent a couple of years surviving in the ruined urban wonderlands of southern England, I admit I was mostly wrong. When I hooked up with this group last year, […]

The post The End appeared first on 365tomorrows.


Planet DebianDirk Eddelbuettel: inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now-mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)

  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianReproducible Builds: Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds?

Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report, which she had reported to the Android issue tracker, has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has reached a nice milestone: over 25% of the ~1,200 Android apps provided by their repository (official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to use the correct name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

365 TomorrowsSynthetic Predicament

Author: M D Smith IV The synthetics of New World Robotics had reached a level of perfection so far past the clunky years that they cost the average middle-class family the equivalent of six years’ salary. Those who could afford one bought them on time, like a house after a down payment, and touted them […]

The post Synthetic Predicament appeared first on 365tomorrows.

Planet DebianThorsten Alteholz: My Debian Activities in October 2024

FTP master

This month I accepted 398 and rejected 22 packages. The overall number of packages that got accepted was 441.

In case your RM bug is not closed within a month, you can assume that either the conversion of the subject of the bug email to the corresponding dak command did not work or you still need to take care of reverse dependencies. The dak command related to your removal bug can be found here.

Unfortunately the behavior of some project members caused a decline in the motivation of team members to work on these bugs. When I look at these bugs, I just copy and paste the above mentioned dak commands. If they don’t work, I don’t have the time to debug what is going wrong. So please read the docs and take care of it yourself. Please also keep in mind that you need to close the bug or set a moreinfo tag if you don’t want anybody to act on your removal bug.

Debian LTS

This was my hundred-and-twenty-fourth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3925-1] asterisk security update to fix two CVEs related to privilege escalation and DoS
  • [DLA 3940-1] xorg-server update to fix one CVE related to privilege escalation

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1198-1] cups security update for one CVE in Buster related to IPP attributes
  • [ELA-1199-1] cups security update for two CVEs in Stretch related to IPP attributes
  • [ELA-1216-1] graphicsmagick security update for one CVE in Jessie
  • [ELA-1217-1] asterisk security update for two CVEs in Buster related to privilege escalation
  • [ELA-1218-1] asterisk security update for two CVEs in Stretch related to privilege escalation and DoS
  • [ELA-1223-1] xorg-server security update for one CVE in Jessie, Stretch and Buster related to privilege escalation

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Unfortunately I didn’t find any time to work on this topic.

Debian Astro

Unfortunately I didn’t find any time to work on this topic.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

  • pywws (yes, again this month)

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:


Planet DebianJonathan Dowland: Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog.

It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, a htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page.

I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:

  1. You load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.

  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains an HTML form for you to write your comment.

  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.

  4. The server responds to the request in (3) with an HTTP 302 redirect, instructing the browser to go back to the page in Step 1.

First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:

hx-get="<CGI ENDPOINT GOES HERE>"
suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page.

issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form

hx-select=".editcomment form"
extract the edit-comment form from within that document
hx-swap=beforeend and hx-target=".addcomment"
append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment)

Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!
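For concreteness, the modified markup ends up looking roughly like this. This is a sketch rather than my exact template: the endpoint placeholder is as above, and the wrapping div carrying the .addcomment class is an assumption about how the hook is structured:

<div class="addcomment">
  <a href="<CGI ENDPOINT GOES HERE>"
     hx-get="<CGI ENDPOINT GOES HERE>"
     hx-select=".editcomment form"
     hx-target=".addcomment"
     hx-swap="beforeend">comment on this page</a>
</div>

Keeping the plain href means the link still triggers a normal page load when JavaScript is unavailable, which matters for graceful degradation.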

Second step: handling previews

The old Preview Comment page

In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later.

Rather than load new pages, I wanted "Preview" to splice a rendering of the comment-in-progress into the current page's list of comments, marked up to indicate that it's a preview.

IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment.

The rest of this was achieved with htmx attributes on the "Preview" button, similar to in the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it.

Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment.

On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones, a status message "Your comment will be posted after moderator review", without the original page content or comments.

I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

handling moderated comments

Moderation message upon submitting a comment

One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead.

I now have to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews). You can read the full function here: jon.js.
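The shape of the function is roughly this (a simplified sketch rather than the actual contents of jon.js; the button selector is an assumption):

document.body.addEventListener('htmx:beforeSwap', function (evt) {
    // Only intervene for the "accepted for moderation" response.
    if (evt.detail.xhr.status === 202) {
        evt.detail.shouldSwap = false; // don't splice in the bare status page
        // Re-use the preview machinery to render the submitted comment...
        document.querySelector('input[name="preview"]').click();
        // ...then re-label the rendered comment as awaiting moderation.
    }
});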

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience.

You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is.

Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

Krebs on SecurityFBI: Spike in Hacked Police Emails, Fake Subpoenas

The Federal Bureau of Investigation (FBI) is urging police departments and governments worldwide to beef up security around their email systems, citing a recent increase in cybercriminal services that use hacked police email accounts to send unauthorized subpoenas and customer data requests to U.S.-based technology companies.

In an alert (PDF) published this week, the FBI said it has seen an uptick in postings on criminal forums regarding the process of emergency data requests (EDRs) and the sale of email credentials stolen from police departments and government agencies.

“Cybercriminals are likely gaining access to compromised US and foreign government email addresses and using them to conduct fraudulent emergency data requests to US based companies, exposing the personal information of customers to further use for criminal purposes,” the FBI warned.

In the United States, when federal, state or local law enforcement agencies wish to obtain information about an account at a technology provider — such as the account’s email address, or what Internet addresses a specific cell phone account has used in the past — they must submit an official court-ordered warrant or subpoena.

Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted (eventually, and at least in part) as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name.

In some cases, a cybercriminal will offer to forge a court-approved subpoena and send that through a hacked police or government email account. But increasingly, thieves are relying on fake EDRs, which allow investigators to attest that people will be bodily harmed or killed unless a request for account data is granted expeditiously.

The trouble is, these EDRs largely bypass any official review and do not require the requester to supply any court-approved documents. Also, it is difficult for a company that receives one of these EDRs to immediately determine whether it is legitimate.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR — and potentially having someone’s blood on their hands — or possibly leaking a customer record to the wrong person.

Perhaps unsurprisingly, compliance with such requests tends to be extremely high. For example, in its most recent transparency report (PDF) Verizon said it received more than 127,000 law enforcement demands for customer data in the second half of 2023 — including more than 36,000 EDRs — and that the company provided records in response to approximately 90 percent of requests.

One English-speaking cybercriminal who goes by the nicknames “Pwnstar” and “Pwnipotent” has been selling fake EDR services on both Russian-language and English cybercrime forums. Their prices range from $1,000 to $3,000 per successful request, and they claim to control “gov emails from over 25 countries,” including Argentina, Bangladesh, Brazil, Bolivia, Dominican Republic, Hungary, India, Kenya, Jordan, Lebanon, Laos, Malaysia, Mexico, Morocco, Nigeria, Oman, Pakistan, Panama, Paraguay, Peru, Philippines, Tunisia, Turkey, United Arab Emirates (UAE), and Vietnam.

“I cannot 100% guarantee every order will go through,” Pwnstar explained. “This is social engineering at the highest level and there will be failed attempts at times. Don’t be discouraged. You can use escrow and I give full refund back if EDR doesn’t go through and you don’t receive your information.”

An ad from Pwnstar for fake EDR services.

A review of EDR vendors across many cybercrime forums shows that some fake EDR vendors sell the ability to send phony police requests to specific social media platforms, including forged court-approved documents. Others simply sell access to hacked government or police email accounts, and leave it up to the buyer to forge any needed documents.

“When you get account, it’s yours, your account, your liability,” reads an ad in October on BreachForums. “Unlimited Emergency Data Requests. Once Paid, the Logins are completely Yours. Reset as you please. You would need to Forge Documents to Successfully Emergency Data Request.”

Still other fake EDR service vendors claim to sell hacked or fraudulently created accounts on Kodex, a startup that aims to help tech companies do a better job screening out phony law enforcement data requests. Kodex is trying to tackle the problem of fake EDRs by working directly with the data providers to pool information about police or government officials submitting these requests, with an eye toward making it easier for everyone to spot an unauthorized EDR.

If police or government officials wish to request records regarding Coinbase customers, for example, they must first register an account on Kodexglobal.com. Kodex’s systems then assign that requestor a score or credit rating, wherein officials who have a long history of sending valid legal requests will have a higher rating than someone sending an EDR for the first time.

It is not uncommon to see fake EDR vendors claim the ability to send data requests through Kodex, with some even sharing redacted screenshots of police accounts at Kodex.

Matt Donahue is the former FBI agent who founded Kodex in 2021. Donahue said just because someone can use a legitimate police department or government email to create a Kodex account doesn’t mean that user will be able to send anything. Donahue said even if one customer gets a fake request, Kodex is able to prevent the same thing from happening to another.

Kodex told KrebsOnSecurity that over the past 12 months it has processed a total of 1,597 EDRs, and that 485 of those requests (~30 percent) failed a second-level verification. Kodex reports it has suspended nearly 4,000 law enforcement users in the past year, including:

-1,521 from the Asia-Pacific region;
-1,290 requests from Europe, the Middle East and Asia;
-460 from police departments and agencies in the United States;
-385 from entities in Latin America, and;
-285 from Brazil.

Donahue said 60 technology companies are now routing all law enforcement data requests through Kodex, including an increasing number of financial institutions and cryptocurrency platforms. He said one concern shared by recent prospective customers is that crooks are seeking to use phony law enforcement requests to freeze and in some cases seize funds in specific accounts.

“What’s being conflated [with EDRs] is anything that doesn’t involve a formal judge’s signature or legal process,” Donahue said. “That can include control over data, like an account freeze or preservation request.”

In a hypothetical example, a scammer uses a hacked government email account to request that a service provider place a hold on a specific bank or crypto account that is allegedly subject to a garnishment order, or party to crime that is globally sanctioned, such as terrorist financing or child exploitation.

A few days or weeks later, the same impersonator returns with a request to seize funds in the account, or to divert the funds to a custodial wallet supposedly controlled by government investigators.

“In terms of overall social engineering attacks, the more you have a relationship with someone the more they’re going to trust you,” Donahue said. “If you send them a freeze order, that’s a way to establish trust, because [the first time] they’re not asking for information. They’re just saying, ‘Hey can you do me a favor?’ And that makes the [recipient] feel valued.”

Echoing the FBI’s warning, Donahue said far too many police departments in the United States and other countries have poor account security hygiene, and often do not enforce basic account security precautions — such as requiring phishing-resistant multifactor authentication.

How are cybercriminals typically gaining access to police and government email accounts? Donahue said it’s still mostly email-based phishing, and credentials that are stolen by opportunistic malware infections and sold on the dark web. But as bad as things are internationally, he said, many law enforcement entities in the United States still have much room for improvement in account security.

“Unfortunately, a lot of this is phishing or malware campaigns,” Donahue said. “A lot of global police agencies don’t have stringent cybersecurity hygiene, but even U.S. dot-gov emails get hacked. Over the last nine months, I’ve reached out to CISA (the Cybersecurity and Infrastructure Security Agency) over a dozen times about .gov email addresses that were compromised and that CISA was unaware of.”

365 TomorrowsReality Check

Author: Melissa Kobrin Claire looked nervously at the coffin-shaped vat of green goo in front of her and tried to remember that this was one of the best days of her life. Her bachelorette party was going to be beyond her wildest dreams. And fantasies. The only reason Caleb was okay with it was because […]

The post Reality Check appeared first on 365tomorrows.


Planet DebianThomas Lange: Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques.

I'm a happy NIS (formerly YP) user since 30+ years. I started using it with SunOS 4.0, later using it with Solaris and with Linux since 1999.

In the past, a colleague had an unhappy time with NIS+ when he couldn't log in as root after a short while, because of some well-known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024.

I know the security implications of NIS but I can live with them, because I manage all computers in the network that have access to the NIS maps. And NIS on Linux offers to use shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.

Unfortunately NIS support for the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) is lacking NIS support in PAM, but otherwise it is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use other NIS maps.

But yppasswd is deprecated and you should not use it! If you use yppasswd it may generate a new password hash using the old DES crypt algorithm, which is very weak and only uses the first 8 characters of your password. Do not use yppasswd any more! yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it only created weak DES hashes after a password change. yescrypt hashes, which are the default in Debian 12, are not supported at all. The solution is to use the plain passwd program.

On the NIS master, you should setup your NIS configuration to use /etc/shadow and /etc/passwd even if your other NIS maps are in /var/yp/src or similar. Make sure to have these lines in your /var/yp/Makefile:

PASSWD      = /etc/passwd
SHADOW      = /etc/shadow

Call make once, and it will generate the shadow and passwd map. You may want to set the variable MINUID which defines which entries are not put into the NIS maps.
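For example, to keep system accounts below UID/GID 1000 out of the maps (the cut-off is an assumption; pick whatever matches your site's UID policy), set this in /var/yp/Makefile alongside the lines above:

MINUID      = 1000
MINGID      = 1000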

On all NIS clients you still need the entries in /etc/nsswitch.conf (for passwd, shadow, group, ...) that point to the nis service. E.g.:

passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis

You can remove all occurences of "nis" in your /etc/pam.d/common-password file.

Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map.

Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:

#! /bin/sh

PATH=/usr/sbin:/usr/bin

# only watch the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi

cd /var/yp || exit 3
sleep 2
make

Then install the package incron.

# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e

Add this line:

/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%

It's not possible to use IN_MODIFY or watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file does not work after the file was removed.

You can see the logs from incrond by using:

# journalctl _COMM=incrond

e.g.:

Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)

I've disabled the execution of yppasswd using dpkg-divert

# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
chmod a-rwx /usr/bin/yppasswd-disable

Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets.

I've also discovered the package pamtester, which is a nice package for testing your pam configs.

Worse Than FailureError'd: Relatively Speaking

Amateur physicist B.J. is going on vacation, but he likes to plan things right down to the zeptosecond. "Assume the flight accelerates at a constant speed for the first half of the flight, and decelerates at the same rate for the second half. 1) What speed does the plane need to reach to have that level of time dilation? 2) What is the distance between the airports?"

Contrarily, Eddie R. was tired of vacation so he got a new job, but right away he's having second thoughts. "Doing my onboarding, but they seem to have trouble with the idea of optional."

"Forget UTF-8! Have you heard about the new, hot encoding standard for 2024?!" exclaimed Daniel , kvetching "Well, if you haven't then Gravity Forms co. is going to change your mind: URLEncode everything now! Specially if you need to display some diacritics on your website. Throw away the old, forgotten UTF-8. Be a cool guy, just use that urlencode!"

Immediately afterward, Daniel also sent us another good example, this time from Hetzner. He complains "Hetzner says the value is invalid. Of course they won't say what is or isn't allowed. It wasn't the slash character, it was... a character with diacritics! Hetzner is clearly using US-ASCII created in 1960's."

Finally this week, we pulled something out of the archive from Boule de Berlin who wrote "Telekom, the biggest German ISP, shows email address validation is hard. They use a regex that limits the TLD part of an email address to 4 chars." Old but timeless.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianFreexian Collaborators: Debian Contributions: October’s report (by Anupa Ann Joseph)

Debian Contributions: 2024-10

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

rebootstrap, by Helmut Grohne

After significant changes earlier this year, the state of architecture cross bootstrap is normalizing again. More and more architectures manage to complete rebootstrap testing successfully again. Here are two examples of what kind of issues the bootstrap testing identifies.

At some point, libpng1.6 would fail to cross build on musl architectures, failing to locate zlib, whereas it would succeed on other ones. Adding --debug-find to the cmake invocation eventually revealed that it would fail to search in /usr/lib/<triplet>, which is the default library path. This turned out to be a bug in cmake assuming that all Linux systems use glibc. libpng1.6 also gained a baseline violation for powerpc and ppc64 by enabling the use of AltiVec there.

The newt package would fail to cross build for many 32-bit architectures, whereas it would succeed for armel and armhf, due to -Wincompatible-pointer-types. It turns out that this flag was turned into -Werror and the code had merely compiled with a warning earlier. The actual problem is a difference in signedness between wchar_t and FriBidiChar (aka uint32_t), and it actually affects native building on i386.

Miscellaneous contributions

  • Helmut sent 35 patches for cross build failures.
  • Stefano Rivera uploaded the Python 3.13.0 final release.
  • Stefano continued to rebuild Python packages with C extensions using Python 3.13, to catch compatibility issues before the 3.13-add transition starts.
  • Stefano uploaded new versions of a handful of Python packages, including: dh-python, objgraph, python-mitogen, python-truststore, and python-virtualenv.
  • Stefano packaged a new release of mkdocs-macros-plugin, which required packaging a new Python package for Debian, python-super-collections (now in NEW review).
  • Stefano helped the mini-DebConf Online Brazil get video infrastructure up and running for the event. Unfortunately, Debian’s online-DebConf setup has bitrotted over the last couple of years, and it eventually required new temporary Jitsi and Jibri instances.
  • Colin Watson fixed a number of autopkgtest failures to get ansible back into testing.
  • Colin fixed an ssh client failure in certain cases when using GSS-API key exchange, and added an integration test to ensure this doesn’t regress in future.
  • Colin worked on the Python 3.13 transition, fixing problems related to it in 15 packages. This included upstream work in a number of packages (postgresfixture, python-asyncssh, python-wadllib).
  • Colin upgraded 41 Python packages to new upstream versions.
  • Carles improved po-debconf-manager: it can now create merge requests on Salsa automatically (17 created, with a new batch coming this month). He imported almost all the packages with debconf translation templates whose VCS is Salsa (currently 449 imported), added statistics per package and language, and improved the command-line interface options. He also performed user support, fixing different issues, and prepared an abstract for the talk at MiniDebConf Toulouse.
  • Santiago Ruano Rincón continued the organization work for the DebConf 25 conference, to be held in Brest, France. Part of the work relates to the initial edits of the sponsoring brochure. Thanks to Benjamin Somers who finalized the French and English versions.
  • Raphaël forwarded a couple of zim and hamster bugs to the upstream developers, and tried to diagnose a delayed startup of gdm on his laptop (cf #1085633).
  • On behalf of the Debian Publicity Team, Anupa interviewed 7 women from the Debian community, old and new contributors. The interview was published in Bits from Debian.

Planet DebianReproducible Builds (diffoscope): diffoscope 283 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 283. This version includes the following changes:

[ Martin Abente Lahaye ]
* Fix crash when objdump is missing when checking .EFI files.

You can find out more by visiting the project homepage.


Cryptogram AI Industry is Trying to Subvert the Definition of “Open Source AI”

The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it’s how the model gets programmed—the definition makes no sense.

And it’s confusing; most “open source” AI models—like LLAMA—are open source in name only. But the OSI seems to have been co-opted by industry players that want both corporate secrecy and the “open source” label. (Here’s one rebuttal to the definition.)

This is worth fighting for. We need a public AI option, and open source—real open source—is a necessary component of that.

But while open source should mean open source, there are some partially open models that need some sort of definition. There is a big research field of privacy-preserving, federated methods of ML model training and I think that is a good thing. And OSI has a point here:

Why do you allow the exclusion of some training data?

Because we want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information, like decisions about their health. Similarly, much of the world’s Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

How about we call this “open weights” and not open source?

Cryptogram Friday Squid Blogging: Squid-A-Rama in Des Moines

Squid-A-Rama will be in Des Moines at the end of the month.

Visitors will be able to dissect squid, explore fascinating facts about the species, and witness a live squid release conducted by local divers.

How are they doing a live squid release? Simple: this is Des Moines, Washington; not Des Moines, Iowa.

Blog moderation policy.

Cryptogram Prompt Injection Defenses Against LLM Cyberattacks

Interesting research: “Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks“:

Large language models (LLMs) are increasingly being harnessed to automate cyberattacks, making sophisticated exploits more accessible and scalable. In response, we propose a new defense strategy tailored to counter LLM-driven cyberattacks. We introduce Mantis, a defensive framework that exploits LLMs’ susceptibility to adversarial inputs to undermine malicious operations. Upon detecting an automated cyberattack, Mantis plants carefully crafted inputs into system responses, leading the attacker’s LLM to disrupt their own operations (passive defense) or even compromise the attacker’s machine (active defense). By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker’s LLM, Mantis can autonomously hack back the attacker. In our experiments, Mantis consistently achieved over 95% effectiveness against automated LLM-driven attacks. To foster further research and collaboration, Mantis is available as an open-source tool: this https URL.

This isn’t the solution, of course. But this sort of thing could be part of a solution.

Planet DebianJonathan Dowland: John Carpenter's "The Fog"

'The Fog' 7 inch vinyl record

A gift from my brother. Coincidentally I’ve had John Carpenter’s “Halloween” echoing around my head for weeks: I’ve been deconstructing it and trying to learn to play it.

Worse Than FailureRepresentative Line: One More Parameter, Bro

Matt needed to add a new field to a form. This simple task was made complicated by the method used to save changes back to the database. Let's see if you can spot what the challenge was:

public int saveQualif(String docClass, String transcomId, String cptyCod, String tradeId, String originalDealId, String codeEvent, String multiDeal,
            String foNumber, String codeInstrfamily, String terminationDate, String premiumAmount, String premiumCurrency, String notionalAmount,
            String codeCurrency, String notionalAmount2, String codeCurrency2, String fixedRate, String payout, String maType, String maDate,
            String isdaZoneCode, String tradeDate, String externalReference, String entityCode, String investigationFileReference,
            String investigationFileStartDate, String productType, String effectiveDate, String expiryDate, String paymentDate, String settInstrucTyp,
            String opDirection, String pdfPassword, String extlSysCod, String extlDeaId, String agrDt) throws TechnicalException, DfException

That's 36 parameters right there. This function, internally, creates a data access object which takes just as many parameters in its constructor, and then does a check: if a field is non-null, it updates that field in the database, otherwise it doesn't.

Of course, every single one of those parameters is stringly typed, which makes it super fun. Tracking premiumAmount and terminationDate as strings is certainly never going to lead to problems. I especially like the pdfPassword being stored, which is clearly just the low-security password meant to be used for encrypting a transaction statement or similar: "the last 4 digits of your SSN" or whatever. So I guess it's okay that it's being stored in the clear in the database, but also I still hate it. Do better!

In any case, this function was called twice. Once from the form that Matt was editing, where every parameter was filled in. The second time, it was called like this:

int nbUpdates = incoming.saveQualif(docClass, null, null, null, null, null, multiDeal, null,
                null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null,
                null, null, null, null, null, null, null, null, null, null, null, null);

As tempted as Matt was to fix this method and break it up into multiple calls or change the parameters to a set of classes or anything better, he was too concerned about breaking something and spending a lot of time on what was meant to be a small, fast task. So like everyone who'd come before him, he just slapped in another parameter, tested it, and called it a day.

Refactoring is a problem for tomorrow's developer.
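If tomorrow's developer ever does turn up, one well-worn direction is a typed parameter object with a builder, so call sites only mention the fields they actually set. A minimal sketch with illustrative names (this is not the actual codebase, and only three of the 36 fields are shown):

import java.math.BigDecimal;
import java.time.LocalDate;

public final class QualifUpdate {
    // Typed fields instead of strings; null still means "don't update this column".
    private final String docClass;
    private final BigDecimal premiumAmount;
    private final LocalDate terminationDate;
    // ...the remaining fields would follow the same pattern...

    private QualifUpdate(Builder b) {
        this.docClass = b.docClass;
        this.premiumAmount = b.premiumAmount;
        this.terminationDate = b.terminationDate;
    }

    public static final class Builder {
        private String docClass;
        private BigDecimal premiumAmount;
        private LocalDate terminationDate;

        public Builder docClass(String v) { docClass = v; return this; }
        public Builder premiumAmount(BigDecimal v) { premiumAmount = v; return this; }
        public Builder terminationDate(LocalDate v) { terminationDate = v; return this; }
        public QualifUpdate build() { return new QualifUpdate(this); }
    }
}

The 36-null call site then collapses to something like new QualifUpdate.Builder().docClass(docClass).build(), and adding the next field stops being a 36-parameter archaeology dig.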

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsI Need A New Human

Author: Lynne M Curry I’m your Chatbot Partner. Do you know I exist? Don’t get upset—I know you’re not oblivious. But you never say anything. Not once, not even a passing, “Oh, I hadn’t thought of that” or “thanks.” Do you know I picked you? Maybe you think you just clicked “open AI Chatbot” and, […]

The post I Need A New Human appeared first on 365tomorrows.


Planet DebianBits from Debian: Bits from the DPL

Dear Debian community,

this is Bits from DPL for October. In addition to a summary of my recent activities, I aim to include newsworthy developments within Debian that might be of interest to the broader community. I believe this provides valuable insights and fosters a sense of connection across our diverse projects. Also, I welcome your feedback on the format and focus of these Bits, as community input helps shape their value.

Ada Lovelace Day 2024

As outlined in my platform, I'm committed to increasing the diversity of Debian developers. I hope the recent article celebrating Ada Lovelace Day 2024–featuring interviews with women in Debian–will serve as an inspiring motivation for more women to join our community.

MiniDebConf Cambridge

This was my first time attending the MiniDebConf in Cambridge, hosted at the ARM building. I thoroughly enjoyed the welcoming atmosphere of both MiniDebCamp and MiniDebConf. It was wonderful to reconnect with people who hadn't made it to the last two DebConfs, and, as always, there was plenty of hacking, insightful discussions, and valuable learning.

If you missed the recent MiniDebConf, there's a great opportunity to attend the next one in Toulouse. It was recently decided to include a MiniDebCamp beforehand as well.

FTPmaster accepts MRs for DAK

At the recent MiniDebConf in Cambridge, I discussed potential enhancements for DAK to make life easier for both FTP Team members and developers. For those interested, the document "Hacking on DAK" provides guidance on setting up a local DAK instance and developing patches, which can be submitted as MRs.

As a perfectly random example of such improvements some older MR, "Add commands to accept/reject updates from a policy queue" might give you some inspiration.

At MiniDebConf, we compiled an initial list of features that could benefit both the FTP Team and the developer community. While I had preliminary discussions with the FTP Team about these items, not all ideas had consensus. I aim to open a detailed, public discussion to gather broader feedback and reach a consensus on which features to prioritize.

  • Accept+Bug report

Sometimes, packages are rejected not because of DFSG-incompatible licenses but due to other issues that could be resolved within an existing package (as discussed in my DebConf23 BoF, "Chatting with ftpmasters"[1]). During the "Meet the ftpteam" BoF (a log/transcription of the BoF can be found here), a new option was proposed for FTP Team members reviewing packages in NEW, pending acceptance of the corresponding MR:

Accept + Bug Report

This option would allow a package to enter Debian (in unstable or experimental) with an automatically filed RC bug report. The RC bug would prevent the package from migrating to testing until the issues are addressed. To ensure compatibility with the BTS, which only accepts bug reports for existing packages, a delayed job (24 hours post-acceptance) would file the bug.

  • Binary name changes: for instance, letting changes uploaded to experimental skip NEW

When binary package names change, currently the package must go through the NEW queue, which can delay the availability of updated libraries. Allowing such packages to bypass the queue could expedite this process. A configuration option to enable this bypass specifically for uploads to experimental may be useful, as it avoids requiring additional technical review for experimental uploads.

Previously, I believed the requirement for binary name changes to pass through NEW was due to a missing feature in DAK, possibly addressable via an MR. However, in discussions with the FTP Team, I learned this is a matter of team policy rather than technical limitation. I haven't found this policy documented, so it may be worth having a community discussion to clarify and reach consensus on how we want to handle binary name changes to get the MR sensibly designed.

  • Remove dependency tree

When a developer requests the removal of a package – whether entirely or for specific architectures – RM bugs must be filed for the package itself as well as for each package depending on it. It would be beneficial if the dependency tree could be automatically resolved, allowing either:

a) the DAK removal tooling to remove the entire dependency tree
   after prompting the bug report author for confirmation, or

b) the system to auto-generate corresponding bug reports for all
   packages in the dependency tree.

The latter option might be better suited for implementation in an MR for reportbug. However, given the possibility of large-scale removals (for example, targeting specific architectures), having appropriate tooling for this would be very beneficial.

In my opinion the proposed DAK enhancements aim to support both FTP Team members and uploading developers. I'd be very pleased if these ideas spark constructive discussion and inspire volunteers to start working on them--possibly even preparing to join the FTP Team.

On the topic of ftpmasters: an ongoing discussion with SPI lawyers is currently reviewing the non-US agreement established 22 years ago. Ideally, this review will lead to a streamlined workflow for ftpmasters, removing certain hurdles that were originally put in place due to legal requirements, which were updated in 2021.

Contacting teams

My outreach efforts to Debian teams have slowed somewhat recently. However, I want to emphasize that anyone from a packaging team is more than welcome to reach out to me directly. My outreach emails aren't following any specific orders--just my own somewhat naïve view of Debian, which I'm eager to make more informed.

Recently, I received two very informative responses: one from the Qt/KDE Team, which thoughtfully compiled input from several team members into a shared document. The other was from the Rust Team, where I received three quick, helpful replies–one of which included an invitation to their upcoming team meeting.

Interesting readings on our mailing lists

I consider the following threads on our mailing list some interesting reading and would like to add some comments.

Sensible languages for younger contributors

Though the discussion on debian-devel about programming languages took place in September, I recently caught up with it. I strongly believe Debian must continue evolving to stay relevant for the future.

"Everything must change, so that everything can stay the same." -- Giuseppe Tomasi di Lampedusa, The Leopard

I encourage constructive discussions on integrating programming languages in our toolchain that support this evolution.

Concerns regarding the "Open Source AI Definition"

A recent thread on the debian-project list discussed the "Open Source AI Definition". This topic will impact Debian in the future, and we need to reach an informed decision. I'd be glad to see more perspectives in the discussions, particularly on finding a sensible consensus, understanding how FTP Team members view their delegated role, and considering whether their delegation might need adjustments for clarity on this issue.

Kind regards Andreas.

Rondam RamblingsThe Bright Side of the Election Results

I'm writing this at 9AM Pacific standard time on November 6, the morning after the election.  Not all the dust has quite settled yet, but two things are clear: Donald Trump has won, and the Republicans have taken control of the Senate.  The House is still a toss-up, and it's still unclear whether Trump will win the popular vote, but the last time I looked at the numbers he had a pretty

Planet DebianDaniel Lange: Weird times ... or how the New York DEC decided the US presidential elections

November 2024 will be known as the time when killing peanut, a pet squirrel, by the New York State DEC swung the US presidential elections and shaped history forever.

The hundreds of millions of dollars spent on each side, the tireless campaigning by the candidates, the celebrity endorsements ... all made for an open race for months. Investments evened each other out.

But an OnlyFans producer showing people an overreaching, bureaucracy driven State raiding his home to confiscate a pet squirrel and kill it ... swung enough voters to decide the elections.

That is what we need to understand in times of instant worldwide publication and a mostly attention driven economy: Human fates, elections, economic cycles and wars can be decided by people killing squirrels.

RIP, peanut.

P.S.: Trump Media & Technology Group Corp. (DJT) stock is up 30% pre-market.

Planet DebianJaldhar Vyas: Making America Great Again

Making America Great Again

Justice For Peanut

Some interesting takeaways (With the caveat that exit polls are not completely accurate and we won't have the full picture for days.)

  • President Trump seems to have won the popular vote which no Republican has done I believe since Reagan.

  • Apparently women didn't particularly care about abortion (CNN said only 14% considered it their primary issue) There is a noticable divide but it is single versus married not women versus men per se.

  • Hispanics who are here legally voted against Hispanics coming here illegally. Latinx's didn't vote for anything because they don't exist.

  • The infamous MSG rally joke had no effect on the voting habits of Puerto Ricans.

  • Republicans have taken the Senate and if trends continue as they are will retain control of the House of Representatives.

  • President Biden may have actually been a better candidate than Border Czar Harris.

365 TomorrowsThe Fall of Man

Author: Alastair Millar Prosperina Station’s marketing slogan, “No sun means more fun!”, didn’t do it justice: circling the wandering gas giant PSO J318.5-22, better known as Dis, it was the ultimate in literally non-stop nightlife, seasoned with a flexible approach to Terran laws. Newly graduated robot designer Max Wayne knew she was a decade or […]

The post The Fall of Man appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Uniquely Validated

There's the potential for endless installments of "programmers not understanding how UUIDs work." Frankly, I think the fact that we represent them as human-readable strings is part of the problem; sure, it's readable, but it conceals the fact that it's just a large integer.

Which brings us to this snippet, from Capybara James.

    if (!StringUtils.hasLength(uuid) || uuid.length() != 36) {
        throw new RequestParameterNotFoundException(ErrorCodeCostants.UUID_MANDATORY_OR_FORMAT);
    }

StringUtils.hasLength comes from the Spring library, and it's a simple "is not null or empty" check. So: we're testing to see if a string is null or empty, or isn't exactly 36 characters long. That tells us the input is bad, so we throw a RequestParameterNotFoundException, along with an error code.

So, as already pointed out, a UUID is just a large integer that we render as a 36-character string, and there are better ways to validate a UUID. But this also will accept any 36-character string: as long as you've got 36 characters, we'll call it a UUID. "This is valid, really valid, dumbass" is now a valid UUID.

With that in mind, I also like the bonus of it not distinguishing between whether or not the input was missing or invalid, because that'll make it real easy for users to understand why their input is getting rejected.
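For the record, a stricter check doesn't take much, since the JDK can parse the value itself. A minimal sketch (note that UUID.fromString alone is lenient and accepts some shorter, non-canonical forms, which is why the length check still earns its keep):

import java.util.UUID;

public final class UuidCheck {
    static boolean isValidUuid(String s) {
        if (s == null || s.length() != 36) {
            return false; // insist on the canonical 36-character form
        }
        try {
            UUID.fromString(s); // throws if it isn't hex groups and dashes
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}

Distinguishing "missing" from "malformed" when reporting the error is left as an exercise for the original developer.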

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Cryptogram IoT Devices in Password-Spraying Botnet

Microsoft is warning Azure cloud users that a Chinese-controlled botnet is engaging in “highly evasive” password spraying. Not sure about the “highly evasive” part; the techniques seem basically what you get in a distributed password-guessing attack:

“Any threat actor using the CovertNetwork-1658 infrastructure could conduct password spraying campaigns at a larger scale and greatly increase the likelihood of successful credential compromise and initial access to multiple organizations in a short amount of time,” Microsoft officials wrote. “This scale, combined with quick operational turnover of compromised credentials between CovertNetwork-1658 and Chinese threat actors, allows for the potential of account compromises across multiple sectors and geographic regions.”

Some of the characteristics that make detection difficult are:

  • The use of compromised SOHO IP addresses
  • The use of a rotating set of IP addresses at any given time. The threat actors had thousands of available IP addresses at their disposal. The average uptime for a CovertNetwork-1658 node is approximately 90 days.
  • The low-volume password spray process; for example, monitoring for multiple failed sign-in attempts from one IP address or to one account will not detect this activity.


Krebs on SecurityCanadian Man Arrested in Snowflake Data Extortions

A 25-year-old man in Ontario, Canada has been arrested for allegedly stealing data from and extorting more than 160 companies that used the cloud data service Snowflake.

Image: https://www.pomerium.com/blog/the-real-lessons-from-the-snowflake-breach

On October 30, Canadian authorities arrested Alexander Moucka, a.k.a. Connor Riley Moucka of Kitchener, Ontario, on a provisional arrest warrant from the United States. Bloomberg first reported Moucka’s alleged ties to the Snowflake hacks on Monday.

At the end of 2023, malicious hackers learned that many large companies had uploaded huge volumes of sensitive customer data to Snowflake accounts that were protected with little more than a username and password (no multi-factor authentication required). After scouring darknet markets for stolen Snowflake account credentials, the hackers began raiding the data storage repositories used by some of the world’s largest corporations.

Among those was AT&T, which disclosed in July that cybercriminals had stolen personal information and phone and text message records for roughly 110 million people — nearly all of its customers. Wired.com reported in July that AT&T paid a hacker $370,000 to delete stolen phone records.

A report on the extortion attacks from the incident response firm Mandiant notes that Snowflake victim companies were privately approached by the hackers, who demanded a ransom in exchange for a promise not to sell or leak the stolen data. All told, more than 160 Snowflake customers were relieved of data, including TicketMaster, Lending Tree, Advance Auto Parts and Neiman Marcus.

Moucka is alleged to have used the hacker handles Judische and Waifu, among many others. These monikers correspond to a prolific cybercriminal whose exploits were the subject of a recent story published here about the overlap between Western, English-speaking cybercriminals and extremist groups that harass and extort minors into harming themselves or others.

On May 2, 2024, Judische claimed on the fraud-focused Telegram channel Star Chat that they had hacked Santander Bank, one of the first known Snowflake victims. Judische would repeat that claim in Star Chat on May 13 — the day before Santander publicly disclosed a data breach — and would periodically blurt out the names of other Snowflake victims before their data even went up for sale on the cybercrime forums.

404 Media reports that at a court hearing in Ontario this morning, Moucka called in from a prison phone and said he was seeking legal aid to hire an attorney.

KrebsOnSecurity has learned that Moucka is currently named in multiple indictments issued by U.S. prosecutors and federal law enforcement agencies. However, it is unclear which specific charges the indictments contain, as all of those cases remain under seal.

TELECOM DOMINOES

Mandiant has attributed the Snowflake compromises to a group it calls “UNC5537,” with members based in North America and Turkey. Sources close to the investigation tell KrebsOnSecurity the UNC5537 member in Turkey is John Erin Binns, an elusive American man indicted by the U.S. Department of Justice (DOJ) for a 2021 breach at T-Mobile that exposed the personal information of at least 76.6 million customers.

In a statement on Moucka’s arrest, Mandiant said UNC5537, a.k.a. Alexander ‘Connor’ Moucka, has proven to be one of the most consequential threat actors of 2024.

“In April 2024, UNC5537 launched a campaign, systematically compromising misconfigured SaaS instances across over a hundred organizations,” wrote Austin Larsen, Mandiant’s senior threat analyst. “The operation, which left organizations reeling from significant data loss and extortion attempts, highlighted the alarming scale of harm an individual can cause using off-the-shelf tools.”

Sources involved in the investigation said UNC5537 has focused on hacking into telecommunications companies around the world. Those sources told KrebsOnSecurity that Binns and Judische are suspected of stealing data from India’s largest state-run telecommunications firm Bharat Sanchar Nigam Ltd (BSNL), and that the duo even bragged about being able to intercept or divert phone calls and text messages for a large portion of the population of India.

Judische appears to have outsourced the sale of databases from victim companies who refuse to pay, delegating some of that work to a cybercriminal who uses the nickname Kiberphant0m on multiple forums. In late May 2024, Kiberphant0m began advertising the sale of hundreds of gigabytes of data stolen from BSNL.

“Information is worth several million dollars but I’m selling for pretty cheap,” Kiberphant0m wrote of the BSNL data in a post on the English-language cybercrime community Breach Forums. “Negotiate a deal in Telegram.”

Also in May 2024, Kiberphant0m took to the Russian-language hacking forum XSS to sell more than 250 gigabytes of data stolen from an unnamed mobile telecom provider in Asia, including a database of all active customers and software allowing the sending of text messages to all customers.

On September 3, 2024, Kiberphant0m posted a sales thread on XSS titled “Selling American Telecom Access (100B+ Revenue).” Kiberphant0m’s asking price of $200,000 was apparently too high because they reposted the sales thread on Breach Forums a month later, with a headline that more clearly explained the data was stolen from Verizon‘s “push-to-talk” (PTT) customers — primarily U.S. government agencies and first responders.

404Media reported recently that the breach does not appear to impact the main consumer Verizon network. Rather, the hackers broke into a third party provider and stole data on Verizon’s PTT systems, which are a separate product marketed towards public sector agencies, enterprises, and small businesses to communicate internally.

INTERVIEW WITH JUDISCHE

Investigators say Moucka shared a home in Kitchener with other tenants, but not his family. His mother was born in Chechnya, and he speaks Russian in addition to French and English. Moucka’s father died of a drug overdose at age 26, when the defendant was roughly five years old.

A person claiming to be Judische began communicating with this author more than three months ago on Signal after KrebsOnSecurity started asking around about hacker nicknames previously used by Judische over the years.

Judische admitted to stealing and ransoming data from Snowflake customers, but he said he’s not interested in selling the information, and that others have done this with some of the data sets he stole.

“I’m not really someone that sells data unless it’s crypto [databases] or credit cards because they’re the only thing I can find buyers for that actually have money for the data,” Judische told KrebsOnSecurity. “The rest is just ransom.”

Judische has sent this reporter dozens of unsolicited and often profane messages from several different Signal accounts, all of which claimed to be an anonymous tipster sharing different identifying details for Judische. This appears to have been an elaborate effort by Judische to “detrace” his movements online and muddy the waters about his identity.

Judische frequently claimed he had unparalleled “opsec” or operational security, a term that refers to the ability to compartmentalize and obfuscate one’s tracks online. In an effort to show he was one step ahead of investigators, Judische shared information indicating someone had given him a Mandiant researcher’s assessment of who and where they thought he was. Mandiant says those were discussion points shared with select reporters in advance of the researcher’s recent talk at the LabsCon security conference.

But in a conversation with KrebsOnSecurity on October 26, Judische acknowledged it was likely that the authorities were closing in on him, and said he would seriously answer certain questions about his personal life.

“They’re coming after me for sure,” he said.

In several previous conversations, Judische referenced suffering from an unspecified personality disorder, and when pressed said he has a condition called “schizotypal personality disorder” (STPD).

According to the Cleveland Clinic, schizotypal personality disorder is marked by a consistent pattern of intense discomfort with relationships and social interactions: “People with STPD have unusual thoughts, speech and behaviors, which usually hinder their ability to form and maintain relationships.”

Judische said he was prescribed medication for his psychological issues, but that he doesn’t take his meds, which might explain why he never leaves his home.

“I never go outside,” Judische allowed. “I’ve never had a friend or true relationship not online nor in person. I see people as vehicles to achieve my ends no matter how friendly I may seem on the surface, which you can see by how fast I discard people who are loyal or [that] I’ve known a long time.”

Judische later admitted he doesn’t have an official STPD diagnosis from a physician, but said he knows that he exhibits all the signs of someone with this condition.

“I can’t actually get diagnosed with that either,” Judische shared. “Most countries put you on lists and restrict you from certain things if you have it.”

Asked whether he has always lived at his current residence, Judische replied that he had to leave his hometown for his own safety.

“I can’t live safely where I’m from without getting robbed or arrested,” he said, without offering more details.

A source familiar with the investigation said Moucka previously lived in Quebec, which he allegedly fled after being charged with harassing others on the social network Discord.

Judische claims to have made at least $4 million in his Snowflake extortions. Judische said he and others frequently targeted business process outsourcing (BPO) companies, staffing firms that handle customer service for a wide range of organizations. They also went after managed service providers (MSPs) that oversee IT support and security for multiple companies, he claimed.

“Snowflake isn’t even the biggest BPO/MSP multi-company dataset on our networks, but what’s been exfiltrated from them is well over 100TB,” Judische bragged. “Only ones that don’t pay get disclosed (unless they disclose it themselves). A lot of them don’t even do their SEC filing and just pay us to fuck off.”

INTEL SECRETS

The other half of UNC5537 — 24-year-old John Erin Binns — was arrested in Turkey in late May 2024, and currently resides in a Turkish prison. However, it is unclear if Binns faces any immediate threat of extradition to the United States, where he is currently wanted on criminal hacking charges tied to the 2021 breach at T-Mobile.

A person familiar with the investigation said Binns’s application for Turkish citizenship was inexplicably approved after his incarceration, leading to speculation that Binns may have bought his way out of a sticky legal situation.

Under the Turkish constitution, a Turkish citizen cannot be extradited to a foreign state. Turkey has been criticized for its “golden passport” program, which provides citizenship and sanctuary for anyone willing to pay several hundred thousand dollars.

This is an image of a passport that Binns shared in one of many unsolicited emails to KrebsOnSecurity since 2021. Binns never explained why he sent this in Feb. 2023.

Binns’s alleged hacker alter egos — “IRDev” and “IntelSecrets” — were at once feared and revered on several cybercrime-focused Telegram communities, because he was known to possess a powerful weapon: A massive botnet. From reviewing the Telegram channels Binns frequented, we can see that others in those communities — including Judische — heavily relied on Binns and his botnet for a variety of cybercriminal purposes.

The IntelSecrets nickname corresponds to an individual who has claimed responsibility for modifying the source code for the Mirai “Internet of Things” botnet to create a variant known as “Satori,” and supplying it to others who used it for criminal gain and were later caught and prosecuted.

Since 2020, Binns has filed a flood of lawsuits naming various federal law enforcement officers and agencies — including the FBI, the CIA, and the U.S. Special Operations Command (PDF), demanding that the government turn over information collected about him and seeking restitution for his alleged kidnapping at the hands of the CIA.

Binns claims he was kidnapped in Turkey and subjected to various forms of psychological and physical torture. According to Binns, the U.S. Central Intelligence Agency (CIA) falsely told their counterparts in Turkey that he was a supporter or member of the Islamic State (ISIS), a claim he says led to his detention and torture by the Turkish authorities.

However, in a 2020 lawsuit he filed against the CIA, Binns himself acknowledged having visited a previously ISIS-controlled area of Syria prior to moving to Turkey in 2017.

A segment of a lawsuit Binns filed in 2020 against the CIA, in which he alleges the U.S. put him on a terror watch list after he traveled to Syria in 2017.

Sources familiar with the investigation told KrebsOnSecurity that Binns was so paranoid about possible surveillance on him by American and Turkish intelligence agencies that his erratic behavior and online communications actually brought about the very government snooping that he feared.

In several online chats in late 2023 on Discord, IRDev lamented being lured into a law enforcement sting operation after trying to buy a rocket launcher online. A person close to the investigation confirmed that at the beginning of 2023, IRDev began making earnest inquiries about how to purchase a Stinger, an American-made portable weapon that operates as an infrared surface-to-air missile.

Sources told KrebsOnSecurity Binns’ repeated efforts to purchase the projectile earned him multiple visits from the Turkish authorities, who were justifiably curious why he kept seeking to acquire such a powerful weapon.

WAIFU

A careful study of Judische’s postings on Telegram and Discord since 2019 shows this user is more widely known under the nickname “Waifu,” a moniker that corresponds to one of the more accomplished “SIM swappers” in the English-language cybercrime community over the years.

SIM swapping involves phishing, tricking or bribing mobile phone company employees for credentials needed to redirect a target’s mobile phone number to a device the attackers control — allowing thieves to intercept incoming text messages and phone calls.

Several SIM-swapping channels on Telegram maintain a frequently updated leaderboard of the 100 richest SIM-swappers, as well as the hacker handles associated with specific cybercrime groups (Waifu is ranked #24). That list has long included Waifu on a roster of hackers for a group that called itself “Beige.”

The term “Beige Group” came up in reporting on two stories published here in 2020. The first was in an August 2020 piece called Voice Phishers Targeting Corporate VPNs, which warned that the COVID-19 epidemic had brought a wave of targeted voice phishing attacks that tried to trick work-at-home employees into providing access to their employers’ networks. Frequent targets of the Beige group included employees at numerous top U.S. banks, ISPs, and mobile phone providers.

The second time Beige Group was mentioned by sources was in reporting on a breach at the domain registrar GoDaddy. In November 2020, intruders thought to be associated with the Beige Group tricked a GoDaddy employee into installing malicious software, and with that access they were able to redirect the web and email traffic for multiple cryptocurrency trading platforms.

Judische’s various Telegram identities have long claimed involvement in the 2020 GoDaddy breach, and he didn’t deny his alleged role when asked directly. Judische said he prefers voice phishing or “vishing” attacks that result in the target installing data-stealing malware, as opposed to tricking the user into entering their username, password and one-time code.

“Most of my ops involve malware [because] credential access burns too fast,” Judische explained.

CRACKDOWN ON HARM GROUPS?

The Telegram channels that the Judische/Waifu accounts frequented over the years show this user divided their time between posting in channels dedicated to financial cybercrime, and harassing and stalking others in harm communities like Leak Society and Court.

Both of these Telegram communities are known for victimizing children through coordinated online campaigns of extortion, doxing, swatting and harassment. People affiliated with harm groups like Court and Leak Society will often recruit new members by lurking on gaming platforms, social media sites and mobile applications that are popular with young people, including Discord, Minecraft, Roblox, Steam, Telegram, and Twitch.

“This type of offence usually starts with a direct message through gaming platforms and can move to more private chatrooms on other virtual platforms, typically one with video enabled features, where the conversation quickly becomes sexualized or violent,” warns a recent alert from the Royal Canadian Mounted Police (RCMP) about the rise of sextortion groups on social media channels.

“One of the tactics being used by these actors is sextortion, however, they are not using it to extract money or for sexual gratification,” the RCMP continued. “Instead they use it to further manipulate and control victims to produce more harmful and violent content as part of their ideological objectives and radicalization pathway.”

Some of the largest such known groups include those that go by the names 764, CVLT, Kaskar, 7997, 8884, 2992, 6996, 555, Slit Town, 545, 404, NMK, 303, and H3ll.

On the various cybercrime-oriented channels Judische frequented, he often lied about his or others’ involvement in various breaches. But Judische also at times shared nuggets of truth about his past, particularly when discussing the early history and membership of specific Telegram- and Discord-based cybercrime and harm groups.

Judische claimed in multiple chats, including on Leak Society and Court, that they were an early member of the Atomwaffen Division (AWD), a white supremacy group whose members are suspected of having committed multiple murders in the U.S. since 2017.

In 2019, KrebsOnSecurity exposed how a loose-knit group of neo-Nazis, some of whom were affiliated with AWD, had doxed and/or swatted nearly three dozen journalists at a range of media publications. Swatting involves communicating a false police report of a bomb threat or hostage situation and tricking authorities into sending a heavily armed police response to a targeted address.

Judische also told a fellow denizen of Court that years ago he was active in an older harm community called “RapeLash,” a truly vile Discord server known for attracting Atomwaffen members. A 2018 retrospective on RapeLash posted to the now defunct neo-Nazi forum Fascist Forge explains that RapeLash was awash in gory, violent images and child pornography.

A Fascist Forge member named “Huddy” recalled that RapeLash was the third incarnation of an extremist community also known as “FashWave,” short for Fascist Wave.

“I have no real knowledge of what happened with the intermediary phase known as ‘FashWave 2.0,’ but FashWave 3.0 houses multiple known Satanists and other degenerates connected with AWD, one of which got arrested on possession of child pornography charges, last I heard,” Huddy shared.

In June 2024, a Mandiant employee told Bloomberg that UNC5537 members have made death threats against cybersecurity experts investigating the hackers, and that in one case the group used artificial intelligence to create fake nude photos of a researcher to harass them.

Allison Nixon is chief research officer with the New York-based cybersecurity firm Unit 221B. Nixon is among several researchers who have faced harassment and specific threats of physical violence from Judische.

Nixon said Judische is likely to argue in court that his self-described psychological disorder(s) should somehow excuse his long career in cybercrime and in harming others.

“They ran a misinformation campaign in a sloppy attempt to cover up the hacking campaign,” Nixon said of Judische. “Coverups are an acknowledgment of guilt, which will undermine a mental illness defense in court. We expect that violent hackers from the [cybercrime community] will experience increasingly harsh sentences as the crackdown continues.”

5:34 p.m. ET: Updated story to include a clarification from Mandiant. Corrected Moucka’s age.

365 TomorrowsBifurcation

Author: Majoki Her fingers stinging, Salda felt the chill and vastness of the late spring runoff as she sat upon a large stone in the middle of the river. High above her in the mountains, that same frigid water was a torrent muscling rock and soil relentlessly to carve deep channels. Channels that converged, then […]

The post Bifurcation appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Counting it All

Since it's election day in the US, many people are thinking about counting today. We frequently discuss counting here, and how to do it wrong, so let's look at some code from RK.

This code may not be counting votes, but whatever it's counting, we're not going to enjoy it:

case LogMode.Row_limit: // row limit excel = 65536 rows
    if (File.Exists(personalFolder + @"\" + fileName + ".CSV"))
    {
        using (StreamReader reader = new StreamReader(personalFolder + @"\" + fileName + ".CSV"))
        {
            countRows = reader.ReadToEnd().Split(new char[] { '\n' }).Length;
        }
    }

Now, this code is from a rather old application, originally released in 2007. So the comment about Excel's row limit really puts us in a moment in time- Excel 2007 raised the row limit to 1,048,576 rows. But older versions of Excel did cap out at 65,536. And it wasn't the case that everyone just up and switched to Excel 2007 when it came out- the transition to the new Office file formats took years.

But we're not even reading an Excel file, we're reading a CSV.

I enjoy that we construct the name twice, because that's useful. But the real magic of this one is how we count the rows. Because while Excel can handle 65,536 rows at this time, I don't think this program is going to do a great job of it- because we read the entire file into memory with ReadToEnd, then Split on newlines, then count the length that way.

As you can imagine, in practice, this performed terribly on large files, of which there were many.
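The fix really is as easy as advertised. Here's a minimal sketch (hypothetical, not RK's actual code) that streams the file instead of slurping it, and builds the path only once:

string path = Path.Combine(personalFolder, fileName + ".CSV");
if (File.Exists(path))
{
    // File.ReadLines enumerates the file lazily, so memory use stays flat
    // no matter how many rows the CSV holds. Count() comes from System.Linq.
    countRows = File.ReadLines(path).Count();
}

Since File.ReadLines returns a lazy IEnumerable<string>, even a 65,536-row export never needs to be held in memory all at once.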

Unfortunately for RK, there's one rule about old, legacy code: don't touch it. So despite fixing this being a rather easy task, nobody is working on fixing it, because nobody wants to be the one who touched it last. Instead, management is promising to launch a greenfield replacement project any day now…

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

,

David BrinBalanced perspectives for our time - JUST in time?

Just before the consequential US election (I am optimistic we can prevail over Putinism), my previous posting offered a compiled packet of jpegs and quick bullets to use if you still have a marginally approachable, residually sane neighbor or relative who is 'sanity curious.' A truly comprehensive compendium! From the under-appreciated superb economy to the proven dangers of pollution. From Ukraine to proof of Trump's religion-fakery. From saving science to ...

... the biggest single sentence of them all... "Almost every single honest adult who served under Trump now denounces him." Now numbering hundreds. 

And Harrison Ford emphasizing that point with eloquence.

Anyone able to ignore that central fact... that grownups who get to know Trump all despise him... truly is already a Kremlin boy.


== More sober reflections == 

Fareed Zakaria is by far the best pundit of our time - sharp, incisive, with well-balanced big-perspective. And yet, even he is myopic about what's going on.

On this occasion, he starts with The Economist's cover story that the U.S. economy is the "Envy of the World." 

Booming manufacturing and wages, record-low unemployment, the lowest inflation among industrial nations (now down to 2%), with democratic policies finally transferring money to the middle class, after 40 years of Supply Side ripoffs for the rich. 

The Wall Street Journal - of all capitalist and traditionally Republican outfits - calls the present economy 'superb at all levels' and 'remarkable,' with real growth in middle class wages and prosperity.

 And yet, many in the working classes now despise the Rooseveltean coalition that gave them everything, and even many black & hispanic males flock to Trump's macho ravings.

Zakaria is spot-on saying it's no longer about economics - not when good times can be taken for granted. Rather, it's social and cultural, propelled by visceral loathing of urban, college educated 'elites' by those who remain blue-collar, rural and macho. 

One result - amplified in media-masturbatory echo chambers and online Nuremberg Rallies - has been all-out war vs all fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.

Where Fareed gets it all wrong is in claiming this is something new!  

Elsewhere I point out the same cultural divide has erupted across all EIGHT different phases of the American civil/cultural war, since 1778. Moreover, farmers and blue collar workers, etc. have been traumatized for a century, in one crucial way! As their brightest sons and daughters rushed off from high school graduation to city/university lights...

... and then came back (if they ever come back at all) changed. 
It's been going on for 140 years. And the GI Bill after WWII accelerated it prodigiously.

I won't apologize for that... but I admit it's gotta hurt.

While sympathy is called-for, we need to recall that the recurring confederate fever is always puppetted by aristocrats - by King George, by slaver plantation lords, by gilded-age moguls, by inheritance brats and today's murder sheiks & Kremlin "ex"-commissars... and whenever the confederacy wins (as in 1830s, 1870s and 1920s in the United States and 1933 Germany) the results are stagnation and horror. And every "Union" victory (as in the 1770s, 1860s, 1940s, 1960s) is followed by both moral and palpable progress.

See also Fareed Zakaria's perspectives in his recently released book, Age of Revolutions: Progress and Backlash from 1600 to the Present.


== For this last week ==

Trump has learned a lesson from his time in office. Never trust any adults or women and men of accomplishment and stature. He has said clearly he will never have another Kelly, Mattis, Mullen, Milley... or even partisan hacks with some pride, like Barr, Pence, etc... allowed anywhere near the Oval Office. 

In fact, he wants many people in his potential administration who have criminal records and cannot get security clearances under present rules. He wants to have a private firm do background checks instead of the government and military security clearance process. 

This would give a bunch of corrupt or blackmail-vulnerable criminals access to and control over our most critical and sensitive secrets.

And can anyone doubt any longer that he is a Kremlin agent?


== A final note of wisdom ==

Only one method has ever been found that can often (not always) discover, interrogate and refute lies and liars or hallucinators.**

That method has been accountability via free-speech-empowered adversarial rivalry.  Almost all of our enlightenment institutions and accomplishments and freedoms rely upon it... Pericles and Adam Smith spoke of it and the U.S. Founders enshrined it...

...and the method is almost never even remotely discussed in regard to today's tsunamis of lies.

And even if things go super well in the Tuesday election, this basic truth must also shine light into the whole new problem/opportunity of Artificial Intelligence. (And I go into that elsewhere.) 

 It must... or we're still screwed.

---
** I openly invite adversarial refutation of this assertion.

------------------------------------------
------------------------------------------

Okay okay. You want prediction? I'll offer four scenarios:

1.     Harris and dems win big. They must, for the “steal” yammer-lies to fade to nothing, except for maybe a few McVeigh eruptions. (God bless the FBI undercover guys!) In this scenario, everyone but Putin soon realizes things are already pretty good in the US and West and getting better... and many of our Republican neighbors – waking up from this insane trance – shake off confederatism and get back to loyally standing up for both America and enterprise. 


And perhaps the GOP will also shake away the heavily blackmail compromised portion of their upper castes and return to the pre-Hastert mission of negotiating sane conservative needs into a growing consensus.


2.     Harris squeaks in. We face 6 months of frantic Trumpian shrieks and Project 2025 ploys and desperate Kremlin plots and a tsunami of McVeighs.  (Again: God bless the FBI undercover guys!)  In this case, I will have a dozen ideas to present next week, to staunch the vile schemes of the Project 2025ers.


    In this case there will be confederate cries of "Secession!" over nothing real, as they had no real cause in 1861. We must answer "Okay fine this time. Off you go! Only we keep all military bases and especially we keep all of your blue cities (linking them with high speed rail), cities who get to secede from YOU!  Sell us beef and oil, till we make both obsolete! And you beg to be let back in.  Meanwhile, your brighter sons and daughters will still come over - with scholarships. So go in peace and God bless."


3.    Trump squeaks in and begins his reign of terror. We brace ourselves for the purge of all fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.  And within 6 months you will hear two words that I am speaking here for the 1st time: 


                    GENERAL STRIKE. 


    A legal and mammoth job action by those who actually know stuff and how to do stuff.  At which point then watch how redders realize how much they daily rely on our competence. And how quickly the oligarchs act to remove Trump, either through accelerating senility, or bribed retirement or... the Howard Beale scenario. At which point then Peter Thiel (briefly) owns America. It's Putin's dream outcome as the USA betrays Ukraine and Europe and the future... and tears itself apart. But no matter how painful, remember, we've recovered before. And we'll remember that you did this, Vlad and Peter. And those who empowered them.


    Oh, yes and this. Idiot believers in THE FOURTH TURNING will get their transformative 'crisis' that never had to happen and that they artificially caused (and we'll remember.) Above all, the Gen-Z 'hero generation' will know this. And you cultists will not like them, when they're mad.


    4. Trump landslide. Ain’t gonna happen. For one thing because Putin knows he won’t benefit if Trump is so empowered that he's freed from all puppet strings and blackmail threats. At which point Putin will suddenly realize he’s lost control - the way the German Junker caste lords lost control in 1933, as portrayed at the end of CABARET. 

Still confused why Putin wouldn't want this? Watch Angela Lansbury’s chilling soliloquy near the end of THE MANCHURIAN CANDIDATE. This outcome is the one Putin should most fear. 

By comparison, Kamala would likely let Vlad live. But a fully empowered Trump will erase Putin, along with every other oligarch who ever commanded or extorted or humiliated him - like those depicted below. And the grease stains will smolder.


Again... here's your compiled compendium of final ammo. To help us veer this back to victory for America, the planet, and the Union side in our recurring civil war... 

...followed by malice toward none and charity for all and a return to fraternal joy in being a light unto the world. 





David BrinMeme-images for your semi-sane and residually honorable MAGA

Swamped with patent disclosures, podcasts and the Great Big AI Panic of 2024. And just learned the H5N1 bird flu may be nastier soon! (😟check your supplies.) Also, I appear to be more optimistic than most... and most of you have voted already. 

Still, I gotta do what I can, offering you some final, concise leverage. Not for your hopeless MAGA-Putinist uncle. But maybe his worried wife, your residually-sane aunt.  

What leverage?  Why... punchy jpegs, of course! 


   == Ammo Images that might sway... ==


How can anyone still sway to the hypnotism...

...of a face drenched in makeup and dripping hair dye?   

But OK. Let's start with a simple question.  

             Who are his enemies and who are his friends? 

ONE sentence ought to settle everything:

                                                     

Two Defense Secretaries. Two chiefs of staff. His Attorney General. His National Security Advisor & Secretary of State. His domestic policy head. Chair of the Joint Chiefs. Two communications directors. His Vice President plus 250 more.

Make your MAGA see this jpeg! ==>


All of them were HIS choices. Whom he called "Great Guys!"... who are now denouncing him as a horror-calamity and lethal stooge of foreign enemies.

 

At minimum, he's a terrible judge of character! (Who fell 'in love' with Kim Jong Un.) 


But don't worry. In Trump II he's promised there will be no adults at all.


Examples: James Mattis, Marine General (ret.), Trump’s 1st Defense Secretary: “Donald Trump is the first president in my life who didn't even pretend to try to unite the American people. He tries to divide us.”

Mark Esper, Trump’s 2nd Defense Secretary: “I believe he is a threat to democracy.”

John Kelly, Marine General (ret.), Trump’s 2nd White House Chief of Staff: “He often said, ‘Hitler did good things, too.’”

Ask Joint Chiefs Chair Mark Milley +Admiral McRaven +250 other officers!  

 Ask nearly all scientists.

Ask counterinsurgency experts about “ex” commissar Putin’s long puppet strings.


But Don does have friends!

Here they are!==>

Have your MAGA zoom in and explain this.



== But... but isn't Trump the agent of God? ==

Such a Christian! Though if he ever willingly chose church over golf, no one has seen it. Here's one time he had to show up. And this one image says it all.


There's a hilarious and sad video of him mouthing along while trying to recite the famous 23rd Psalm with worshippers, giving up after "He leadeth me..." Too lazy even to memorize a couple of passages for show, he still, after all these years, refuses to name a favorite passage.
"It's too personal."  Riiiiight.

But then... some evangelicals can see all that! So they switch to the "Cyrus" argument. Like the Persian king who freed the people of Judah from Babylon, Trump is a 'righteous gentile!' A pagan who serves God by actions & deeds! 

(How? By destroying America and democracy and serving Moscow? But we'll get to that.)

Huh. Some servant of God. The most opposite-to-Jesus human any of us ever saw typifies every Deadly Sin! (Have your MAGA recite them aloud and NOT see Trump as the archetype!) 

Look, I don't credit the Book of Revelation. (Though all of you should know it! See the comic book version Apocamon; I mean it. You truly need to see what some of your neighbors wish and plan for you!) 

Still, there is a recurring character in that nightmare tome who DT resembles. I'm not talking about the Lamb of God. The character's name starts with "auntie" or "the Anti --" and Trump fits every described characteristic.  To a T.


== Is it the Economy, Stupid? ==

A problem with good times... folks soon take it for granted. Unemployment was the big issue. 
But after clawing our way out of the Covid Recession and Supply Chain inflation (nearly all economists blame Trump for worsening those) the 2021 stimulus bills worked!

Infrastructure - bridges etc. - are being fixed! Unemployment has stayed at the lowest level since the early 60s. We're in the best US economy since WWII.

Inflation? What? Ask your MAGA to step up NOW with wager stakes and bet which two nations have had the LOWEST inflation in the Industrial world for 3 years! 

(Hint, it's the US and Japan.)


 


Then why so grumpy?   Because Fox rants at fools to enjoy sanctimonious grumpiness! It's more fun than accepting the truth... that you are mesmerized by an echo chamber and Nuremberg Rally, with one central goal...

... to hate facts and fact professions & the damn, dry statistics.

But let's make it a wager. Assert the following and demand your MAGAs step up (like men) with cash stakes on the table:


EVERY Democratic Administration was more fiscally responsible regarding debt and deficits than EVERY Republican administration. 

In fact, most Democratic administrations had by far better economic outcomes across the board, for everyone except oligarchs and inheritance brats, who ALWAYS do vastly better under the GOP. Demand wagers!

But this is the biggie. The USA is undergoing the greatest boom in MANUFACTURING since the Second World War.  

That is unambiguous. Democrats did it.


== Climate Change ==

Nothing better illustrates the agenda of the GOP masters than fostering all-out war vs ALL fact using professions, from science, teaching, medicine, law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.  
"Hate the FBI!" is the most amazing and dangerous in the near term, but the anti-science campaign is core, over the long run.

Dig it. They're not attacking science in order to make $$ by delaying action on climate disaster. It's the reverse. They use climate denialism as one of many tools to attack all scientists and undermine trust in science.

Why? Oligarchs can't restore 6000 years of insipid feudalism til they castrate all fact professions. But more on that elsewhere.

The crisis over our endangered EARTH is a vast subject! But this posting is about last minute, punchy capsules. So use this: Foxites flee in panic when you mention OCEAN ACIDIFICATION, which is unambiguously killing the seas our children will need and can only be caused by CO2 pollution. How they run from those two words.

Alas, instead of giving credit to the genius meteorologists who now predict hurricane paths within a few miles FIVE DAYS in advance, jibberers yammer: "They cause hurricanes!"

WHO is 'they?'  No, never mind that. You said it when earthquakes hit California. Recognize God's wrath when you feel it...


== Ukraine and NATO and Putin ==

Seriously? Who do you think Ronald Reagan would side with? The barely changed Kremlin and relabeled KGB, run by "ex" commissars who all grew up reciting Leninist catechisms? Who are now re-nationalizing all Russian businesses, crushing dissent and rebuilding the USSR?


Um, can anyone with a trace of fairness in their hearts not root for and support the attacked Ukrainian underdogs? And say "Damn Putin and his fellow tyrants!"

Dig it: NATO is now stronger than at any time since 1946! Putin is fighting for his own murderous, richest-man-in-the-world life, desperate to get Trump into the Oval Office. It's his one hope.

LOOK at Trump's pals! At the expressions on their faces. Zoom in.


Can any of your neighbors who support Putin call anyone ELSE a 'commie'?


== Memory Lane ==


And wager NOW over the different death rates of the vaccinated vs. the un-vaccinated.  Death rates are simple. Even the reddest state supplies stats on that. And there's no ambiguity at all. Fox is trying to kill you.


== Immigration ==

But what about immigration? Well, surprise? I'll sorta half give you that one!

It's a vexing problem and the farthest left has not been helpful. They refuse to see how Putin and other tyrants have herded poor refugees into Europe and America, knowing it would push politics in those countries to the right.  And it has even worked on U.S. Hispanics, who poll overwhelmingly in favor of tighter borders.

Look, you may not like facing it, but Putin's strategy here has worked! And if you lefties want the power to do good, YOU are gonna have to prioritize. Compromise.

But this is not a Bill Maher screed aimed at woke-ist betrayals of the only coalition that can save the world. Later.

It is about far-worse MAGA lunacy. And what could be more lunatic than Trump ordering the GOP - last January - to torpedo the Immigration Bill they had just finished negotiating! 

That bill would have majorly increased the Border Patrol, plus internal tracking of refugee claimants and would have built more wall by now than the entire Trump presidency!

Now why would he do that? Simple. Going back generations before Trump & Putin took over the Republican Party, the GOP's master oligarchs loved cheap labor!

You just think about that now.

P.S. If a time comes when Republicans reject the madness and corruption that skyrocketed in the GOP since Dennis 'friend to boys' Hastert, and choose instead to return to political negotiation, moderate dems will race to work out incremental steps that mix pragmatic border security with helping refugees return safely to their improved home countries... while living by the American tradition (and biblical injunction) of kindness to legitimate newcomers.


== Again  - the most-effective single sentence is... ==


"ALL of the honest adults who served under Trump now denounce him."             

Earlier I showed former Trumpists who were admirable to some degree, and now denounce him. Now gaze at some more! Though some of these weren't quite as admirable as the 1st bunch. ==>

Still, these guys at least want the USA to survive! If only because it's where they keep their best stuff. Hypocrites some of them? I prefer the first set! Still, we need all the help...

On the other hand, THIS is a Republican we can all respect! (below):



== So what about fascism? ==

Seriously? This is an issue?

My Dad beat up f--ng Nazis in Chicago in the 1930s, when they marched both for Hitler and for the spectacularly misnamed "America First."  I know f--ng Nazis when I see em! And even if Trump isn't one by strict definition*...
   ... all the current American Nazis think he is! And they love him.


* But of course... he is.


== The endless lies ==

I notoriously demand WAGERS over all the lies! e.g. ANY randomly chosen 5 minute segment of any Trump speech! Put it before a randomly-chosen panel of low-political retired, senior military officers! 

I have a long list (dropped into comments*) of wager challenges. And not one MAGA in ten years has ever had the manly guts or confidence to step up with $$$ atty-escrowed stakes. Not one, ever. Weenie no-cojone cowards.

But let's start with Trump's endless promises to prove Obama was born in Kenya, or the mythical promise of a "Great Health Plan to replace Obamacare! I'll unveil it next week!" And then the next week and the next, for year after year after year... 

...and MAGAs never ask "Um, well?"


Or releasing "My great financials!" Or "I'll proudly release my tax returns when the IRS is done auditing!" Except the audits were a myth!  Or his college transcripts. Or the bone spur xrays. Or the fecal spew of lies during covid.

What we DO have is at least 20 copies of the Honolulu Advertiser from 1961 that folks have found in attics and garages all over Hawaii, with a birth announcement for Barack Obama. But any retraction or shame from ol' Two Scoops? Never.

There's a reason...

<==Declassify the "we fell in love!" notes from Kim! 

Then there's the biggest damn lie of them all...





== And heck, let's give you some more! ==

Do I have an ulterior motive, in dumping upon you this tsunami of jpegs? I mean other than hoping that a few of you will use them to help save the nation and world?

Hey, I am over 70, and pushing 'clippings' at the young is what we old farts do! ;-0

But still... I am angry at MAGA crapheads dumping on Tim Walz, a 25-year veteran who trained hundreds of young troops with patience that made him beloved... as with 20 years of high school civics students... and the teams he coached to state championships... and so much more. (The Putin servants searched for ONE former student they could bribe to denounce Walz; they couldn't find even one.) 

A command sergeant major whose shoes you lying bastards aren't fit to...

   Like this good man who served and still does ==>    

(Calm, David. You promised 'malice toward none...' Sure, after we save America in this 8th phase of the recurring civil war.)

In contrast to real men... we have this cringing, face-painted carnival barker... zoom in!

The colors are un-altered.






== Miscellaneous Adds! ==

Okay I'll conclude by dumping in a few more. Use whatever you like! MAKE the redeemable/reachable... if you know any... zoom in and see and then snap out of the trance! 





...and a few may even hear the call of Lincoln, Eisenhower and Teddy Roosevelt and even Reagan... realizing they must help rescue the Republican Party from treasonous madness. (LOOK below.)


And remember, Dems ALWAYS do better vs deficits and with almost every economic indicator...









Finally, here's my biggest effort at supplying political tactics that might have ended this phase of the US Civil War decisively, in 2020, instead of merely getting a Gettysburg - vital(!) but requiring us to keep fighting the same monster.  May this year be Appomattox! Followed by "Malice toward none and charity for all..."

...and an America that leads a consensus-wiser world toward freedom, hope, and the stars.

Polemical Judo


.........  And in the words of Tiny Tim... God bless us, one and all...


================

================


Oh, I oughta give originator credit lines for every single one of these jpegs!  It's a modern problem. Almost none of the postings I took them from had credits, either!  This is one thing I expect AI to solve and soon.  May they be Machines of Loving Grace.


Planet DebianRavi Dwivedi: Asante Kenya for a Good Time

In September of this year, I visited Kenya to attend the State of the Map conference. I spent six nights in the capital Nairobi, two nights in Mombasa, and one night on a train. I was very happy with the visa process, which was smooth and quick. I stayed at the Nairobi Transit Hotel with other attendees, with Ibtehal from Bangladesh as my roommate. One of the memorable moments was the time I spent at a local coffee shop nearby. We used to go there at midnight, despite the grating on the shops suggesting such adventures were unsafe. Fortunately, nothing bad happened, and we were rewarded with a fun time with the locals.

The coffee shop Ibtehal and I used to visit at midnight

Grating at a chemist shop in Mombasa, Kenya

The country lies on the equator, which might give the impression of extremely hot temperatures. However, Nairobi was on the cooler side (10–25 degrees Celsius), and I found myself needing a hoodie, which I bought the next day. It also served as a nice souvenir, as it had an outline of the African map printed on it.

I also bought a Safaricom SIM card for 100 shillings and recharged it with 1000 shillings for 8 GB internet with 5G speeds and 400 minutes talk time.

A visit to Nairobi’s Historic Cricket Ground

On this trip, I got a unique souvenir that can’t be purchased from the market—a cricket jersey worn in an ODI match by a player. The story goes as follows: I was roaming around the market with my friend Benson from Nairobi to buy a Kenyan cricket jersey for myself, but we couldn’t find any. So, Benson had the idea of visiting the Nairobi Gymkhana Club, which used to be Kenya’s main cricket ground. It has hosted some historic matches, including the 2003 World Cup match in which Kenya beat the mighty Sri Lankans, as well as Shahid Afridi’s record for the fastest ODI century, scored off just 37 balls in 1996.

Although entry to the club was exclusively for members, I was warmly welcomed by the staff. Upon reaching the cricket ground, I met some Indian players who played in Kenyan leagues, as well as Lucas Oluoch and Dominic Wesonga, who have represented Kenya in ODIs. When I expressed interest in getting a jersey, Dominic agreed to send me pictures of his. I liked it and collected it from him. I gave him 2000 shillings, an amount suggested by those Indian players.

Me with players at the Nairobi Gymkhana Club

Cricket pitch at the Nairobi Gymkhana Club

A view of the cricket ground inside the Nairobi Gymkhana Club

Scoreboard at the Nairobi Gymkhana cricket ground

Giraffe Center in Nairobi

Kenya is known for its safaris and has no shortage of national parks. In fact, Nairobi is the only capital in the world with a national park. I decided not to visit one, as most of them were expensive and offered multi-day tours, and I didn’t want to spend that much time in the wildlife.

Instead, I went to the Giraffe Center in Nairobi with Pragya and Rabina. The ticket cost 1500 Kenyan shillings (1000 Indian rupees). In Kenya, matatus - shared vans, usually decorated with portraits of famous people and playing rap songs - are the most popular means of public transport. Reaching the Giraffe Center from our hotel required taking five matatus, which cost a total of 150 shillings, and a 2 km walk. The journey back cost 90 shillings, suggesting that we didn’t find the most efficient route to get there. At the Giraffe Center, we fed giraffes and took photos.

A matatu with a Notorious BIG portrait.

Inside the Giraffe Center

Train ride from Nairobi to Mombasa

I took a train from Nairobi to Mombasa. The train is known as the “SGR Train,” where “SGR” refers to “Standard Gauge Railway.” The journey was around 500 km. M-Pesa was the only way to make payment for pre-booking the train ticket, and I didn’t have an M-Pesa account. Pragya’s friend Mary helped facilitate the payment. I booked a second-class ticket, which cost 1500 shillings (1000 Indian rupees).

The train was scheduled to depart from Nairobi at 08:00 hours in the morning and arrive in Mombasa at 14:00 hours. The security check at the station required scanning our bags and having them sniffed by sniffer dogs. I also fell victim to a scam by a security official who offered to help me get my ticket printed, only to later ask me to get him some coffee, which I politely declined.

Before boarding the train, I was treated to some stunning views at the Nairobi Terminus station. It was a seating train, but I wished it were a sleeper train, as I was sleep-deprived. The train was neat and clean, with good toilets. The train reached Mombasa on time at around 14:00 hours.

SGR train at Nairobi Terminus.

Interior of the SGR train

Arrival in Mombasa

Mombasa Terminus station.

Mombasa was a bit hotter than Nairobi, with temperatures reaching around 30 degrees Celsius. However, that’s not too hot for me, as I am used to higher temperatures in India. I had booked a hostel in the Old Town and was searching for a hitchhike from the Mombasa Terminus station. After trying for more than half an hour, I took a matatu that dropped me 3 km from my hostel for 200 shillings (140 Indian rupees). I tried to hitchhike again but couldn’t find a ride.

I think I know why I couldn’t get a ride in either case. In the first, the Mombasa Terminus was in an isolated place, so most of the vehicles were taxis or matatus, while any noncommercial cars were there to pick up friends and family. If the station were in the middle of the city, there would have been many more car and truck drivers passing by, increasing my chances of getting a ride. In the second, my hostel was at the edge of the city, and nobody was going that way. In fact, many drivers told me they would have loved to give me a ride, but they were going in some other direction.

Finally, I took a tuktuk for 70 shillings to reach my hostel, Tulia Backpackers. It was 11 USD (1400 shillings) for one night. The balcony gave a nice view of the Indian Ocean. The rooms had fans, but there was no air conditioning. Each bed also had mosquito nets. The place was within walking distance of the famous Fort Jesus. Mombasa has had more Islamic influence compared to Nairobi and also has many Hindu temples.

The balcony at Tulia Backpackers Hostel had a nice view of the ocean.

A room inside the hostel with fans and mosquito nets on the beds

Visiting White Sandy Beaches and Getting a Hitchhike

Visiting Nyali beach marked my first time ever at a white sand beach. It was like 10 km from the hostel. The next day, I visited Diani Beach, which was 30 km from the hostel. Going to Diani Beach required crossing a river, for which there’s a free ferry service every few minutes, followed by taking a matatu to Ukunda and then a tuk-tuk. The journey gave me a glimpse of the beautiful countryside of Kenya.

Nyali beach is a white sand beach

This is the ferry service for crossing the river.

During my return from Diani Beach to the hostel, I was successful in hitchhiking. However, it was only a 4 km ride and not sufficient to reach Ukunda, so I tried to get another ride. When a truck stopped for me, I asked for a ride to Ukunda. Later, I learned that they were going in the same direction as me, so I got off within walking distance from my hostel. The ride was around 30 km. I also learned the difference between a truck ride and a matatu or car ride. For instance, matatus and cars are much faster and cooler due to air conditioning, while trucks tend to be warmer because they lack it. Further, the truck was stopped at many checkpoints by the police for inspections as it carried goods, which is not the case with matatus. Anyways, it was a nice experience, and I am grateful for the ride. I had a nice conversation with the truck drivers about Indian movies and my experiences in Kenya.

Diani beach is a popular beach in Kenya. It is a white sand beach.

Selfie with truck drivers who gave me the free ride

Back to Nairobi

I took the SGR train from Mombasa back to Nairobi. This time I took the night train, which departs at 22:00 hours and reaches Nairobi at around 04:00 in the morning. I could not sleep comfortably, as the train had only seats and no berths.

I had booked the Zarita Hotel in Nairobi and had already confirmed that they allowed early-morning check-in. Usually, hotels have a fixed checkout time, say 11:00 in the morning, and you are not allowed to stay beyond that regardless of when you checked in. But this hotel checked me in for 24 hours. Here, I paid in US dollars, and the cost was 12 USD.

Almost Got Stuck in Kenya

Two days before my scheduled flight from Nairobi back to India, I heard the news that the airports in Kenya were closed due to the strikes. Rabina and Pragya had their flight back to Nepal canceled that day, which left them stuck in Nairobi for two additional days. I called Sahil in India and found out during the conversation that the strike was called off in the evening. It was a big relief for me, and I was fortunate to be able to fly back to India without any changes to my plans.

Newspapers at a stand in Kenya covering news on the airport closure

Experience with locals

I had no problems communicating with Kenyans, as everyone I met knew English to an extent that could easily surpass that of big cities in India. Additionally, I learned a few words from Kenya’s most popular local language, Swahili, such as “Asante,” meaning “thank you,” “Jambo” for “hello,” and “Karibu” for “welcome.” Knowing a few words in the local language went a long way.

I am not sure what’s up with haggling in Kenya. It wasn’t easy to bring the price of souvenirs down. I bought a fridge magnet for 200 shillings, which was the quoted price. On the other hand, it was much easier to bargain with taxis/tuktuks/motorbikes.

I stayed at three hotels/hostels in Kenya. None of them had air conditioners. Two of the places were in Nairobi, and they didn’t even have fans in the rooms, while the one in Mombasa had only fans. All of them had good Wi-Fi, except Tulia, where the internet was a bit shaky overall.

My experience with the hotel staff was great. For instance, we requested that the Nairobi Transit Hotel cancel the included breakfast in order to reduce the room costs, but later realized that it was not a good idea. The hotel allowed us to revert and even offered one of our missing breakfasts during dinner.

The staff at Tulia Backpackers in Mombasa facilitated the ticket payment for my train from Mombasa to Nairobi. One of the staff members also gave me a lift to the place where I could catch a matatu to Nyali Beach. They even added an extra tea bag to my tea when I requested it to be stronger.

Food

At the Nairobi Transit Hotel, a Spanish omelet with tea was served for breakfast. I noticed that Spanish omelette appeared on the menus of many restaurants, suggesting that it is popular in Kenya. This was my first time having this dish. The milk tea in Kenya, referred to by locals as “white tea,” is lighter than Indian tea (they don’t put a lot of tea leaves).

Spanish Omelette served in breakfast at Nairobi Transit Hotel

I also sampled ugali with eggs. In Mombasa, I visited an Indian restaurant called New Chetna and had a buffet thali there twice.

Ugali with eggs.

Tips for Exchanging Money

In Kenya, I exchanged my money at forex shops a couple of times. I received good exchange rates for bills larger than 50 USD. For instance, when 1 USD was 129 shillings on xe.com, I got 128.3 shillings per USD (a total of 12,830 shillings) for two 50 USD notes at an exchange in Nairobi, while the best rate at the banks was 127 shillings. For smaller bills, such as a one US dollar note, I would have gotten only 125 shillings. A passport was the only document required for the exchange, and they also provided a receipt.

A good piece of advice for travelers is to keep 50 USD or larger bills for exchanging into the local currency while saving the smaller US dollar bills for accommodation, as many hotels and hostels accept payment in US dollars (in addition to Kenyan shillings).

Missed Malindi and Lamu

There were more places on my to-visit list in Kenya. But I simply didn’t have time to cover them, as I don’t like rushing through places, especially in a foreign country where there is a chance of me underestimating the amount of time it takes during transit. I would have liked to visit at least one of Kilifi, Watamu or Malindi beaches. Further, Lamu seemed like a unique place to visit as it has no cars or motorized transport; the only options for transport are boats and donkeys.

That’s it for now. Meet you in the next one :)

Planet DebianSven Hoexter: Google CloudDNS HTTPS Records with ipv6hint

I naively provisioned an HTTPS record at Google CloudDNS like this via terraform:

resource "google_dns_record_set" "testv6" {
    name         = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type         = "HTTPS"
    ttl          = 3600
    rrdatas      = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:DB8::1\""]
}

This results in a permanent diff because the Google CloudDNS API seems to parse the record content and store the ipv6hint expanded (removing the :: notation) and in all lowercase, as 2001:db8:0:0:0:0:0:1. Thus, to fix the permanent diff, we have to use it like this:

resource "google_dns_record_set" "testv6" {
    name = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type = "HTTPS"
    ttl = 3600
    rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:db8:0:0:0:0:0:1\""]
}
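
One way to see what the API actually stored is to list the record set with the gcloud CLI (a quick sketch using the placeholder zone and record names from above):

gcloud dns record-sets list \
    --zone=some-domain-example \
    --name=testv6.some-domain.example. \
    --type=HTTPS

The rrdatas column should show the expanded, lowercase ipv6hint, which is exactly the form the terraform config has to match.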

Guess I should be glad that they already support HTTPS records natively, and not bicker too much about the implementation details.

Worse Than FailureCodeSOD: A Matter of Understanding

For years, Victoria had a co-worker who "programmed by Google Search"; they didn't understand how anything worked, they simply plugged their problem into Google search and then copy/pasted and edited until they got code that worked. For this developer, I'm sure ChatGPT has been a godsend, but this code predates its wide use. It's pure "Googlesauce".

    StringBuffer stringBuffer = new StringBuffer();
    stringBuffer.append("SELECT * FROM TABLE1 WHERE COLUMN1 = 1 WITH UR");

    String sqlStr = stringBuffer.toString();
    ps = getConnection().prepareStatement(sqlStr);

    ps.setInt(1, code);

    rs = ps.executeQuery();

    while (rs.next())
    {
      count++;
    }

The core of this WTF isn't anything special- instead of running a SELECT COUNT they run a SELECT and then loop over the results to get the count. But it's all the little details in here which make it fun.

They start by using a StringBuffer to construct their query- not a horrible plan when the query is long, but this is just a single, simple, one-line query. The query contains a WITH clause, but it's in the wrong spot. Then they prepareStatement it, which does nothing, since this query doesn't contain any parameters (and also, isn't syntactically valid). Once it's prepared, they set the non-existent parameter 1 to a value- this operation will throw an exception because there are no parameters in the query.

Finally, they loop across the results to count.
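
For contrast, here's a minimal sketch of what this block was presumably going for (hedged guesses on my part: that COLUMN1 was meant to be a parameter, and that the DB2-style WITH UR isolation clause belongs at the very end of the statement):

    // Let the database do the counting; bind the one parameter the query declares.
    String sql = "SELECT COUNT(*) FROM TABLE1 WHERE COLUMN1 = ? WITH UR";
    try (PreparedStatement ps = getConnection().prepareStatement(sql)) {
        ps.setInt(1, code);
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                count = rs.getInt(1);
            }
        }
    }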

The real WTF is that this code ended up in the code base, somehow. The developer said, "Yes, this seems good, I'll check in this non-functional blob that I definitely don't understand," and then there were no protections in place to keep that from happening. Now it falls to more competent developers, like Victoria, to clean up after this co-worker.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Noghath Watches

Author: Julian Miles, Staff Writer The screen turns to flickering white lines behind a ‘Connecting…’ prompt. I find myself smiling and look up at the night sky. What do the natives call that constellation? Sarg something. Sarga Nol? Bigger… ‘Sarghalor Noghath’! Yes. Conceptual translation gives us ‘The noghath watches’. Neither the indigens nor us have […]

The post The Noghath Watches appeared first on 365tomorrows.

Cryptogram Sophos Versus the Chinese Hackers

Really interesting story of Sophos’s five-year war against Chinese hackers.

The post Sophos Versus the Chinese Hackers appeared first on Schneier on Security.

,

Rondam RamblingsWhat scares me about a second Trump administration

As long as I'm getting things on the record (while I still can without too much fear of reprisal) I want to endorse a video by Legal Eagle that lays out the case against voting for Donald Trump in 18 minutes of some of the best video commentary I've ever seen.  It's well worth watching, and encouraging others to watch, but just in case you don't want to invest the time and would rather read,

Planet DebianSteinar H. Gunderson: Ultimate rules as a service

Since WFDF changed their ultimate rules web site to be less-than-ideal (in the name of putting everything into Wordpress…), I made my own, at urules.org. It was a fun journey; I'd never fiddled with PWAs before, and I was a bit surprised how low-level it all was. I assumed that since my page is just a bunch of HTML files and ~100 lines of JS, I could just bundle that up—but no, that is something they expect a framework to do for you.

The only primitive you get is seemingly that you can fire up your own background service worker (JS running in its own, locked-down context) and that gets to peek at every HTTP request done and possibly intercept it. So you can use a Web Cache (seemingly a separate concept from web local storage?), insert stuff into that, and then query it to intercept requests. It doesn't feel very elegant, perhaps?
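
To make that concrete, here is a minimal sketch of the pattern described above (the cache name and asset list are invented; this is the generic service worker recipe, not the actual urules.org code):

    /// <reference lib="webworker" />
    declare const self: ServiceWorkerGlobalScope;

    const CACHE = "bundle-v1";           // hypothetical cache name
    const ASSETS = ["/", "/index.html"]; // hypothetical asset list

    // On install, open a named Web Cache and pre-populate it.
    self.addEventListener("install", (event) => {
      event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
    });

    // Peek at every HTTP request; answer from the cache, else hit the network.
    self.addEventListener("fetch", (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });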

It is a bit neat that I can use this to make my own bundling, though. All the pages and images (painfully converted to SVG to save space and re-flow for mobile screens, mostly by simply drawing over bitmaps by hand in Inkscape) are stuck into a JSON dictionary, compressed using the slowest compressor I could find and then downloaded as a single 159 kB bundle. It makes the site actually sort of weird to navigate; since it pretty quickly downloads the bundle in the background, everything goes offline and the speed of loading new pages just feels… off somehow. As if it's not a Serious Web Page if there's no load time.

Of course, this also means that I couldn't cache PNGs, because have you ever tried to have non-UTF-8 data in a JSON sent through N layers of JavaScript? :-)

Planet DebianGuido Günther: Free Software Activities October 2024

Another short status update of what happened on my side last month. Besides a phosh bugfix release, improving text input and selection was a prevalent pattern again, resulting in improvements in the compositor, the OSK and some apps.

phosh

  • Install gir (MR). Needed for e.g. Debian to properly package the Rust bindings.
  • Try harder to find an app icon when showing notifications (MR)
  • Add a simple Pomodoro timer plugin (MR)
  • Small screenshot manager fixes (MR)
  • Tweak portals configuration (MR)
  • Consistent focus style on lock screen and settings (MR). Improves the visual appearance as the dotted focus frame doesn't match our otherwise colored focus frames
  • Don't focus buttons in settings (MR). Improves the visual appearance as attention isn't drawn to the button focus.
  • Close Phosh's settings when activating a Settings panel (MR)

phoc

  • Improve cursor and cursor theme handling, hide mouse pointer by default (MR)
  • Don't submit empty preedit (MR)
  • Fix flickering selection bubbles in GTK4's text input fields (MR)
  • Backport two more fixes and release 0.41.1 (MR)

phosh-mobile-settings

  • Allow to select default text completer (MR, MR)
  • Don't crash when we fail to load a pref plugin (MR)

libphosh-rs

  • Update with current gir and allow to use status pages (MR)
  • Expose screenshot manager and build without warnings (MR). (Improved further by a follow up MR from Sam)
  • Fix clippy warnings and add clippy to CI (MR)

phosh-osk-stub

  • presage: Always set predictors (MR). Avoids surprises with unwanted predictors.
  • Install completer information (MR)
  • Handle overlapping touch events (MR). This should improve fast typing.
  • Allow plain ctrl and alt in the shortcuts bar (MR)
  • Use Adwaita background color to make the OSK look more integrated (MR)
  • Use StyleManager to support accent colors (MR)
  • Fix emoji section selection in RTL locales (MR)
  • Don't submit empty preedit (MR). Helps to better preserve text selections.

phosh-osk-data

  • Add scripts to build a word corpus from Wikipedia data (MR). See here for the data.

xdg-desktop-portal-phosh

  • Release 0.42~rc1 (MR)
  • Fix HighContrast (MR)

Debian

  • Collect some of the QCom workarounds in a package (MR). This is not meant to go into Debian proper but it's nicer than doing all the mods by hand and forgetting which files were modified.
  • q6voiced: Fix service configuration (MR)
  • chatty: Enable clock test again (MR), and then unbreak translations (MR)
  • phosh: Ship gir for libphosh-rs (MR)
  • phoc: Backport input method related fix (MR)
  • Upload initial package of phosh-osk-data: Status in NEW
  • Upload initial package of xdg-desktop-portal-phosh: Status in NEW
  • Backport phosh-osk-stub abbrev fix (MR)
  • phoc: Update to 0.42.1 (MR)
  • mobile-tweaks: Enable zram on Librem 5 and PP (MR)

ModemManager

  • Some further work on the Cell Broadcast to address comments (MR)

Calls

  • Further improve daemon mode (MR) (mentioned last month already but got even simpler)

GTK

  • Handle Gtk{H,V}Separator when migrating UI files to GTK4 (MR)

feedbackd

  • Modernize README a bit (MR)

Chatty

  • Use special event for SMS (MR)
  • Another QoL fix when using OSK (MR)
  • Fix printing time diffs on 32bit architectures (MR)

libcmatrix

  • Use endpoints for authenticated media (MR). Needed to support v1.11 servers.

phosh-ev

  • Switch to GNOME 47 runtime (MR)

git-buildpackage

  • Don't use deprecated pkg-resources (MR)

Unified push specification

  • Expand on DBus activation a bit (MR)

swipeGuess

  • Small build improvement and mention phosh-osk-stub (Commit)

wlr-clients

  • Fix -o option and add help output (MR)

iotas (Note taking app)

  • Don't take focus with header bar buttons (MR). Makes typing faster (as the OSK won't hide) and thus using the header bar easier

Flare (Signal app)

  • Don't take focus when sending messages, adding emojis or attachments (MR). Makes typing faster (as the OSK won't hide) and thus using those buttons easier

xdg-desktop-portal

  • Use categories that work for both xdg-spec and the portal (MR)

Reviews

This is not code by me but reviews of other people's code. The list is fairly incomplete; I hope to improve on this in the upcoming months:

  • phosh-tour: add first login mode (MR)
  • phosh: Animate swipe closing notifications (MR)
  • iio-sensor-proxy: Report correct value on claim (MR)
  • iio-sensor-proxy: face-{up,down} (MR)
  • phosh-mobile-settings: Squeekboad scaling (MR)
  • libcmatrix: Misc cleanups/fixes (MR)
  • phosh: Notification separator improvements (MR)
  • phosh: Accent colors (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

Planet DebianJunichi Uekawa: Doing more swimming in everyday life for the past few months.

Doing more swimming in everyday life for the past few months. Seems like I am keeping that up.

365 TomorrowsHere Be Dragons

Author: Beck Dacus One half of the sky brimmed with stars, the Sun at one light-week’s distance barely outshining the rest. The other half was utterly dark, as if the universe ended at a sheer cliff. As I approached the blackness, detail started to emerge, my headlamp casting shadows on icy gravel the color of […]

The post Here Be Dragons appeared first on 365tomorrows.


Planet DebianDirk Eddelbuettel: Rcpp 1.0.13-1 on CRAN: Hot Fix

rcpp logo

A hot-fix release 1.0.13-1, consisting of two small PRs relative to the last regular CRAN release 1.0.13, just arrived on CRAN. When we prepared 1.0.13, we included a change related to the ‘tightening’ of the C API of R itself. Sadly, we pinned an expected change to ‘comes with the next (minor) release 4.4.2’ when it is now slated for the ‘next (normal aka major) release 4.5.0’. And now that R 4.4.2 is out (as of two days ago), we accidentally broke building against the header file with that check. Whoops. Bugs happen, and we are truly sorry—but this is now addressed in 1.0.13-1.

The normal (bi-annual) release cycle will resume with 1.0.14 slated for January. As you can see from the NEWS file of the development branch, we have a number of changes coming. You can safely access that release candidate version, either off the default branch at github or via r-universe artifacts.

The list below details all changes, as usual. The only other change concerns the now-mandatory use of Authors@R.

Changes in Rcpp release version 1.0.13-1 (2024-11-01)

  • Changes in Rcpp API:

    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Deployment:

    • Authors@R is now used in DESCRIPTION as mandated by CRAN

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRussell Coker: More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I’m still very happy with it; the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover the best resolution to have on a laptop if price isn’t an issue, but it’s at least 1440p for a 14″ display, which is 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display for 323DPI. Apple apparently uses the term “Retina Display” to mean something in the range of 250DPI to 300DPI, so my current laptop is below “Retina” while the most expensive new Thinkpads are above it.
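
(For anyone checking the arithmetic: DPI is the diagonal pixel count divided by the diagonal size in inches, so assuming a 2560*1440 panel, sqrt(2560^2 + 1440^2) / 14 ≈ 2937 / 14 ≈ 210DPI, and for 3840*2400, sqrt(3840^2 + 2400^2) / 14 ≈ 4528 / 14 ≈ 323DPI.)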

I did some tests on external displays and found that this Thinkpad, along with a Dell Latitude of the same form factor and about the same age, can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit u/Carlioso1234 pointed out this specs page which says it supports a maximum of 3 displays including the built in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays total which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made an 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32″ 5K display; currently they all seem to be 27″ and I’ve found that even for 4K resolution 32″ is better than 27″.

With the typical configuration of Linux and the BIOS, the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use, but as the finger-touch functionality is broken I couldn’t confirm that. On r/thinkpad u/p9k told me how to fix this problem [3]. I had to set the BIOS to Win 10 Sleep aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service:

# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3

[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"

[Install]
WantedBy=multi-user.target

Now it works fine, for stylus at least. I still get kernel error messages like the following which don’t seem to cause problems:

wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.

When it wasn’t working I got the above but also kernel error messages like:

wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events

This change affected the way suspend etc operate. Now when I connect the laptop to power it will leave suspend mode. I’ve configured KDE to suspend when the lid is closed and there’s no monitor connected.

Planet DebianRussell Coker: Moving Between Devices

I previously wrote about the possibility of transferring work between devices as an alternative to “convergence” (using a phone or tablet as a desktop) [1]. This idea has been implemented in some commercial products already.

MrWhosTheBoss made a good YouTube video reviewing recent Huawei products [2]. At 2:50 in that video he shows how you can link a phone and tablet, control one from the other, drag and drop of running apps and files between phone and tablet, mirror the screen between devices, etc. He describes playing a video on one device and having it appear on the other, I hope that it actually launches a new instance of the player app as the Google Chromecast failed in the market due to remote display being laggy. At 7:30 in that video he starts talking about the features that are available when you have multiple Huawei devices, starting with the ability to move a Bluetooth pairing for earphones to a different device.

At 16:25 he shows what Huawei is doing to get apps going, including allowing apk files to be downloaded and creating what they call “Quick Apps”, which are instances of a web browser configured to use just one web site and make it look like a discrete app. We need something like this for FOSS phone distributions – does anyone know of a browser that’s good for it?

Another thing that we need is an easy way of transferring open web pages between systems. Chrome allows sending pages between systems but it’s proprietary, limited to Chrome only, and also takes an unreasonable amount of time. KDEConnect allows sharing clipboard contents which can be used to send URLs that can then be pasted into a browser, but the process of copy URL, send via KDEConnect, and paste into the other device is unreasonably slow. The design of Chrome with a “Send to your devices” menu option from the tab bar is OK. But ideally we need a “Send to device” option for all tabs of a window as well, and we need it to run on free software and support using your own server rather than someone else’s server (AKA “the cloud”). Some of the KDEConnect functionality but using a server rather than a direct connection over the same Wifi network (or LAN if bridged to Wifi) would be good.

What else do we need?

365 TomorrowsBetter Left Undead

Author: J. Scott King “Can he continue?” A familiar voice, distant, urgent. And nearer, “The Seconds are conferring, Captain.” Then, more urgently, “Come no closer, sir! Resseaux, control your man!” A gruff, mumbled reply I can’t make out. “I’ll have him done!” That first fellow again… Captain Eddings. Right. Yes, that’s the one. Never liked […]

The post Better Left Undead appeared first on 365tomorrows.

Planet DebianRussell Coker: What is a Workstation?

I recently had someone describe a Mac Mini as a “workstation”, which I strongly disagree with. The Wikipedia page for Workstation [1] says that it’s a type of computer designed for scientific or technical use, for a single user, and would commonly run a multi-user OS.

The Mac Mini runs a multi-user OS and is designed for a single user. The issue is whether it is for “scientific or technical use”. A Mac Mini is a nice little graphical system which could be used for CAD and other engineering work. But I believe that the low capabilities of the system and lack of expansion options make it less of a workstation.

The latest versions of the Mac Mini (to be officially launched next week) have up to 64G of RAM and up to 8T of storage. That is quite decent compute power for a small device. For comparison the HP ML 110 Gen9 workstation I’m currently using was released in 2015 and has 256G of RAM and 4 * 3.5″ SAS bays so I could easily put a few 4TB NVMe devices and some hard drives larger than 10TB. The HP Z640 workstation I have was released in 2014 and has 128G of RAM and 4*2.5″ SATA drive bays and 2*3.5″ SATA drive bays. Previously I had a Dell PowerEdge T320 which was released in 2012 and had 96G of RAM and 8*3.5″ SAS bays.

In CPU and GPU power the recent Mac Minis will compare well to my latest workstations. But they compare poorly to workstations from as much as 12 years ago for RAM and storage. Which is more important depends on the task: if you have to do calculations on 80G of data with lots of scans through the entire data set then a system with 64G of RAM will perform very poorly and a system with 96G and a CPU less than half as fast will perform better. A Dell PowerEdge T320 from 2012 fully loaded with 192G of RAM will outperform a modern Mac Mini on many tasks due to this, and the T420 supported up to 384G.

Another issue is generic expansion options. I expect a workstation to have a number of PCIe slots free for GPUs and other devices. The T320 I used to use had a PCIe power cable for a power hungry GPU and I think all the T320 and T420 models with high power PSUs supported that.

I think that a usable definition of a “workstation” is a system having a feature set that is typical of servers (ECC RAM, lots of storage for RAID, maybe hot-swap storage devices, maybe redundant PSUs, and lots of expansion options) while also being suitable for running on a desktop or under a desk. The Mac Mini is nice for running on a desk but that’s the only workstation criteria it fits. I think that ECC RAM should be a mandatory criteria and any system without it isn’t a workstation. That excludes most Apple hardware. The Mac Mini is more of a thin-client than a workstation.

My main workstation with ECC RAM could run 3 VMs that each have more RAM than the largest Mac Mini that will be sold next week.

If 32G of non-ECC RAM is considered enough for a “workstation” then you could get an Android phone that counts as a workstation – and it will probably cost less than a Mac Mini.


Krebs on SecurityBooking.com Phishers May Leave You With Reservations

A number of cybercriminal innovations are making it easier for scammers to cash in on your upcoming travel plans. This story examines a recent spear-phishing campaign that ensued when a California hotel had its booking.com credentials stolen. We’ll also explore an array of cybercrime services aimed at phishers who target hotels that rely on the world’s most visited travel website.

According to the market share website statista.com, booking.com is by far the Internet’s busiest travel service, with nearly 550 million visits in September. KrebsOnSecurity last week heard from a reader whose close friend received a targeted phishing message within the Booking mobile app just minutes after making a reservation at a California hotel.

The missive bore the name of the hotel and referenced details from their reservation, claiming that booking.com’s anti-fraud system required additional information about the customer before the reservation could be finalized.

The phishing message our reader’s friend received after making a reservation at booking.com in late October.

In an email to KrebsOnSecurity, booking.com confirmed one of its partners had suffered a security incident that allowed unauthorized access to customer booking information.

“Our security teams are currently investigating the incident you mentioned and can confirm that it was indeed a phishing attack targeting one of our accommodation partners, which unfortunately is not a new situation and quite common across industries,” booking.com replied. “Importantly, we want to clarify that there has been no compromise of Booking.com’s internal systems.”

The phony booking.com website generated by visiting the link in the text message.

Booking.com said it now requires 2FA, which forces partners to provide a one-time passcode from a mobile authentication app (Pulse) in addition to a username and password.

“2FA is required and enforced, including for partners to access payment details from customers securely,” a booking.com spokesperson wrote. “That’s why the cybercriminals follow-up with messages to try and get customers to make payments outside of our platform.”

“That said, the phishing attacks stem from partners’ machines being compromised with malware, which has enabled them to also gain access to the partners’ accounts and to send the messages that your reader has flagged,” they continued.

It’s unclear, however, if the company’s 2FA requirement is enforced for all or just newer partners. Booking.com did not respond to questions about that, and its current account security advice urges customers to enable 2FA.

A scan of social media networks showed this is not an uncommon scam.

In November 2023, the security firm SecureWorks detailed how scammers targeted booking.com hospitality partners with data-stealing malware. SecureWorks said these attacks had been going on since at least March 2023.

“The hotel did not enable multi-factor authentication (MFA) on its Booking.com access, so logging into the account with the stolen credentials was easy,” SecureWorks said of the booking.com partner it investigated.

In June 2024, booking.com told the BBC that phishing attacks targeting travelers had increased 900 percent, and that thieves taking advantage of new artificial intelligence (AI) tools were the primary driver of this trend.

Booking.com told the BBC the company had started using AI to fight AI-based phishing attacks. Booking.com’s statement said their investments in that arena “blocked 85 million fraudulent reservations over more than 1.5 million phishing attempts in 2023.”

The domain name in the phony booking.com website sent to our reader’s friend — guestssecureverification[.]com — was registered to the email address ilotirabec207@gmail.com. According to DomainTools.com, this email address was used to register more than 700 other phishing domains in the past month alone.

Many of the 700+ domains appear to target hospitality companies, including platforms like booking.com and Airbnb. Others seem crafted to phish users of Shopify, Steam, and a variety of financial platforms. A full, defanged list of domains is available here.

A cursory review of recent posts across dozens of cybercrime forums monitored by the security firm Intel 471 shows there is a great demand for compromised booking.com accounts belonging to hotels and other partners.

One post last month on the Russian-language hacking forum BHF offered up to $5,000 for each hotel account. This seller claims to help people monetize hacked booking.com partners, apparently by using the stolen credentials to set up fraudulent listings.

A service advertised on the English-language crime community BreachForums in October courts phishers who may need help with certain aspects of their phishing campaigns targeting booking.com partners. Those include more than two million hotel email addresses, and services designed to help phishers organize large volumes of phished records. Customers can interact with the service via an automated Telegram bot.

Some cybercriminals appear to have used compromised booking.com accounts to power their own travel agencies catering to fellow scammers, with up to 50 percent discounts on hotel reservations through booking.com. Others are selling ready-to-use “config” files designed to make it simple to conduct automated login attempts against booking.com administrator accounts.

SecureWorks found the phishers targeting booking.com partner hotels used malware to steal credentials. But today’s thieves can just as easily visit crime bazaars online and purchase stolen credentials to cloud services that do not enforce 2FA for all accounts.

That is exactly what transpired over the past year with many customers of the cloud data storage giant Snowflake. In late 2023, cybercriminals figured out that while tons of companies had stashed enormous amounts of customer data at Snowflake, many of those customer accounts were not protected by 2FA.

Snowflake responded by making 2FA mandatory for all new customers. But that change came only after thieves used stolen credentials to siphon data from 160 companies — including AT&T, Lending Tree and TicketMaster.

Planet DebianColin Watson: Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Ansible

I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn’t make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone.

The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process:

This should now get back into testing tomorrow.

OpenSSH

Martin-Éric Racine reported that ssh-audit didn’t list the ext-info-s feature as being available in Debian’s OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page.

I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn’t regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct.

On upstream’s advice, I cherry-picked some key exchange fixes needed for big-endian architectures.

Python team

I packaged python-evalidate, needed for a new upstream version of buildbot.

The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface.

A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python’s “dead batteries” PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian’s python-wadllib source package to allow its tests to pass. I’ll come back and fix this properly once we sort out the multipart vs. python-multipart packaging.

tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left.

I tracked down an nltk regression that caused build failures in many other packages.

I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooij, but it needed a little extra work).

I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto).

I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

I fixed broken symlinks in python-treq.

I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream).

I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions.

I tried to fix a regression in python-scruffy, but I need testing feedback.

I requested removal of python-testing.mysqld.

Worse Than FailureError'd: Alternative Maths

"Check out Visual Studio optimizing their rating system to only include the ratings used," shared Fiorenzo R. Imagine the performance gain!


"This sounds about right," says Colin A.


"Wow! Must snap up some sweet Anker kit with this amazing offer; but less than four days to go!" exclaims Dave L., who then goes on to explain
"The actual WTF is this though. I sent this image to Anker with this email: But only 3days left? I hope this offer continues!
Anker replied: Thank you for your feedback! I understand that you appreciate the savings on the Anker SOLIX PS100 Portable Solar Panel and wish the offer could be extended beyond the current 3-day limit. Your suggestion is valuable and will be considered for future promotions to enhance customer satisfaction. If you have any other requests or need further assistance, please let me know.
I for one welcome our new AI overlords. "


Graham F. almost stashed this away for later. "Looks like Dropbox could use a few lessons in how to do Maths! Although maybe their definition of 'almost' differs from mine."


Finally Joshua found time to report a brand-new date-handling bug. "Teams is so buggy; this one just takes the cake. I had to check with the unix cal program to make sure I wasn't completely bonkers." For the readers, November 8 this year is supposed to be a Friday. I suppose things could change after the US election.



Have a great weekend. Maybe I'll see you next Friday, or maybe all the weekdays will be renamed Thursday.

Planet DebianRuss Allbery: Review: Overdue and Returns

Review: Overdue and Returns, by Mark Lawrence

Publisher: Mark Lawrence
Copyright: June 2023
Copyright: February 2024
ASIN: B0C9N51M6Y
ASIN: B0CTYNQGBX
Format: Kindle
Pages: 99

Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review.

These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain.

There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter.

I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical.

I would not waste your time with these even if you are enjoying the novels.

"Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn.

Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5)

"Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6)

"About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5)

Rating: 5 out of 10

365 TomorrowsA Chest In A Room

Author: Aubrey Williams The cheap hotel room was draughty, the shadows ink in the recesses. Each sheet of green William Morris wallpaper was peeling in at least three places. For all the dinginess, though, it was a room, and I needed one. By a feeble light I’d tried to work, but the sound of the […]

The post A Chest In A Room appeared first on 365tomorrows.

Rondam RamblingsRon Prognosticates: Trump is Going to Win

 I'm too depressed to elaborate much on this, but I just wanted to go on the record with this prediction before the election.  Why do I think Trump is going to win?  Because DJT stock is up and has been rising steadily since it hit an all-time low in late September.  It didn't even go down today after yesterday's disastrous MSG rally.  The polls have been static since

Planet DebianPaul Wise: FLOSS Activities October 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

All work was done on a volunteer basis.

Planet DebianTaavi Väänänen: Custom domains on the Wikimedia Cloud VPS web proxy

The shared web proxy used on Wikimedia Cloud VPS now has technical support for using arbitrary domains (and not just wmcloud.org subdomains) in proxy names. I think this is a good example of how software slowly evolves over time as new requirements emerge, with each new addition building on top of the previous ones.

According to the edit history on Wikitech, the web proxy service has its origins in 2012, although the current idea where you create a proxy and map it to a specific instance and port was only introduced a year later. (Before that, it just directly mapped the subdomain to the VPS instance with the same name).

There were some smaller changes in the coming years like the migration to acme-chief for TLS certificate management, but the overall logic stayed very similar until 2020 when the wmcloud.org domain was introduced. That was implemented by adding a config option listing all possible domains, so future domain additions would be as simple as adding the new domain to that list in the configuration.

Then the changes start becoming more frequent:

  • In 2022, for my Terraform support project, a bunch of logic, including the list of supported backend domains, was moved from the frontend code to the backend. This also made it possible to dynamically change which projects can use which domain suffixes for their proxies.
  • Then, early this year, I added support for zones restricted to a single project, because we wanted to use the proxy for the *.svc.toolforge.org Toolforge infrastructure domains instead of coming up with a new system for that use case. This also added support for using different TLS certificates for different domains so that we would not have to have a single giant certificate with all the names.
  • Finally, the last step was to add two new features to the proxy system: support for adding a proxy at the apex of a domain, as well as support for domains that are not managed in Designate (the Cloud VPS/OpenStack auth DNS service). In addition, we needed a bit of config to ensure http-01 challenges get routed to the acme-chief instance.


Planet DebianGunnar Wolf: Do you have a minute..?

Do you have a minute...?

…to talk about the so-called “Intellectual Property”?

Cryptogram Roger Grimes on Prioritizing Cybersecurity Advice

This is a good point:

Part of the problem is that we are constantly handed lists…list of required controls…list of things we are being asked to fix or improve…lists of new projects…lists of threats, and so on, that are not ranked for risks. For example, we are often given a cybersecurity guideline (e.g., PCI-DSS, HIPAA, SOX, NIST, etc.) with hundreds of recommendations. They are all great recommendations, which if followed, will reduce risk in your environment.

What they do not tell you is which of the recommended things will have the most impact on best reducing risk in your environment. They do not tell you that one, two or three of these things…among the hundreds that have been given to you, will reduce more risk than all the others.

[…]

The solution?

Here is one big one: Do not use or rely on un-risk-ranked lists. Require any list of controls, threats, defenses, solutions to be risk-ranked according to how much actual risk they will reduce in the current environment if implemented.

[…]

This specific CISA document has at least 21 main recommendations, many of which lead to two or more other more specific recommendations. Overall, it has several dozen recommendations, each of which individually will likely take weeks to months to fulfill in any environment if not already accomplished. Any person following this document is…rightly…going to be expected to evaluate and implement all those recommendations. And doing so will absolutely reduce risk.

The catch is: There are two recommendations that WILL DO MORE THAN ALL THE REST ADDED TOGETHER TO REDUCE CYBERSECURITY RISK most efficiently: patching and using multifactor authentication (MFA). Patching is listed third. MFA is listed eighth. And there is nothing to indicate their ability to significantly reduce cybersecurity risk as compared to the other recommendations. Two of these things are not like the other, but how is anyone reading the document supposed to know that patching and using MFA really matter more than all the rest?

Cryptogram Tracking World Leaders Using Strava

Way back in 2018, people noticed that you could find secret military bases using data published by the Strava fitness app. Soldiers and other military personnel were using it to track their runs, and you could look at the public data and find places where there should be no people running.

Six years later, the problem remains. Le Monde has reported that the same Strava data can be used to track the movements of world leaders. They don’t wear the tracking device, but many of their bodyguards do.

Worse Than FailureCodeSOD: All the Rest Have 31

Horror movies, as of late, have gone to great lengths to solve the key obstacle to horror movies- cell phones. When we live in a world where help is a phone call away, it's hard to imagine the characters not doing that. So screenwriters put them in situations where this is impossible: in Midsommar they isolate them in rural Sweden, in Get Out calling the police is only going to put our protagonist in more danger. But what's possibly more common is making the film a period piece- like the X/Pearl/Maxxxine trilogy, Late Night with the Devil, or Netflix's continuing series of R.L. Stine adaptations.

I bring this up, because today's horror starts in 1993. A Norwegian software company launched its software product to mild acclaim. Like every company, it had its ups and downs, its successes and missteps. On the surface, it was a decent enough place to work.

Over the years, the company tried to stay up to date with technology. In 1993, if you were launching a major software product, your options were largely C or Pascal. Languages like Python existed, but weren't widely used or even supported on most systems. But the company stayed in business and needed to update their technology as time passed, which meant the program gradually grew and migrated to new languages.

Which meant, by the time Niklas F joined the company, they were on C#. Even though they'd completely changed languages, the codebase still derived from the original C codebase. And that meant that the codebase had many secrets, dark corners, and places a developer should never look.

Like every good horror movie protagonist, Niklas heard the "don't go in there!" and immediately went in there. And lurking in those shadows was the thing every developer fears the most: homebrew date handling code.

/// <summary>
/// 
/// </summary>
/// <param name="dt"></param>
/// <returns></returns>
public static DateTime LastDayInMonth(DateTime dt)
{
	int day = 30;
	switch (dt.Month)
	{
		case 1:
			day = 31;
			break;
		case 2:
			if (IsLeapYear(dt))
				day = 29;
			else
				day = 28;
			break;
		case 3:
			day = 31;
			break;
		case 4:
			day = 30;
			break;
		case 5:
			day = 31;
			break;
		case 6:
			day = 30;
			break;
		case 7:
			day = 31;
			break;
		case 8:
			day = 31;
			break;
		case 9:
			day = 30;
			break;
		case 10:
			day = 31;
			break;
		case 11:
			day = 30;
			break;
		case 12:
			day = 31;
			break;
	}
	return new DateTime(dt.Year, dt.Month, day, 0, 0, 0);
}

/// <summary>
/// 
/// </summary>
/// <param name="dt"></param>
/// <returns></returns>
public static bool IsLeapYear(DateTime dt)
{
	bool ret = (((dt.Year % 4) == 0) && ((dt.Year % 100) != 0) || ((dt.Year % 400) == 0));
	return ret;
}

For a nice change of pace, this code isn't incorrect. Even the leap year calculation is actually correct (though my preference would be to just return the expression instead of using a local variable). But that's what makes this horror all the more insidious: there are built-in functions to handle all of this, but this code works and will likely continue to work, just sitting there, like a demon that we've made a pact with. And suddenly we realize this isn't Midsommar but Ari Aster's other hit film, Hereditary, and we're trapped being in a lineage of monsters, and can't escape our inheritance.
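
For the record, those built-ins reduce the whole exercise to a couple of lines; a minimal sketch using the standard System.DateTime APIs (class name invented):

    using System;

    class DateDemo
    {
      static void Main()
      {
        DateTime dt = new DateTime(2024, 2, 15);

        // The framework already knows about leap years...
        bool isLeap = DateTime.IsLeapYear(dt.Year);            // true

        // ...and about month lengths, leap years included.
        int lastDay = DateTime.DaysInMonth(dt.Year, dt.Month); // 29

        DateTime lastDayInMonth = new DateTime(dt.Year, dt.Month, lastDay);
        Console.WriteLine(lastDayInMonth.ToString("yyyy-MM-dd")); // 2024-02-29
      }
    }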


365 TomorrowsImagine a Creature

Author: Rollin T. Gentry Imagine a creature crafted from crushed bones and entropy. It may or may not have fangs, or claws, or even a face. It rides from calamity to calamity, crisis to crisis, along ley lines the scale of galaxies. Wait. There he is, knocking at the door. The door, an ancient relic […]

The post Imagine a Creature appeared first on 365tomorrows.

Cryptogram Simson Garfinkel on Spooky Cryptographic Action at a Distance

Excellent read. One example:

Consider the case of basic public key cryptography, in which a person’s public and private key are created together in a single operation. These two keys are entangled, not with quantum physics, but with math.

When I create a virtual machine server in the Amazon cloud, I am prompted for an RSA public key that will be used to control access to the machine. Typically, I create the public and private keypair on my laptop and upload the public key to Amazon, which bakes my public key into the server’s administrator account. My laptop and that remote server are thus entangled, in that the only way to log into the server is using the key on my laptop. And because that administrator account can do anything to that server—read the sensitive data, hack the web server to install malware on people who visit its web pages, or anything else I might care to do—the private key on my laptop represents a security risk for that server.

Here’s why it’s impossible to evaluate a server and know if it is secure: as long as that private key exists on my laptop, that server has a vulnerability. But if I delete that private key, the vulnerability goes away. By deleting the data, I have removed a security risk from the server and its security has increased. This is true entanglement! And it is spooky: not a single bit has changed on the server, yet it is more secure.

Read it all.


LongNowEnlarging the Question

💡
FIRST LOOK: CENTURIES OF THE BRISTLECONE
Coming Spring 02025

An exhibition by artist and experimental philosopher Jonathon Keats co-commissioned by The Long Now Foundation and the Center for Art + Environment at the Nevada Museum of Art

An 18-foot tall dual pendulum clock that measures the growth of the world's most ancient living trees, exploring new ways of thinking about deep time and resilience. 

Sign up for our newsletter to learn more and join us this spring for our grand opening.

At the summit of eastern Nevada’s Mount Washington, a grove of bristlecone pine trees bears witness to millennia of change. Perched precariously along ridges of limestone, battered by harsh winds, the gnarled forms that populate Long Now’s Bristlecone Preserve can look more like abstract sculptures than living organisms. But they are alive, have been alive, some since before the first stone of the Great Pyramid of Giza was laid 4,500 years ago. And they are growing. Very. Slowly. A sapling from today would potentially not reach maturity until the year 07000. 

But to speak of years like 07000 is to speak in human time. Bristlecone time is not like our time. In 01964, a geographer took core samples of a nearby bristlecone known as Prometheus. The tree had 4,862 growth rings. This did not, as one might assume, mean that the tree was 4,862 years old. Because of the harsh conditions, and the high elevation, some bristlecone pines grow so slowly that they don’t form a tree ring each year. Such was the case with Prometheus, whom researchers later estimated to be closer to 4,900 years old. 

The discrepancy between human time — in which a year is exactly 365.2425 days in duration — and bristlecone time — which varies depending on environmental conditions — is the focus of a forthcoming project from the conceptual artist and experimental philosopher Jonathon Keats, The Long Now Foundation, and the Nevada Museum of Art. Centuries of the Bristlecone empowers the longest-lived organisms on Earth to be timekeepers. A living calendar for the next five millennia, the project will measure the growth of select bristlecone pine trees at Long Now’s Bristlecone Preserve. Those measurements — “bristlecone time” — will be transmitted to an 18-foot tall dual pendulum clock housed at the Nevada Museum of Art. The growth of these trees will tell a story. What that story is depends on us.

Long Now’s Bristlecone Preserve, Mount Washington, eastern Nevada. Photo by Ian van Coller

“Through time, each bristlecone will bear witness to human activity in the Anthropocene,” Keats has written. “The meaning of the living calendar will change with the changes we bring to the environment.”

Consider again that sapling. Over time, increased carbon dioxide in the atmosphere stemming from anthropogenic climate change would lead to it growing at a faster rate, much like its siblings at lower elevations. A visitor to the Centuries of the Bristlecone clock a hundred years from now would see two different times displayed, side by side. The dial displaying human, or standard, time would read “02124.” The dial displaying bristlecone, or arboreal, time might read “02377.” 

Or it might not. We cannot know how the future will unfold. And we could, of course, choose to act differently. For Keats, that’s precisely the point. 

The gnarled forms that populate Long Now’s Bristlecone Preserve can look more like abstract sculptures than living organisms. Photo by Justin Oliphant

“Our actions will affect bristlecone time,” Keats writes. “And while we need to be aware of our hubris, we also need to be aware that we have choices and responsibilities. Arboreal time will provide us with an ecological feedback mechanism. Sentinels from the distant past that will long outlive us, the bristlecones will calibrate our time on this planet.”

Centuries of the Bristlecone has been in the works since 02015, when Keats shared his vision during a Long Now Talk at The Interval. In September 02024, a contingent of staff from Long Now and the Nevada Museum of Art joined Keats atop Mount Washington to help realize that vision, installing the indexes and plaques that will allow future citizen-timekeepers to chart the growth of the trees. In the spring of 02025, the municipal clock at the Nevada Museum of Art will open to the public. 

Centuries of the Bristlecone is part of Keats’ broader philosophical exploration of time from a more-than-human perspective. “The overarching goal is to reverse the process of human alienation that began by seeing nature as other,” he says. “We can reintegrate ourselves into nature by reintegrating nature into human systems.”

Recently, Keats sat down with William L. Fox, the Director of the Center for Art + Environment at the Nevada Museum of Art, to discuss the many projects he’s undertaken to achieve that goal, as well as the unconventional thought experiments that comprise his larger body of work. Over the years, Keats has attempted to genetically engineer God; copyright his brain in a bid to become immortal; and pass Aristotle’s law of identity as a law of the legal system (violators caught being unidentical to themselves would be fined one-tenth of a cent). He has created pinhole cameras with exposure times of one thousand years, and he has shown pornography to house plants (which is to say, videos of bees pollinating flowers). 

Equal parts playful and profound, Keats’ interventions open up spaces for the public to engage in contemplative inquiry across a wide swath of disciplines and domains, from the perennial questions posed by philosophy — What is the relationship between thinking and being? — to the ethical quandaries posed by the Anthropocene — How might non-human species participate in the collective decision-making of the democratic system in which we live?  

“A question is never resolved,” he says. “It is only enlarged.”

The following conversation has been edited for length and clarity.

William L. Fox speaking at Long Now on April 5, 02016. Seated in the audience at left is Jonathon Keats. Photo by Gary Wilson

William L. Fox: You and I have been working together for years, but we don’t sit down and actually talk about what childhood was like, what grade school was like. I’d like to remedy that now. So let’s start with how you must’ve driven every teacher you’ve ever had absolutely nuts.

Jonathon Keats: It started with my parents. I drove them crazy long before I had teachers to distract and classes to disturb. But in terms of the first experience in a formal educational situation, it was preschool. As is the case in many Montessori schools, I got told by the teachers how to be creative. What more creative thing can one do than to rebel against that?

It didn't go over well. I actually didn't speak for an entire year. I would speak outside of class, but the moment I walked through the doors of the school, I would stop speaking. Later, when I got my hands on a Diagnostic and Statistical Manual of Mental Disorders, I was able to diagnose myself as having elective mutism. I was quite pleased with myself to be an elective mute because knowing when not to say something seems like it is as important as knowing when to say something. That is one of the essential qualities of my work and one of the essential qualities that I seek in art more generally.

As I went on in that vein, being obstinate whenever I was asked to be creative or imaginative, one of the preschool teachers asked whether I had an imagination. I think that's still open to debate. Nevertheless, it was clear that any sort of formal structure that came from someone else was one to be resisted or to be broken free of, as opposed to my own systems that I very much wanted to create.

The first work that could potentially be categorized retrospectively as an artwork — or as a thought experiment — came shortly after moving cross country from New York City, where I'd gone to preschool, to Corte Madera, a very quaint town in California. In my driveway, on a street that few people frequented, I set up a table and put some rocks on it and priced the rocks at one cent apiece. The rocks on the ground were identical, but were not the ones that were for sale, so they had no price on them whatsoever. And so I went into the business of selling rocks to a market that was effectively zero. There was, I think, a neighbor who came up to water the lawn at some point. But, more than a profit-making enterprise, my venture was a way in which to ask fundamental questions about economics, which probably originated with my puzzlement about what my father did for a living as a stockbroker. What does it mean to buy and sell? What is the nature of money?

Even then, the way in which I went about investigating the world was on my own terms, creating some sort of alternative reality that others could enter into with me, where I eliminated as much as possible that seemed extraneous, leaving just the essence to try to make sense of. I think that has been the case ever since.

Fox: The most valid rubric I use to describe you is as an ‘experimental philosopher.’ Clearly, that’s where you’ve been going since Montessori preschool. By the time you get to high school, have you begun, within that cloud of possibilities, to make some choices about what you want to do?

Keats: At the time, I was very interested in law and governance, which are deeply interesting to me still — not only as subjects, but also as constructs at a meta-level: How is the world ordered? What sort of sense do we make of the world through the systems we have, and how do we interrogate those systems? How do we ask how those systems work and what they do in order to speculate on the ways in which they might achieve what we actually want them to do?

All too often, there are legacy systems built on legacy systems, and they’re not functioning as intended. We can see this on a day-to-day basis, but we won’t understand why until we start to look at what is invisible to us. It’s like the operating system on a computer: We might not know how it operates, but it structures our word processing, our web browsing, et cetera. Law was a particularly interesting area for me because it was structured, and because it structured everything else.

During the summer of my junior year, I interned at the City Attorney’s Office in San Francisco.  They must not have been very well funded, because they would tell me about cases and then set me free in their law library to write memoranda that I would dictate into a Dictaphone. These were often on rather arcane areas of law, such as trademark infringement, but there were also more conventional problems, such as the liability of the city when a bus driver ran over a pedestrian. So I ended up with an informal education in the law, both in how the law is structured and how it actually functions.

In terms of actual schoolwork, I was very keen to go to the high school that I did — Lick-Wilmerding — for manifestly other reasons: it had a magnificent shop program, with a whole room of World War II-era lathes. That was useful not only from the standpoint of learning how to make things, but in terms of learning the procedures. When you're working in a machine shop, you have to think about what you are trying to create in a way that is extremely orderly, considering the stages underlying the manufacture of a given part and considering how multiple parts will fit together. So while I wasn't thinking in these terms at the time, in making things out of wood, metal, and other materials, I was, in very physical and tangible ways, trying to make sense of how systems come into being, what they do, and where they break down.

Fox: And you move on into college, and the adventures continue.

Keats: They do. They travel with me to the East Coast, to Amherst College in western Massachusetts. It was an ideal setting for exploring whatever interested me. That’s the nature of a liberal arts college when you take the mandate seriously, and most of my professors did. They were perfectly happy to provide guidance, but were seemingly equally happy not to do so, and to allow much of my education to become a form of independent study.

Amherst is where I learned philosophy, and where I learned that I did not want to practice philosophy within academia. Formal logic is not my forte. And then there was the fact that philosophy at Amherst was analytic and highly technical. And while I found Ludwig Wittgenstein fascinating — he once asked, What time is it on the sun? — for the most part the way in which philosophy was done in school was not at all like what I had imagined. What I had envisioned was probably not so far off from selling rocks on the street corner. As far as I was concerned, philosophy was about asking questions and enticing others to try and make sense of that world with me.

The thought experiment was, to me, an incredibly interesting means of making sense that was used in a way that was not at all interesting. It was used as a mode of argumentation — reductio ad absurdum — as a way of rhetorically drawing somebody into a state of contradiction. I was interested in the thought experiment as a mode of open-ended experimentation. And so I got enough training in philosophy — enough language, enough rigor — to be able to smuggle philosophy out of academia. Breaking free was also important for another reason: Whenever I talked to anybody outside of my department, including classmates and my parents, they had no idea what I was talking about. Partly, I think that’s because I was never very good at paraphrasing others’ philosophy, but partly it’s because analytic philosophy was so abstruse. 

As I said earlier, we need to get inside the operating system. We need to be able to understand the basis of our understanding. There's so much scholarship underlying philosophy as it's done right now that “good philosophy” is directed by what was considered worthy in the past.  We need to go in other directions, and to do so with others in a way that’s socially engaged, such that we’re all philosophers together.

I declared my independence from philosophy my senior year by opting to write a thesis on aesthetics, which was one of the areas I’d studied. In my proposal to the philosophy department, I argued that it made no sense to write about aesthetics; I should be working within aesthetics. That is, I should be writing a novel. The philosophy department responded by saying, That’s a very good idea, but not here. So I formed my own aesthetics department. I gave it a name and had a philosophy professor on the board. I wrote a novel, or something that passed as a novel, as a senior thesis. That was the moment when I realized that writing was one way in which to pursue what I wanted and needed to do. Writing fiction and poetry was particularly generative because it avoided some of the necessities of argumentation, namely first and foremost that one has something one is arguing for, as opposed to trying to open up a space for reflection.

But I also realized that beyond writing, other arts presented great opportunities. I had studied enough art history in college to see that the Duchampian turn was so dizzying that nobody knew what art was anymore. Every other discipline, from physics to philosophy, had become more disciplined, more rigorous, more rigid, and more narrow as time had gone on. Art had gone the opposite direction, from producing painting or sculpture in an academic tradition to “anything goes”.

Fox: You have just proposed a kind of analog to the working practice of Allan Kaprow and his relationship to William James and the birth of American pragmatism. Which is to say, in contradistinction: when I was at the Clark Art Institute, I had a good friend who was a curator of art from Bordeaux at the Contemporary Art Museum. He was the last student of Deleuze. And he said, “You don't like Deleuze and Guattari very much, do you?” And I said, “No, I loathe them. And in fact, I threw away A Thousand Plateaus.” It's the only book in my life I've ever thrown in a trash basket. And he said, “Why on earth? What's your problem?” And I said, “Because they don't tell the truth. They use language in very clever ways. But you cannot argue about whether or not there's a river that flows from the Rocky Mountains to the Pacific Ocean. And they would pretend to do otherwise.” And so he said, “But Bill, you don't understand: the whole point is the person who argues the best wins.” I found that instructive. And to hear you actually anchor yourself in the world in a philosophical tradition that is not founded on argumentation is refreshing.

Keats: I think that argumentation is at the core of my practice, but not for the sake of winning. I’m drawn to the Hegelian dialectic and even more to the Talmudic tradition in which any point is a basis for a counterpoint. A question is never resolved. It’s only enlarged.

What I do in much of my work now is that I take a position internally —  a proposition, a provocation, or a world that I create — not because I think that it is definitive, but because I think that it is a point of departure for navigating a space that I intuit to be meaningful, relevant and interesting. I seldom know my way around the space at the outset, I only know that I can’t navigate it alone. I know that it needs to be large enough for me to get lost.

Enlarging the Question

In Berkeley in 02002, I tried to find my way through the legal system. I attempted to pass a law of logic: the proposition that a equals a, that every entity is identical to itself. I held a petition drive and set up a table piled high with political buttons. It wasn’t so different from my childhood experiment of setting up a stand on the street and selling rocks as a way of understanding what money is; the rocks were meaningless except for the transaction that was happening through their sale. Equivalent to that, in trying to pass a law of logic as legislation, I was trying to figure out whether we actually can make laws, or whether they already all exist and we simply elect certain laws to be those that we follow.

Fox: One of the things you’ve done is copyright your brain.

Keats: My motivation was to explore some of the questions that have persisted for such a long time: what it is to think, what it is to be, and what is the relationship between the two? But also it was about trying to figure out the nature of intellectual property.

Instead of trying to achieve immortality through the merits of my paintings or sculptures, as artists often do, I opted to enlist the Copyright Act of 01978, which afforded copyright protection on any work for 70 years beyond the artist’s death. I submitted paperwork to the Copyright Office registering my brain as a sculpture that was formed through the act of thinking. I hypothesized that this sculpture, by virtue of being copyrighted, and through the magic of cogito ergo sum, could become a way to outsurvive myself by 70 years.

At the same time that I registered my brain with the Copyright Office to protect the neural networks, I orchestrated an IPO offering futures contracts on my individual neurons. The neural networks were really what mattered after I was dead; the ability to use those networks after my death would be essential to fulfilling the cogito and continuing to exist exclusively as myself for those 70 years. But in order to be able to fund suitable technology, as well as suitable legal protections, I needed some sort of a cash windfall at the end of my life. (Being an artist, as we all know, is not a way to get rich.) Investors were offered the opportunity to purchase a million neurons at a $10 premium against a $10,000 strike price. The neurons were, and remain, deliverable upon my death.

Fox: I’d like to talk about trees. You and I have both been involved in the UC Berkeley Sagehen Creek Field Station that is north of Truckee, California. At one point, you wanted to allow trees to have agency about the quality of their environment, giving them the ability to vote in a countywide election. Jeff Brown and Faerthen Felix, the then-director and manager, respectively, of the Sagehen Creek Station, not only let you set up camp there, but brought you in contact with scientists and instruments that could facilitate that process.          

Keats: For a while I've been trying to figure out how to move beyond rights of nature. I’ve been trying to take a broader view of ecology, considering how we’re making life worse, not only for ourselves, but for most every species on planet Earth through our actions today and arguably since the Industrial Revolution.

From an ecological perspective, giving trees the right to clean air is certainly a step in the right direction: it allows for beings in jeopardy to be protected in a court of law, and their interests to be protected in very broad terms, much as rights apply to humans. But there is something essential missing from the equation, and it has to do with representation. In other words: how might non-human species be able to participate in the collective decision-making of the democratic system in which we live?

Promotional still for Keats’ latest exhibition, “The Future Democracies Laboratory,” hosted at Modernism Gallery on October 30, 02024 and on view at the Institute of Contemporary Art San José through February 23, 02025.

We don’t really know much about what happens on this planet, let alone what is in the best interest of non-human others. If we want to make good policy, we need to be able to access the extraordinary range of sensory systems and ways in which these non-human beings make sense of the world. And, at an ethical level, these others are affected by our actions, and should, therefore, have a say in what actions are taken.

When I first approached Jeff and Faerthen — and when they introduced me to Earth Law Center in Colorado — I was just beginning to develop ideas for enlarging democratic decision-making processes. Starting with plants made sense because we humans are less than 1% of Earth’s biomass and plants are by some measures more than 80%. In other words, they’re the majority.

I started to think about plants’ participation in the democratic process initially in terms of an old electoral cliché: Are you better off now than you were four years ago? People supposedly ask themselves that question in presidential elections. How might we pose that question to plants?

I think the question could be reformulated as follows: Are you getting more stressed or less so as a result of the political decisions that are being made on your behalf in our representative democracy? All species can be monitored in terms of stress level. The hormone cortisol, for instance, is correlated with stress in the case of animals. Plants experience stress as well, as indicated by their production of phytohormones such as ethylene. Measuring these hormones might be a substitute for lining up plants at the voting booth and waiting for them to pull a lever.

Keats’ exhibit at MOD, “The Assembly of Trees.” Photo by Topbunk

It’s a thought experiment, but one I am undertaking in public at MOD, an art-and-science museum at the University of South Australia. All this year in Adelaide, 50 trees are being monitored. We aren't monitoring phytohormones, which are difficult to measure directly. Instead, we’re observing an epiphenomenon: foliage density. We're looking at whether there’s more or less foliage this year compared to last year as a proxy measure of stress. And we're inviting visitors to correlate these changes with new legislation.  

To legally enfranchise nonhuman species would probably take a constitutional amendment, an idea that we’ve been investigating at Earth Law Center. It’s an ideal but it’s not going to be approved by the electorate anytime soon. On the other hand, it seems eminently feasible to influence people’s political decisions by making them more aware of the ecosystem in which they live such that they can incorporate the interests and worldview of other species at the polls. The MOD installation is intended to encourage people to take nonhuman interests and perspectives into account when they vote.

The overarching goal is to reverse the process of human alienation that began by seeing nature as other. We can reintegrate ourselves into nature by reintegrating nature into human systems.

A bristlecone pine in Long Now's Bristlecone Preserve. Photo by Justin Oliphant

Fox: From my standpoint, Centuries of the Bristlecone is a project that came about because you wanted to find a way to demonstrate in front of humans in real time the difference between human time and bristlecone time. If I remember correctly, you originally wanted to work with sequoias or redwoods or other species, but The Long Now Foundation said, “We own the largest private grove of bristlecones in the world,” and that’s a 5,000-year potential growth pattern for a plant. 

You were looking for a place where you could take a signal from a bristlecone pine, let’s say the growth of a tree ring annually, as an indicator of the chemical composition of the atmosphere around the bristlecone. And you could put those two facts together and measure a correlation. But all this would be happening on top of an 11,000 foot mountain. How could you get that data and that ongoing signal to the public?

The answer was to find an organization nearby: the Nevada Museum of Art. We’re about as close a museum to the bristlecone pines as you can find in this state. And so we began to talk about a device that would translate and make visible that data for people to come in and apprehend on a regular basis or even on a one-time visit, just to get a sense of what the different kinds of time were. It’s an exquisite instrument that’s been designed. It’s taken us years to get here, and it’s a monumental public clock that has both human and bristlecone time being displayed on the face of that clock.

What’s going through your mind as you are coming up with the idea of Centuries of the Bristlecone?

Keats: I’m concerned about the ways in which societies have kept time since the beginning of the Industrial Revolution, by the mechanization and standardization of time through the use of mechanical, electronic, and atomic clocks. As time became more technical, it became more abstract. Like many technologies, the technology of timekeeping allowed us to disconnect from planetary systems and do what we want to do whenever we want to do it. In modern logistics, there are no temporal feedback loops to indicate the impact of our actions.

💡
WATCH David Rooney’s 02021 Long Now Talk on how time has been imagined, politicized, and weaponized over the centuries — and how it might bring peace.

In the past (pre-classical Greece, say), and still in some indigenous societies today, time reckoning has very much been about observing phenomena in your midst. Time is embedded in planetary systems and in how other creatures are experiencing these systems together with humans, all living in a state of kinship. 

I want to reintegrate modern society into those planetary systems. I want to do so through law and governance, but also through the mechanism of timekeeping.

Imagine a sapling. If we were to put markers around a tree in the shape of a spiral, and we were to mark them with future dates based on the current average annual growth for that tree, and we were then to stand back and give the tree authority to let us know what time it is, the arboreal year might deviate from the Gregorian calendar. And it would do so in ways that would be meaningful because this would be the ground truth for the tree, influenced by essential factors ranging from precipitation to the amount of carbon dioxide in the air. It would be the tree’s experience of time, as legitimate and relevant as any other experience of time. The calendar would be a way to vicariously experience time that is being experienced by others, such that time becomes a relationship. Ultimately, this is how we’ve used time amongst humans, but it needs to be enlarged in terms of who is using and construing time together.


I’d initially been inclined to work with redwood trees because of a talk I gave at the College of the Redwoods years ago. In 02015, I was invited to give a talk at The Long Now Foundation. They’d heard about cameras I’d been making with hundred- and thousand-year-long exposure times. I came in saying that I’d like to propose something new rather than just talk about projects I’d done before. At that initial meeting, I re-encountered Alexander Rose, with whom I’d gone to grade school, and who had subsequently become the Executive Director of Long Now.

As I told him my ideas about redwood time reckoning, he mentioned the bristlecones. Immediately I knew that those were the trees. He told me about Mount Washington. Immediately I knew that that was the site. It all became obvious. It made perfect sense to do this on Mount Washington, and as you said, to work with a museum. The Nevada Museum of Art’s Center for Art + Environment was perfect because of the proximity.

For all these reasons I took a road trip to Reno with Alexander and Michael McElligott, who at the time was leading Long Now’s Interval lecture series. We made a presentation, and were met with silence. At first we thought it was befuddlement, but it turned out to be the silence of people giving serious thought to our proposal. Before we left, they said yes.

And that’s when you and I started talking. We talked about how the clock needed to be monumental in order to bring people together. It needed to have the monumental scale of a municipal clock. One of the most important decisions was to engage the master clockmaker Phil Abernethy and the antiquarian horologist Brittany Nicole Cox, who have the skills to make this mechanism a reality.

A rendering of the Centuries of the Bristlecone clock that will be on display at the Nevada Museum of Art. Render by clockmaker Phil Abernethy

Centuries of the Bristlecone will be a communal gathering point for a new time protocol. Each year or two, we’ll make a trip to Mount Washington, and get the measure of time from the trees by taking a microcore. The clock has a mechanism to measure and record the growth rate shown in the most recent tree ring, and to translate it into the rate at which a pendulum swings. This clock rate will also be available online for people to calibrate their smartphone, their watch, their scheduling software.

But trees are only one dimension of the project. I’ve also been working on a system that correlates the flow of time with the flow of a river. From minute to minute, the clock is unpredictable because the flow of rivers is stochastic, encouraging people to be in the moment. Over the long term, the time indicates changes in the climate through the impact of climate change on glacier melt, rainfall, and groundwater. Like the calendar around the sapling, the calendar on this clock provides an environmental feedback loop.

Several years ago, we projected the first instantiation of this clock onto the front of the Anchorage Museum, indicating time based on the flow of five rivers in Alaska. It was the first visible sign of what time might look like if it were not homogenized like Coordinated Universal Time, of what time might look like if we understood time to be pluralistic. I've also been collaborating on performances on rivers in Atlanta, calibrated by the flow of the Chattahoochee and its tributaries. And I'll be installing two erosion calendars in Atlanta in 02025 and 02026.

Time exists as a conversation between myriad beings and living systems. The conversation becomes accessible to us through a vernacular that we know. A system that is familiar to all humans draws us out into the world while simultaneously bringing the world into our lives.

Krebs on SecurityChange Healthcare Breach Hits 100M Americans

Change Healthcare says it has notified approximately 100 million Americans that their personal, financial and healthcare records may have been stolen in a February 2024 ransomware attack that caused the largest ever known data breach of protected health information.

Image: Tamer Tuncay, Shutterstock.com.

A ransomware attack at Change Healthcare in the third week of February quickly spawned disruptions across the U.S. healthcare system that reverberated for months, thanks to the company’s central role in processing payments and prescriptions on behalf of thousands of organizations.

In April, Change estimated the breach would affect a “substantial proportion of people in America.” On Oct. 22, the healthcare giant notified the U.S. Department of Health and Human Services (HHS) that “approximately 100 million notices have been sent regarding this breach.”

A notification letter from Change Healthcare said the breach involved the theft of:

- Health Data: Medical record #s, doctors, diagnoses, medicines, test results, images, care and treatment;
- Billing Records: Records including payment cards, financial and banking records;
- Personal Data: Social Security number; driver’s license or state ID number;
- Insurance Data: Health plans/policies, insurance companies, member/group ID numbers, and Medicaid-Medicare-government payor ID numbers.

The HIPAA Journal reports that in the nine months ending on September 30, 2024, Change’s parent firm United Health Group had incurred $1.521 billion in direct breach response costs, and $2.457 billion in total cyberattack impacts.

Those costs include $22 million the company admitted to paying their extortionists — a ransomware group known variously as BlackCat and ALPHV — in exchange for a promise to destroy the stolen healthcare data.

That ransom payment went sideways when the affiliate who gave BlackCat access to Change’s network said the crime gang had cheated them out of their share of the ransom. The entire BlackCat ransomware operation shut down after that, absconding with all of the money still owed to affiliates who were hired to install their ransomware.

A breach notification from Change Healthcare.

A few days after BlackCat imploded, the same stolen healthcare data was offered for sale by a competing ransomware affiliate group called RansomHub.

“Affected insurance providers can contact us to prevent leaking of their own data and [remove it] from the sale,” RansomHub’s victim shaming blog announced on April 16. “Change Health and United Health processing of sensitive data for all of these companies is just something unbelievable. For most US individuals out there doubting us, we probably have your personal data.”

It remains unclear if RansomHub ever sold the stolen healthcare data. The chief information security officer for a large academic healthcare system affected by the breach told KrebsOnSecurity they participated in a call with the FBI and were told a third party partner managed to recover at least four terabytes of data that was exfiltrated from Change by the cybercriminal group. The FBI declined to comment.

Change Healthcare’s breach notification letter offers recipients two years of credit monitoring and identity theft protection services from a company called IDX. In the section of the missive titled “Why did this happen?,” Change shared only that “a cybercriminal accessed our computer system without our permission.”

But in June 2024 testimony to the Senate Finance Committee, it emerged that the intruders had stolen or purchased credentials for a Citrix portal used for remote access, and that no multi-factor authentication was required for that account.

Last month, Sens. Mark Warner (D-Va.) and Ron Wyden (D-Ore.) introduced a bill that would require HHS to develop and enforce a set of tough minimum cybersecurity standards for healthcare providers, health plans, clearinghouses and business associates. The measure also would remove the existing cap on fines under the Health Insurance Portability and Accountability Act, which severely limits the financial penalties HHS can issue against providers.

According to the HIPAA Journal, the biggest penalty imposed to date for a HIPAA violation was the paltry $16 million fine against the insurer Anthem Inc., which suffered a data breach in 2015 affecting 78.8 million individuals. Anthem reported revenues of around $80 billion in 2015.

A post about the Change breach from RansomHub on April 8, 2024. Image: Darkbeast, ke-la.com.

There is little that victims of this breach can do about the compromise of their healthcare records. However, because the data exposed includes more than enough information for identity thieves to do their thing, it would be prudent to place a security freeze on your credit file and on that of your family members if you haven’t already.

The best mechanism for preventing identity thieves from creating new accounts in your name is to freeze your credit file with Equifax, Experian, and TransUnion. This process is now free for all Americans, and simply blocks potential creditors from viewing your credit file. Parents and guardians can now also freeze the credit files for their children or dependents.

Since very few creditors are willing to grant new lines of credit without being able to determine how risky it is to do so, freezing your credit file with the Big Three is a great way to stymie all sorts of ID theft shenanigans. Having a freeze in place does nothing to prevent you from using existing lines of credit you may already have, such as credit cards, mortgage and bank accounts. When and if you ever do need to allow access to your credit file — such as when applying for a loan or new credit card — you will need to lift or temporarily thaw the freeze in advance with one or more of the bureaus.

All three bureaus allow users to place a freeze electronically after creating an account, but all of them try to steer consumers away from enacting a freeze. Instead, the bureaus are hoping consumers will opt for their confusingly named “credit lock” services, which accomplish the same result but allow the bureaus to continue selling access to your file to select partners.

If you haven’t done so in a while, now would be an excellent time to review your credit file for any mischief or errors. By law, everyone is entitled to one free credit report every 12 months from each of the three credit reporting agencies. But the Federal Trade Commission notes that the big three bureaus have permanently extended a program enacted in 2020 that lets you check your credit report at each of the agencies once a week for free.

MELinks October 2024

David Brin wrote an interesting article about AI ecosystems and how humans might work with machines on creative projects [1]. Also he’s right about “influencers” being like fungi.

Cory Doctorow wrote an interesting post about DRM, coalitions, and cheating [2]. It seems that people like me who want “trusted computing” to secure their own computers don’t fit well in any of the coalitions.

The CHERI capability system for using extra hardware to validate jump addresses is an interesting advance in computer science [3]. The lecture is from the seL4 Summit; this sort of advance in security goes well with a formally proven microkernel. I hope that this becomes a checkbox when ordering a custom RISC-V design.

Bunnie wrote an insightful blog post about how the Mossad might have gone about implementing the exploding pager attack [4]. I guess we will see a lot more of this in future; it seems easy to do.

Interesting blog post about Control Flow Integrity in the V8 engine of Chrome [5].

Interesting blog post about the new mseal() syscall which can be used by CFI among other things [6].

This is the Linux kernel documentation about the Control-flow Enforcement Technology (CET) Shadow Stack [7]. Unfortunately not enabled in Debian/Unstable yet.

ARM added support for Branch Target Identification in version 8.5 of the architecture [8].

The CEO of Automattic has taken his dispute with WPEngine to an epic level; this video catalogues it, and I wonder what is wrong with him [9].

NuShell is an interesting development in shell technology which runs on Linux and Windows [10].

Interesting article about making a computer game without coding using ML [11]. I doubt that it would be a good game, but maybe educational for kids.

Krebs has an insightful article about location tracking by phones, which turns out to be surprisingly accurate [12]. He has provided information on how to opt out of some of it on Android, but we need legislative action!

Interesting YouTube video about how to make a 20kW microwave oven and what it can do [13]. Don’t do this at home, or anywhere else!

The Void editor is an interesting project, a fork of VSCode that supports DIRECT connections to LLM systems where you don’t have their server acting as a middle-man and potentially snooping [14].

Worse Than FailureCodeSOD: A Base Nature

Once again, we take a look at the traditional "if (boolean) return true; else return false;" pattern. But today's, from RJ, offers us a bonus twist.

public override bool IsValid
{
   get
   {
      if (!base.IsValid)
         return false;

      return true;
   }
}

As promised, this is a useless conditional. return base.IsValid would do the job just as well. Except, that's the twist, isn't it. base is our superclass. We're overriding a method on our superclass to… just do what the base method does.

This entire function could just be deleted. No one would notice. And yet, it hasn't been. Everyone agrees that it should be, yet it hasn't been. No one's doing it. It just sits there, like a pimple, begging to be popped.
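
For comparison, here's a minimal sketch of the same anti-pattern in Python (an analogue for illustration, not RJ's actual codebase). The redundant override reads the same way in any language, and so does the fix:

class Base:
    @property
    def is_valid(self) -> bool:
        return True


class Derived(Base):
    @property
    def is_valid(self) -> bool:
        # The whole body could be `return super().is_valid`, and the
        # override itself could simply be deleted.
        if not super().is_valid:
            return False
        return True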

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Time Capsule

Author: Milo Brown William Smith was very proud of his name, not because it was a very good name (although it was) but because it granted him a certain level of anonymity. In William’s opinion, the only better name would be John Doe, since the name John Smith was made famous, and in turn infamous, […]

The post The Time Capsule appeared first on 365tomorrows.

Planet DebianDirk Eddelbuettel: gcbd 0.2.7 on CRAN: More Mere Maintenance

Another pure maintenance release 0.2.7 of the gcbd package is now on CRAN. The gcbd package provides a benchmarking framework for LAPACK and BLAS operations (as the library can be exchanged in a plug-and-play sense on suitable OSs) and records results in a local database. Its original motivation was to also compare to GPU-based operations. However, it is challenging to keep CUDA working, and packages on CRAN providing the basic functionality appear to come and go, so testing the GPU feature can be difficult. The main point of gcbd is now to actually demonstrate that ‘yes indeed’ we can just swap BLAS/LAPACK libraries without any change to R, or R packages. The ‘configure / rebuild R for xyz’ often seen with ‘xyz’ being Goto or MKL is simply plain wrong: you really can just swap them (on proper operating systems, and R configs – see the package vignette for more). But no matter how often we aim to correct this record, it invariably raises its head another time.
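
To make the "just swap them" point concrete, here is a minimal timing sketch, using Python and numpy rather than gcbd itself (an illustration only, assuming numpy is linked against the system BLAS):

import time

import numpy as np

# Time a double-precision matrix multiply (DGEMM). Rerun the same script
# after swapping the system BLAS (e.g. via the distribution's alternatives
# mechanism) and compare: the code stays unchanged, only the library differs.
n = 2000
rng = np.random.default_rng(42)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

start = time.perf_counter()
c = a @ b
print(f"{n}x{n} DGEMM: {time.perf_counter() - start:.3f}s")

np.show_config()  # prints the BLAS/LAPACK configuration numpy was built with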

This release accommodates a CRAN change request as we were referencing the (now only suggested) package gputools. As hinted in the previous paragraph, it was once on CRAN but is not right now so we adjusted our reference.

CRANberries also provides a diffstat report for the latest release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Worse Than FailureRepresentative Line: On the Log, Forever

Jon recently started a new project. When setting up his dev environment, one of his peers told him, "You can disable verbose logging by setting DEBUG_LOG=false in your config file."

Well, when Jon did that, the verbose logging remained on. When he asked his peers, they were all surprised to see that the flag wasn't turning off debug logging. "Hunh, that used to work. Someone must have changed something…" Everyone had enough new development work to do that tracking down a low-priority bug fell to Jon. It didn't take long.

const DEBUG_LOG = process.env.DEBUG_LOG || true

According to the blame, the code had been like this for a year. The commit, crammed with half a dozen features, was made by a developer who was no longer with the company, and the message was simply "Debugging". Presumably, this was intended to be a temporary change that accidentally got committed, and no one noticed or cared.
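
The truthiness trap isn't specific to JavaScript, either. Here's the same bug and one way out, sketched in Python for comparison (illustrative only, not Jon's actual fix):

import os

# Buggy: environment values are strings, so "false" is truthy, and a missing
# variable falls through to the `or True` default. Either way, debug logging
# can never be turned off.
DEBUG_LOG_BUGGY = os.environ.get("DEBUG_LOG") or True

# Fixed: parse the string explicitly, keeping verbose logging as the default.
DEBUG_LOG = os.environ.get("DEBUG_LOG", "true").lower() != "false"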

Jon fixed it, and moved on. There was likely going to be plenty more to find.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Trees Are Chatty

Author: Majoki “What a poetic way of expressing it, Sibyl,” Cassie warily admitted. She was walking along the stream that meandered through the glade, the aspens chattering in the stiffening evening breeze. *It’s true, Cassandra. The trees are chatty. They’re discussing the gathering storm.* Cassie tilted her head, as she did every time, Sibyl voiced […]

The post The Trees Are Chatty appeared first on 365tomorrows.

Cryptogram Law Enforcement Deanonymizes Tor Users

The German police have successfully deanonymized at least four Tor users. It appears they watch known Tor relays and known suspects, and use timing analysis to figure out who is using what relay.

Tor has written about this.

Hacker News thread.

Cory DoctorowSpill, part four (a Little Brother story)

Will Staehle's cover for 'Spill': a white star on an aqua background; a black stylized fist rises out of the star with a red X over its center.

This week on my podcast, I read part four of “Spill“, a new Little Brother story commissioned by Clay F Carlson and published on Reactor, the online publication of Tor Books. Also available in DRM-free ebook form as a Tor Original.

I didn’t plan to go to Oklahoma, but I went to Oklahoma.

My day job is providing phone tech support to people in offices who use my boss’s customer-relationship management software. In theory, I can do that job from anywhere I can sit quietly on a good Internet connection for a few hours a day while I’m on shift. It’s a good job for an organizer, because it means I can go out in the field and still pay my rent, so long as I can park a rental car outside of a Starbucks, camp on their WiFi, and put on a noise-canceling headset. It’s also good organizer training because most of the people who call me are angry and confused and need to have something difficult and technical explained to them.

My comrades started leaving for Oklahoma the day the Water Protector camp got set up. A lot of them—especially my Indigenous friends—were veterans of the Line 3 Pipeline, the Dakota Access Pipeline, and other pipeline fights, and they were plugged right into that network.

The worse things got, the more people I knew in OK. My weekly affinity group meeting normally had twenty people at it. One week there were only ten of us. The next week, three. The next week, we did it on Zoom (ugh) and most of the people on the line were in OK, up on “Facebook Hill,” the one place in the camp with reliable cellular data signals.


MP3

,

Planet DebianSven Hoexter: GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

The update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. when kustomize is involved, with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic: sometimes it still works, and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000, because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

Cryptogram Criminals Are Blowing up ATMs in Germany

It’s low tech, but effective.

Why Germany? It has more ATMs than other European countries, and—if I read the article right—they have more money in them.

Planet DebianThomas Lange: 30.000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It had reached 10.000 jobs in March 2021 and 20.000 jobs in June 2023. A nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3%     cloud image
11%     live ISO
86%     install ISO

Distribution

2%     bullseye
8%     trixie
12%     ubuntu 24.04
78%     bookworm

Misc

  • 18%   used a custom postinst script
  • 11%   provided their ssh pub key for passwordless root login
  • 50%   of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18 GB in size.

Execution Times

The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The amount of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. Times given are calculated from the jobs of the past two weeks.

Job type     Avg     Max
install no desktop     1 min     2 min
install GNOME     2 min     5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type     Avg     Max
live no desktop     4 min     6 min
live GNOME     8 min     11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story of your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

Worse Than FailureCodeSOD: Trophy Bug Hunting

Quality control is an important business function for any company. When your company is shipping devices with safety concerns, it's even more important. In some industries, a quality control failure is bound to be national headlines.

When the quality control software tool stopped working, everyone panicked. At which point, GRH stepped in.

Now, we've discussed this software and GRH before, but as a quick recap, it was:

written by someone who is no longer employed with the company, as part of a project managed by someone who is no longer at the company, requested by an executive who is also no longer at the company. There are no documented requirements, very few tests, and a lot of "don't touch this, it works".

And this was a quality control tool. So we're already in bad shape. It also had been unmaintained for years: a few of the QC engineers had tried to take it over, but weren't programmers, and it had essentially languished.

Specifically, it was a quality control tool used to oversee the process by about 50 QC engineers. It automates a series of checks by wrapping around third party software tools, in a complex network of "this device gets tested by generating output in program A, feeding it to program B, then combining the streams and sending them to the device, but this device gets tested using programs D, E, and F."

The automated process using the tool has a shockingly low error rate. Without the tool, doing things manually, the error rate climbs to 1-2%. So unless everyone wanted to see terrifying headlines in the Boston Globe about their devices failing, GRH needed to fix the problem.

GRH was given the code, in this case a zip file on a shared drive. It did not, at the start, even build. After fighting with the project configuration to resolve that, GRH was free to start digging in deeper.

Public Sub connect2PCdb()
        Dim cPath As String = Path.Combine(strConverterPath, "c.pfx")
        Dim strCN As String

        ' JES 12/6/2016: Modify the following line if MySQL server is changed to a different server.  A dump file will be needed to re-create teh database in the new server.
        strCN = "metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=MySql.Data.MySqlClient;provider connection string='server=REDACTED;user id=REDACTED;database=REDACTED;sslmode=Required;certificatepassword=REDACTED;certificatefile=REDACTED\c.pfx;password=REDACTED'"
        strCN = Regex.Replace(strCN, "certificatefile=.*?pfx", "certificatefile=" & cPath)
        pcContext = New Entities(strCN)
        strCN = "metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=MySql.Data.MySqlClient;provider connection string='server=REDACTED;user id=REDACTED;persistsecurityinfo=True;database=REDACTED;password=REDACTED'"
        strCN = Regex.Match(strCN, ".*'(.*)'").Groups(1).Value

        Try
            strCN = pcContext.Database.Connection.ConnectionString
            cnPC.ConnectionString = "server=REDACTED;user id=REDACTED;password=REDACTED;database=REDACTED;"
            cnPC.Open()
        Catch ex As Exception

        End Try
    End Sub

This is the code which connects to the backend database. The code is more of a trainwreck than a WTF. It's got a wonderful mix of nonsense in here, though: a hard-coded connection string which includes plaintext passwords, regex munging to modify the string, then hard-coding a string again, only to use regexes to extract a subset of the string. A subset we don't use.

And then, for a bonus, the whole thing has a misleading comment: "modify the following line" if we move to a different server? We have to modify several lines, because we keep copy/pasting the string around.

Oh, and of course, it uses the pattern of "open a database connection at application startup, and just hold that connection forever," which is a great way to strain your database as your userbase grows.
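
For contrast, here is a minimal sketch of the connection-per-operation alternative (Python and sqlite3 purely for brevity; the path and table are hypothetical): each unit of work opens, uses, and closes its own connection, so nothing is held for the lifetime of the process.

import sqlite3
from contextlib import closing

DB_PATH = "qc.db"  # hypothetical path, for illustration only


def fetch_open_issues():
    # One connection per operation; the context managers guarantee that the
    # transaction is committed (or rolled back) and the connection is closed.
    with closing(sqlite3.connect(DB_PATH)) as conn:
        with conn:
            cur = conn.execute(
                "SELECT id, status FROM issues WHERE status = ?", ("open",)
            )
            return cur.fetchall()

Real applications usually layer a connection pool on top, but even a pool scopes connections to units of work rather than to the process.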

The good news about the hard-coded password is that it got GRH access to the database. With that, it was easy to see what the problem was: the database was full. The system was overly aggressive with logging, the logs went to database tables, the server was an antique with a rather small hard drive, and the database wasn't configured to even use all of that space anyway.

Cleaning up old logs got the engineers working again. GRH kept working on the code, though, cleaning it up and modernizing it. Updating to the latest version of the .NET Core framework made the data access far simpler, and got rid of the need for hard-coded connection strings. Still, GRH left the method looking like this:

    Public Sub connect2PCdb()
        'Dim cPath As String = Path.Combine(strConverterPath, "c.pfx")
        'Dim strCN As String

        ' JES 12/6/2016: Modify the following line if MySQL server is changed to a different server.  A dump file will be needed to re-create teh database in the new server.
        'strCN = "metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=MySql.Data.MySqlClient;provider connection string='server=REDACTED;user id=REDACTED;database=REDACTED;sslmode=Required;certificatepassword=REDACTED;certificatefile=REDACTED\c.pfx;password=REDACTED'"
        'strCN = Regex.Replace(strCN, "certificatefile=.*?pfx", "certificatefile=" & cPath)
        'pcContext = New Entities(strCN)
        'strCN = "metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=MySql.Data.MySqlClient;provider connection string='server=REDACTED;user id=REDACTED;persistsecurityinfo=True;database=REDACTED;password=REDACTED'"
        'strCN = Regex.Match(strCN, ".*'(.*)'").Groups(1).Value

        'GRH 2021-01-15.  Connection information moved to App.Config
        'GRH 2021-08-13.  EF Core no longer supports App.Config method
        pcContext = New PcEntities

        Try
            ' GRH 2021-08-21  This variable no longer exists in .NET 5
            'strCN = pcContext.Database.Connection.ConnectionString
            ' GRH 2021-08-20  Keeping the connection open causes EF Core to not work
            'cnPC.ConnectionString = "server=REDACTED;user id=REDACTED;password=REDACTED;database=REDACTED;SslMode=none"
            'cnPC.Open()
        Catch ex As Exception

        End Try
    End Sub

It's now a one-line method, with most of the code commented out, instead of removed. Why on Earth is the method left like that?

GRH explains:

Yes, I could delete the function as it is functionally dead, but I keep it for the same reasons that a hunter mounts a deer's head above her mantle.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsThe Last Resort

Author: Julian Miles, Staff Writer Abby whips her wing-tentacles about, making little ‘cracks’ of delight as a gigantic silver dinosaur walks by, its crystal eyes filled with icy fire. Every footfall causes things to shake and drinks to splash about in their cups – unless they’re being carried on the spindly spider-legged copper tables that […]

The post The Last Resort appeared first on 365tomorrows.

,

David BrinScience as the ultimate accountability process

Before getting into Science as the ultimate accountability process, let me allow that I am biased in favor of this scientific era!  Especially after last weekend when Caltech - my alma mater - honored me - along with three far-more-deserving others - as Distinguished Alumnus.  Seems worth noting. Especially since it is one honor I truly never expected!


You  readers of Contrary Brin might be surprised that, with the crucial US election looming, I'm gonna step back from cliff-edge politics, to offer some Big Picture Perspective about how science works... and civilization, in general. 


But I think maybe perspective is kinda what we need, right now.



== How did we achieve the flawed miracle that we now have... and take too much for granted? ==


All the way back to our earliest records, civilization has faced a paramount problem. How can we maintain and improve a decent society amid our deeply human propensity for lies and delusion? 


As recommended by Pericles in the fifth century BCE… then later by Adam Smith and the founders of our era… humanity has only ever found one difficult but essential trick that actually works at freeing leaders and citizens to craft policy relatively - or partially - free from deception and falsehoods. 


That trick is NOT preaching or ‘don’t lie’ commandments. Sure, for 6000 years, top elites finger-wagged and passed laws against such stuff... only to become top liars and self-deceivers! Bringing calamities down upon the nations and peoples that they led.


Laws can help. But the truly ’essential trick’ that we’ve gradually become somewhat good-at is Reciprocal Accountability … freeing rival powers and even average citizens to keep an eye on each other laterally. Speaking up when we see what we perceive as lies or mistakes.


== How we've done this... a method under threat! ==

Yeah, sometimes it’s the critic who is wrong, and conventional wisdom can be right!  

Indeed, one of today's mad manias is to assume that experts - who spent their lives studying a topic closely - must be clueless compared to those who are 'informed' by Facebook memes and cable news rants.

Still, Criticism Is the Only Known Antidote to Error (CITOKATE!)...

...and one result of free speech criticism is a system that’s open enough to spot most errors – even those by the mighty – and criticize them (sometimes just in time and sometimes too late) so that many (never all!) of them get corrected. 

We aren’t yet great at it! Though better than all prior generations. And at the vanguard in this process is science.


== The horrible, ingrate reflex is NOT 'questioning authority' ==

Sure, scientists are human and subject to the same temptations to self-deceive or even tell lies. We who were trained in a scientific field (or two or three) were taught to recite the sacred catechism of science: “I might be wrong!” 


That core tenet – plus piles of statistical and error-checking techniques – made modern science different – and vastly more effective (and less hated) -- than all or any previous priesthoods. Still, we remain human. And delusion in science can have weighty consequences.


Which brings us to this article by Chris Said: "Scientific whistleblowers can be compensated for their service."  It begins with a paragraph that’s both true and also a wild exaggeration!  Still, the author poses a problem that needs an answer:


“Science has a fraud problem. Highly cited research is often based on faked data, which causes other researchers to pursue false leads. In medical research, the time wasted by followup studies can delay the discovery of effective treatments for serious diseases, potentially causing millions of lives to be lost.”


As I said: that’s an exaggeration – one that feeds into today’s Mad Right, in its all-out war vs. every fact-using profession. (Not just science, but also teaching, medicine and law and civil service... all the way to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.) 


Still, the essay is worth reading for its proposed solution. Which boils down to: do more reciprocal accountability, only do it better!

The proposal would start with the fact that most scientists are competitive creatures! Among the most competitive that this planet ever produced – nothing like the lemming, paradigm-hugger stereotype spread by some on the far-left... and by almost everyone on today’s entire gone-mad right. 


Only this author proposes that we then augment that competitiveness with whistle blower rewards**, to incentivize the cross-checking process with cash prizes.

Hey, I'm all in favor! I’ve long pushed for stuff like this since my 1998 book The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom? 


...and more recently my proposal for a FACT Act...


...and especially lately, suggesting incentives so that Artificial Intelligences will hold each other accountable (our only conceivable path to a ’soft AI landing.’) 


So, sure… the article is worth a look - and more discussion. 


Just watch it when yammerers attack science in general with the 'lemming' slander. Demand cash wagers over that one!



== A useful tech rule-of-thumb? ==


Do you know the “hype cycle curve”? That’s an observational/pragmatic correlation tool devised by Gartner in the 90s, for how new technologies often attract heaps of zealous attention, followed by a crash of disillusionment, when even the most promising techs encounter obstacles to implementation, and many just prove wrong. 


That trough is followed, in a few cases, by a more grounded rise in solid investment, as productivity takes hold. (It happened repeatedly with railroads and electricity and later with computers and the Internet and seems to be happening with AI.) The inimitable Sabine Hossenfelder offers a podcast about this, using recent battery tech developments as examples. 


Your takeaways: yes, it seems that some battery techs may deliver major good news pretty soon. And remember this ‘hype cycle’ thing is correlative, not causative. It has almost no predictive utility in individual cases.


But the final take-away is also important. That progress is being made! Across many fronts and very rapidly. And every single thing you are being told by the remnant denialist cult about the general trend toward sustainable technologies is a damned lie.


Take this jpeg I just copied from the newsletter of Peter Diamandis, re: the rapidly maturing tech of perovskite based solar cells, which have a theoretically possible efficiency of 66%, double that of silicon. (And many of you first saw the word “perovskite” in my novel Earth, wherein I pointed out that most high-temp superconductors take that mineral form… and so does most of the Earth’s mantle. Put those two together!)


Do subscribe to Peter’s Abundance Newsletter, as an antidote to the gloom that’s spread by today’s entire gone-mad-right and by much of today’s dour, farthest-fringe-left. 


The latter are counter-productive sanctimony junkies, irritating but statistically unimportant as we make progress without much help from them.


The former are a monstrously insane, science-hating treason-cult that’s potentially lethal to our civilization and world and our children. And for those mouth-foaming neighbors of ours, the only cure will be victory – yet again, and with malice toward none – by the Union side in this latest phase of our recurring confederate fever. 


======


** The 1986 Whistle Blower law, enticing tattle-tales with up to 30% cuts of any $$ recovered by the US taxpayers, has just been gutted by a Trump appointed (and ABA 'not-qualified') judge. Gee, I wonder why?



Planet DebianEnrico Zini: Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step-by-step explanation of what is going on in code that I'm about to push to my team.

Let's take this relatively straightforward Python code. It has a function printing an int, and a decorator that makes its argument optional, taking it from a global default if missing:

from unittest import mock

default = 42


def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works nicely as expected:

$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None

It lacks functools.wraps and typing, though. Let's add them.

Adding functools.wraps

After adding a simple @functools.wraps, mock unexpectedly stops working:

# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'

This is the new code, with explanations and a fix:

# Introduce functools
import functools
from unittest import mock

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    # Fix:
    # del wrapped.__wrapped__

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.signature, which follows the __wrapped__
    # attribute set by functools.wraps, and that points at the wrong
    # signature: the idea that value is optional is now lost
    fiddle.print()
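
To see the mechanism in isolation, here is a minimal sketch (standard library only; the names are mine, not from the code above) showing how inspect.signature follows __wrapped__, and how either deleting the attribute or passing follow_wrapped=False gets the wrapper's own signature back:

import functools
import inspect


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        return f(self, value if value is not None else 42)

    return wrapped


class Demo:
    @with_default
    def print(self, value):
        print("Answer:", value)


# Follows __wrapped__ set by functools.wraps: reports (self, value),
# making value look mandatory, which is what trips up mock's autospec
print(inspect.signature(Demo.print))

# Ignores __wrapped__: reports the wrapper's own (self, value=None)
print(inspect.signature(Demo.print, follow_wrapped=False))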

Adding typing

For simplicity, from now on let's change Fiddle.print to match its wrapped signature:

      # Give up with making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)

Typing with ParamSpec

# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock

default = 42

P = ParamSpec("P")


def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)

mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:

test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"
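
For contrast, here is a minimal sketch (with a hypothetical log_calls decorator) of the pattern ParamSpec does handle cleanly: forwarding all arguments untouched, without naming value or giving it a default, which is exactly what we cannot do here:

import functools
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")


def log_calls(f: Callable[P, R]) -> Callable[P, R]:
    # ParamSpec is happy as long as the wrapper forwards *args/**kwargs
    # verbatim; it has no way to express "make one argument optional"
    @functools.wraps(f)
    def wrapped(*args: P.args, **kwargs: P.kwargs) -> R:
        print("calling", f.__name__)
        return f(*args, **kwargs)

    return wrapped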

Typing with Callable

We can use explicit Callable argument lists:

# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock

default = 42

A = TypeVar("A")


# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:

test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]

typing's documentation says:

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

Let's do that!

Typing with Protocol, take 1

# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself;
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

New mypy complaints:

test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]

What happens with class methods is that the function object has a __get__ method that generates bound versions of itself. Our Printer protocol does not define it, so mypy is now unable to type the bound method correctly.
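
As a minimal illustration of that descriptor mechanism (a toy class, not from the code above): attribute access on an instance calls the function's __get__, which returns a bound method with self already filled in:

class C:
    def greet(self):
        print("hello")


c = C()

# c.greet is sugar for C.__dict__["greet"].__get__(c, C)
print(c.greet)
print(C.__dict__["greet"].__get__(c, C))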

Typing with Protocol, take 2

So... we add the function descriptor method to our Protocol!

A lot of this is taken from this discussion: https://github.com/python/typing/discussions/1040

# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)

# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040


class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""

    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""


class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""

    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)

    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...

    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...

    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""

    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself;
    # the Printer protocol now defines it too, so mypy can type the bound
    # method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works! It's typed! And mypy is happy!

365 Tomorrows: Book Mouse

Author: Brooks C. Mendell “Where is she?” asked Dr. Nemur, holding her glasses in place while looking under a chair. “Relax, Doc,” said Burt. “It’s only a mouse. We’ll find her.” “Only a mouse?” said Nemur. “Her frontal cortex packs more punch than your bird brain.” “I get it,” said Burt. “I’m not your type.” […]

The post Book Mouse appeared first on 365tomorrows.

Cory Doctorow: Spill, part three (a Little Brother story)

Will Staehle's cover for 'Spill': a white star on an aqua background; a black stylized fist rises out of the star with a red X over its center.

This week on my podcast, I read part three of “Spill“, a new Little Brother story commissioned by Clay F Carlson and published on Reactor, the online publication of Tor Books. Also available in DRM-free ebook form as a Tor Original.

I didn’t plan to go to Oklahoma, but I went to Oklahoma.

My day job is providing phone tech support to people in offices who use my boss’s customer-relationship management software. In theory, I can do that job from anywhere I can sit quietly on a good Internet connection for a few hours a day while I’m on shift. It’s a good job for an organizer, because it means I can go out in the field and still pay my rent, so long as I can park a rental car outside of a Starbucks, camp on their WiFi, and put on a noise-canceling headset. It’s also good organizer training because most of the people who call me are angry and confused and need to have something difficult and technical explained to them.

My comrades started leaving for Oklahoma the day the Water Protector camp got set up. A lot of them—especially my Indigenous friends—were veterans of the Line 3 Pipeline, the Dakota Access Pipeline, and other pipeline fights, and they were plugged right into that network.

The worse things got, the more people I knew in OK. My weekly affinity group meeting normally had twenty people at it. One week there were only ten of us. The next week, three. The next week, we did it on Zoom (ugh) and most of the people on the line were in OK, up on “Facebook Hill,” the one place in the camp with reliable cellular data signals.


MP3

Cory Doctorow: Spill, part two (a Little Brother story)

Will Staehle's cover for 'Spill': a white star on an aqua background; a black stylized fist rises out of the star with a red X over its center.

This week on my podcast, I read part two of “Spill“, a new Little Brother story commissioned by Clay F Carlson and published on Reactor, the online publication of Tor Books. Also available in DRM-free ebook form as a Tor Original.

I didn’t plan to go to Oklahoma, but I went to Oklahoma.

My day job is providing phone tech support to people in offices who use my boss’s customer-relationship management software. In theory, I can do that job from anywhere I can sit quietly on a good Internet connection for a few hours a day while I’m on shift. It’s a good job for an organizer, because it means I can go out in the field and still pay my rent, so long as I can park a rental car outside of a Starbucks, camp on their WiFi, and put on a noise-canceling headset. It’s also good organizer training because most of the people who call me are angry and confused and need to have something difficult and technical explained to them.

My comrades started leaving for Oklahoma the day the Water Protector camp got set up. A lot of them—especially my Indigenous friends—were veterans of the Line 3 Pipeline, the Dakota Access Pipeline, and other pipeline fights, and they were plugged right into that network.

The worse things got, the more people I knew in OK. My weekly affinity group meeting normally had twenty people at it. One week there were only ten of us. The next week, three. The next week, we did it on Zoom (ugh) and most of the people on the line were in OK, up on “Facebook Hill,” the one place in the camp with reliable cellular data signals.


MP3


Planet Debian: Steve McIntyre: Mini-Debconf in Cambridge, October 10-13 2024

[Photo: group photo]

Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!

[Photo: cakes]

Hacking together

[Photo: minicamp]

For the first two days, we had a "mini-debcamp" with a disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.

Sessions and talks

[Photo: Secure Boot talk]

Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)

Video team awesomeness

[Photo: video team in action]

Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.

A great time for all

Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!

Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!

ME: The CUPS Vulnerability

The Announcement

Late last month there was an announcement of a “severity 9.9 vulnerability” allowing remote code execution that affects “all GNU/Linux systems (plus others)” [1]. For something to affect all Linux systems it would have to be either a kernel issue or an sshd issue. The announcement included complaints about the lack of response from vendors, and the statement “And YES: I LOVE hyping the sh1t out of this stuff because apparently sensationalism is the only language that forces these people to fix”.

He seems to have a different experience of reporting bugs than I do; I have had plenty of success getting bugs fixed without hyping them. I just report the bug, wait a while, and it gets fixed. I have reported potential security bugs without even bothering to try to prove that they were exploitable (any situation where you can make a program crash is potentially exploitable); I just report them and they get fixed. I was very dubious about his ability to determine how serious a bug is and to accurately report it, so this wasn’t a situation where I was waiting for the disclosure to discover whether it affected me. I was quite confident that my systems wouldn’t be at any risk.

Analysis

Not All Linux Systems Run CUPS

When it was published my opinion was proven to be correct: it turned out to be a series of CUPS bugs [2]. To describe that as “all GNU/Linux systems (plus others)” seems like a vast overstatement, maybe a good thing to say if you want to be a TikTok influencer but not if you want to be known for computer security work.

For the Debian distribution the cups-browsed package (which seems to be the main exploitable one) is recommended by cups-daemon; as I have my Debian systems configured to not install recommended packages by default, it wasn’t installed on any of my systems. Also the vast majority of my systems don’t do printing and therefore don’t have any part of CUPS installed.

CUPS vs NAT

The next issue is that in Australia most home ISPs don’t have IPv6 enabled, and CUPS doesn’t do the things needed to allow receiving connections from the outside world via NAT with IPv4. If inbound port 631 is blocked on both TCP and UDP, as is the default on Australian home Internet, or if there is a correctly configured firewall in place, then the network is safe from attack. There is a feature called UPnP port forwarding [3] that allows server programs to ask a router to send inbound connections to them; this is apparently usually turned off by default in router configuration. If it is enabled then there are Debian packages of software to manage this: the miniupnpc package has the client (which can request NAT changes on the router) [4]. That package is not installed on any of my systems, and for my home network I don’t use a router that runs UPnP.
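
To illustrate what a UPnP-enabled router will do on request, here is a minimal sketch using the Python binding from the python3-miniupnpc package (assuming that binding is installed; the port mapping is a hypothetical example and only succeeds if the router actually has UPnP enabled):

import miniupnpc

# Ask the local router to forward inbound connections on external port
# 631 to this host, exactly the kind of exposure that would make a
# listening cups-browsed reachable from the Internet
u = miniupnpc.UPnP()
u.discoverdelay = 200  # milliseconds to wait for router replies
u.discover()
u.selectigd()
u.addportmapping(631, "UDP", u.lanaddr, 631, "cups-browsed demo", "")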

The only program I knowingly run that uses UPnP is Warzone 2100, and as I don’t play network games that doesn’t happen. Also, as an aside, in version 4.4.2-1 of warzone2100 in Debian and Ubuntu I made it use Bubblewrap to run the game in a container. So a Remote Code Execution bug in Warzone 2100 won’t be an immediate win for an attacker (exploits via X11 or Wayland are another issue).

MAC Systems

Debian has had AppArmor enabled by default since Buster was released in 2019 [5]. There are claims that AppArmor will stop this exploit from doing anything bad, but my systems run SE Linux, so I checked what the SE Linux policy allows.

To check SE Linux access I first used the “semanage fcontext” command to check the context of the binary; cupsd_exec_t means that the daemon runs as cupsd_t. Then I checked what file access is granted with the sesearch program: mostly just access to temporary files, cupsd config files, the faillog, the Kerberos cache files (not used on the Kerberos client systems I run), Samba run files (which might offer a possibility of exploiting something), and the security_t type used for interfacing with kernel security infrastructure. I then checked the access to the security class and found that it is permitted to check contexts and access-vectors – not access that can be harmful.

The next test was to use sesearch to discover what capabilities are granted; unfortunately these include the sys_admin capability, which allows many sysadmin tasks that could be harmful (I just checked the Fedora source and Fedora 42 has the same access). Whether the sys_admin capability can be used to do bad things with the limited access cupsd_t has to device nodes etc is not clear. But this access is undesirable.

So the SE Linux policy in Debian and Fedora will stop cupsd_t from writing SETUID programs that could give random users root access, and stop it from writing to /etc/shadow etc. But the sys_admin capability might allow it to do hostile things, and I have already uploaded a changed policy to Debian/Unstable to remove that capability. The sys_rawio capability also looked concerning, but it’s apparently needed to probe for USB printers, and as the domain has no access to block devices it is otherwise harmless. Below are the commands I used to discover what the policy allows, and their output.

# semanage fcontext -l|grep bin/cups-browsed
/usr/bin/cups-browsed                              regular file       system_u:object_r:cupsd_exec_t:s0 
# sesearch -A -s cupsd_t -c file -p write
allow cupsd_t cupsd_interface_t:file { append create execute execute_no_trans getattr ioctl link lock map open read rename setattr unlink write };
allow cupsd_t cupsd_lock_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_log_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_runtime_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_rw_etc_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_tmp_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t faillog_t:file { append getattr ioctl lock open read write };
allow cupsd_t init_tmpfs_t:file { append getattr ioctl lock read write };
allow cupsd_t krb5_host_rcache_t:file { append create getattr ioctl link lock open read rename setattr unlink write }; [ allow_kerberos ]:True
allow cupsd_t print_spool_t:file { append create getattr ioctl link lock open read relabelfrom relabelto rename setattr unlink write };
allow cupsd_t samba_var_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write }; [ allow_kerberos ]:True
allow cupsd_t usbfs_t:file { append getattr ioctl lock open read write };
# sesearch -A -s cupsd_t -c security
allow cupsd_t security_t:security check_context; [ allow_kerberos ]:True
allow cupsd_t security_t:security { check_context compute_av };
# sesearch -A -s cupsd_t -c capability
allow cupsd_t cupsd_t:capability net_bind_service; [ allow_ypbind ]:True
allow cupsd_t cupsd_t:capability { audit_write chown dac_override dac_read_search fowner fsetid ipc_lock kill net_bind_service setgid setuid sys_admin sys_rawio sys_resource sys_tty_config };
# sesearch -A -s cupsd_t -c capability2
allow cupsd_t cupsd_t:capability2 { block_suspend wake_alarm };
# sesearch -A -s cupsd_t -c blk_file

Conclusion

This is an example of how not to handle security issues. Some degree of promotion is acceptable, but this was very excessive and will result in people not taking security announcements seriously in future. I wonder if this is even a good career move by the researcher in question: will enough people believe that they actually did something good here to outweigh the number of people who think it was misleading at best?

365 Tomorrows: The Tower

Author: Mark Renney The island is getting smaller, but those who reside in the Tower are in denial. Hiding behind the steel rafters and columns and the reinforced sheets of glass that comprise the walls of their homes, they won’t accept that a very real danger lurks beyond their windows. The occupants of the Tower, […]

The post The Tower appeared first on 365tomorrows.


Cryptogram: Watermark for LLM-Generated Text

Researchers at Google have developed a watermark for LLM-generated text. The basics are pretty obvious: the LLM chooses between tokens partly based on a cryptographic key, and someone with knowledge of the key can detect those choices. What makes this hard is (1) how much text is required for the watermark to work, and (2) how robust the watermark is to post-generation editing. Google’s version looks pretty good: it’s detectable in text as small as 200 tokens.
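
As a rough sketch of the idea (my own toy scheme, not Google's actual algorithm): generation nudges sampling toward a keyed pseudorandom "green list" of tokens, and a detector holding the key tests whether the text contains more green tokens than chance would predict:

import hashlib
import hmac

KEY = b"watermark-key"  # hypothetical shared secret


def is_green(prev_token: str, token: str) -> bool:
    # Keyed pseudorandom partition of the vocabulary, re-drawn for each
    # context; generation would bias token choice toward green tokens
    mac = hmac.new(KEY, f"{prev_token}|{token}".encode(), hashlib.sha256)
    return mac.digest()[0] % 2 == 0


def green_fraction(tokens: list[str]) -> float:
    # About 0.5 for unwatermarked text; markedly higher suggests a watermark
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)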

Worse Than Failure: Error'd: What Goes Around

No obvious pattern fell out of last week's submissions for Error'd, but I did especially like Caleb Su's example.

Michael R., apparently still job hunting, reports "I have signed up to outlier.ai to make some $$$ on the side. No instructions necessary."


Peter G. repeats a recurring theme of lost packages, saying "(Insert obligatory snark about Americans and geography. No, New Zealand isn't located in Washington DC)." A very odd coincidence, since neither the lat/long nor the zip code is particularly interesting.


"The Past Is Mutable," declares Caleb Su , explaining "In the race to compete with Gmail feature scheduling emails to send in the *future*, Outlook now lets you send emails in the past! Clearly, someone at Microsoft deserves a Nobel Prize for defying the basic laws of unidirectional time." That's thinking different.


Explorer xOneca explains this snapshot: "Was going to watch a Youtube video in DuckDuckGo, and while diagnosing why it wasn't playing I found this. It seems that youtube-nocookie.com actually *sets* cookies..?"


Morgan either found or made a funny. But it is a funny. "Now when I think about it I do like Option 3 more…" I rate this question a 👎



365 Tomorrows: The Other SETI

Author: David Barber This was back in 1937, in Wheaton, Illinois, where Grote Reber built a radio telescope to track down persistent background noise that was annoying Bell Telephone Labs. The Depression still lingered and Bell wouldn’t employ him, but in his spare time Reber built a 30-foot dish in his mother’s back yard and […]

The post The Other SETI appeared first on 365tomorrows.