Planet Russell


Krebs on Security: Task Force Seeks to Disrupt Ransomware Payments

Some of the world’s top tech firms are backing a new industry task force focused on disrupting cybercriminal ransomware gangs by limiting their ability to get paid, and targeting the individuals and finances of the organized thieves behind these crimes.

In an 81-page report delivered to the Biden administration this week, top executives from Amazon, Cisco, FireEye, McAfee, Microsoft and dozens of other firms joined the U.S. Department of Justice (DOJ), Europol and the U.K. National Crime Agency in calling for an international coalition to combat ransomware criminals, and for a global network of ransomware investigation hubs.

The Ransomware Task Force urged the White House to make finding, frustrating and apprehending ransomware crooks a priority within the U.S. intelligence community, and to designate the current scourge of digital extortion as a national security threat.

The Wall Street Journal recently broke the news that the DOJ was forming its own task force to deal with the “root causes” of ransomware. An internal DOJ memo reportedly “calls for developing a strategy that targets the entire criminal ecosystem around ransomware, including prosecutions, disruptions of ongoing attacks and curbs on services that support the attacks, such as online forums that advertise the sale of ransomware or hosting services that facilitate ransomware campaigns.”

According to security firm Emsisoft, almost 2,400 U.S.-based governments, healthcare facilities and schools were victims of ransomware in 2020.

“The costs of ransomware go far beyond the ransom payments themselves,” the task force report observes. “Cybercrime is typically seen as a white-collar crime, but while ransomware is profit-driven and ‘non-violent’ in the traditional sense, that has not stopped ransomware attackers from routinely imperiling lives.”

A proposed framework for a public-private operational ransomware campaign. Image: IST.

It is difficult to gauge the true cost and size of the ransomware problem because many victims never come forward to report the crimes. As such, a number of the task force’s recommendations focus on ways to encourage more victims to report the crimes to their national authorities, such as requiring victims and incident response firms who pay a ransomware demand to report the matter to law enforcement and possibly regulators at the U.S. Treasury Department.

Last year, Treasury issued a controversial memo warning that ransomware victims who end up sending digital payments to people already being sanctioned by the U.S. government for money laundering and other illegal activities could face hefty fines.

Philip Reiner, CEO of the Institute for Security and Technology and executive director of the industry task force, said the reporting recommendations are one of several areas where federal agencies will likely need to dedicate more employees. For example, he said, expecting victims to clear ransomware payments with the Treasury Department first assumes the agency has the staff to respond in any kind of timeframe that might be useful for a victim undergoing a ransomware attack.

“That’s why we were so dead set in putting forward a comprehensive framework,” Reiner said. “That way, the Department of Homeland Security can do what they need to do, the State Department, Treasury gets involved, and it all needs to be synchronized for going after the bad guys with the same alacrity.”

Some have argued that making it illegal to pay a ransom is one way to decrease the number of victims who acquiesce to their tormentors’ demands. But the task force report says we’re nowhere near ready for that yet.

“Ransomware attackers require little risk or effort to launch attacks, so a prohibition on ransom payments would not necessarily lead them to move into other areas,” the report observes. “Rather, they would likely continue to mount attacks and test the resolve of both victim organizations and their regulatory authorities. To apply additional pressure, they would target organizations considered more essential to society, such as healthcare providers, local governments, and other custodians of critical infrastructure.”

“As such, any intent to prohibit payments must first consider how to build organizational cybersecurity maturity, and how to provide an appropriate backstop to enable organizations to weather the initial period of extreme testing,” the authors concluded in the report. “Ideally, such an approach would also be coordinated internationally to avoid giving ransomware attackers other avenues to pursue.”

The task force’s report comes as federal agencies have been under increased pressure to respond to a series of ransomware attacks that were mass-deployed as attackers began exploiting four zero-day vulnerabilities in Microsoft Exchange Server email products to install malicious backdoors. Earlier this month, the DOJ announced the FBI had conducted a first-of-its-kind operation to remove those backdoors from hundreds of Exchange servers at state and local government facilities.

Many of the recommendations in the Ransomware Task Force report are what you might expect, such as encouraging voluntary information sharing on ransomware attacks; launching public awareness campaigns on ransomware threats; exerting pressure on countries that operate as safe havens for ransomware operators; and incentivizing the adoption of security best practices through tax breaks.

A few of the more interesting recommendations (at least to me) included:

-Limit legal liability for ISPs that act in good faith trying to help clients secure their systems.

-Create a federal “cyber response and recovery fund” to help state and local governments or critical infrastructure companies respond to ransomware attacks.

-Require cryptocurrency exchanges to follow the same “know your customer” (KYC) and anti-money laundering rules as financial institutions, and aggressively target exchanges that do not.

-Have insurance companies measure and assert their aggregated ransomware losses and establish a common “war chest” subrogation fund “to evaluate and pursue strategies aimed at restitution, recovery, or civil asset seizures, on behalf of victims and in conjunction with law enforcement efforts.”

-Centralize expertise in cryptocurrency seizure, and scale up criminal seizure processes.

-Create a standard format for reporting ransomware incidents.

-Establish a ransomware incident response network.

Worse Than Failure: CodeSOD: Secure By Design

Many years ago, I worked for a company that mandated that information like user credentials should never be stored "as plain text". It had to be "encoded". One of the internally-developed HR applications interpreted this as "base64 is a kind of encoding", and stored usernames and passwords in base64 encoding.

Steven recently encountered a… similar situation. Specifically, his company upgraded their ERP system, and reports that used to output taxpayer ID numbers now output ~201~201~210~203~… or similar values. He checked the data dictionary for the application, and saw that the taxpayer_id field stored “encrypted” values. Clearly, this data isn’t really encrypted.

Steven didn't have access to the front-end code that "decrypted" this data. The reports were written in SSRS, which allows Visual Basic to script extensions. So, with an understanding of what taxpayer IDs should look like, Steven was able to "fix" the reports by adding this function:

public function ConvertTaxID(tax_id as string) as string
    dim splitchar as char = "~"
    dim splits() as string
    splits = tax_id.split(splitchar)
    dim i as integer
    for i = splits.length-1 to 0 step -1
        if isnumeric(splits(i)) then
            ConvertTaxID = ConvertTaxID & CHR(splits(i) - 125)
        end if
    next i
end function

We can now understand the "encryption" algorithm by understanding the decryption.

~ acts as a character separator, and each character is stored as its numeric ASCII representation, with a value added to it, which Steven undoes by subtracting the same value. To make this basic shift cypher more "secure", it's also reversed.
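For illustration only, here is a hypothetical encoder that would produce values the ConvertTaxID function above can decode. It is inferred from the decoder, not taken from the ERP vendor's code, and the function name is invented:

public function EncodeTaxID(tax_id as string) as string
    ' walk the plain ID back to front, so the decoder's reverse loop restores the original order
    dim i as integer
    for i = tax_id.length - 1 to 0 step -1
        ' emit the "~" separator, then the character's ASCII code shifted up by 125
        EncodeTaxID = EncodeTaxID & "~" & (asc(tax_id(i)) + 125)
    next i
end function

Feeding "123" through this sketch yields ~176~175~174, which the decoder above turns back into "123".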

Steven adds:

Normally, this software is pretty solid, but this was one case where I was left wondering who got encryption advice from their 6 year old…

Sure, this is certainly an elementary-school-level encryption algorithm, but could a six-year-old have reverse engineered it? Of course not! So this is very secure, if your attacker is a six-year-old.


Planet Debian: Junichi Uekawa: Setting wake-on-lan in Debian way.

Setting wake-on-lan in Debian way. There are several ways your network interfaces can be configured. The Debian way is to use ifup/ifdown. Make sure your network is actually managed by it by checking with ifquery; nmcli d and networkctl list are the NetworkManager and systemd-networkd equivalents. Once you know which tool is managing your device you can go ahead and set up the WoL configuration appropriately. A default Debian installation will probably start with an ifup/ifdown config.
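As a minimal sketch (not from the original post), this is what that could look like for an ifup/ifdown-managed interface; the interface name eth0 and the DHCP addressing are placeholder assumptions, and it relies on the ethtool utility being installed:

# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet dhcp
    # re-enable magic-packet wake-on-lan every time the interface is brought up
    pre-up ethtool -s eth0 wol g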

Planet Debian: Norbert Preining: In memoriam of Areeb Jamal

We lost one of our friends and core developers of the FOSSASIA community. An extremely sad day.

We miss you.

Planet Debian: Junichi Uekawa: Surrounding a region with Emacs lisp.

Surrounding a region with Emacs lisp. I wanted to surround a region with HTML tags and here's what I learnt today. Specifying "r" in interactive gives two numbers, begin and end. When I want to obtain multiple kinds of values in interactive, I can use newlines as delimiters. set-marker is an API for keeping a marker at its relative position even after edits; you need make-marker to create an empty marker first, and since the number of live markers seems to affect editing speed, the API is designed to allow reusing markers. After I got these going I could write what I wanted.
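A minimal sketch of the kind of command those pieces make possible (not the author's actual code; the function name and the tag prompt are invented for illustration):

(defun my-wrap-region-with-tag (begin end tag)
  "Surround the region BEGIN..END with <TAG>...</TAG>."
  (interactive "r\nsTag: ")          ; "r" yields the two region positions; "\n" delimits the next spec
  (let ((end-marker (make-marker)))  ; create an empty marker first...
    (set-marker end-marker end)      ; ...then point it at END so the insertion below keeps it in place
    (goto-char begin)
    (insert "<" tag ">")
    (goto-char end-marker)
    (insert "</" tag ">")
    (set-marker end-marker nil)))    ; release the marker, since live markers can slow editing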


Krebs on Security: Experian API Exposed Credit Scores of Most Americans

Big-three consumer credit bureau Experian just fixed a weakness with a partner website that let anyone look up the credit score of tens of millions of Americans just by supplying their name and mailing address, KrebsOnSecurity has learned. Experian says it has plugged the data leak, but the researcher who reported the finding says he fears the same weakness may be present at countless other lending websites that work with the credit bureau.

Bill Demirkapi, an independent security researcher who’s currently a sophomore at the Rochester Institute of Technology, said he discovered the data exposure while shopping around for student loan vendors online.

Demirkapi encountered one lender’s site that offered to check his loan eligibility by entering his name, address and date of birth. Peering at the code behind this lookup page, he was able to see it invoked an Experian Application Programming Interface or API — a capability that allows lenders to automate queries for FICO credit scores from the credit bureau.

“No one should be able to perform an Experian credit check with only publicly available information,” Demirkapi said. “Experian should mandate non-public information for promotional inquiries, otherwise an attacker who found a single vulnerability in a vendor could easily abuse Experian’s system.”

Demirkapi found the Experian API could be accessed directly without any sort of authentication, and that entering all zeros in the “date of birth” field let him then pull a person’s credit score. He even built a handy command-line tool to automate the lookups, which he dubbed “Bill’s Cool Credit Score Lookup Utility.”

Demirkapi’s Experian credit score lookup tool.

KrebsOnSecurity put that tool to the test, asking permission from a friend to have Demirkapi look up their credit score. The friend agreed and said he would pull his score from Experian (at this point I hadn’t told him that Experian was involved). The score he provided matched the score returned by Demirkapi’s lookup tool.

In addition to credit scores, the Experian API returns for each consumer up to four “risk factors,” indicators that might help explain why a person’s score is not higher.

For example, in my friend’s case Bill’s tool said his mid-700s score could be better if the proportion of balances to credit limits was lower, and if he didn’t owe so much on revolving credit accounts.

“Too many consumer finance company accounts,” the API concluded about my friend’s score.

The reason I could not test Demirkapi’s findings on my own credit score is that we have a security freeze on our files at the three major consumer credit reporting bureaus, and a freeze blocks this particular API from pulling the information.

Demirkapi declined to share with Experian the name of the lender or the website where the API was exposed. He refused because he said he suspects there may be hundreds or even thousands of companies using the same API, and that many of those lenders could be similarly leaking access to Experian’s consumer data.

“If we let them know about the specific endpoint, they can just ban/work with the loan vendor to block these requests on this one case, which doesn’t fix the systemic problem,” he explained.

Nevertheless, after being contacted by this reporter Experian figured out on its own which lender was exposing their API; Demirkapi said that vendor’s site now indicates the API access has been disabled.

“We have been able to confirm a single instance of where this situation has occurred and have taken steps to alert our partner and resolve the matter,” Experian said in a written statement. “While the situation did not implicate or compromise any of Experian’s systems, we take this matter very seriously. Data security has always been, and always will be, our highest priority.”

Demirkapi said he’s disappointed that Experian did exactly what he feared they would do.

“They found one endpoint I was using and sent it into maintenance mode,” he said. “But this doesn’t address the systemic issue at all.”

Leaky and poorly-secured APIs like the one Demirkapi found are the source of much mischief in the hands of identity thieves. Earlier this month, auto insurance giant Geico disclosed that fraudsters abused a bug in its site to steal driver’s license numbers from Americans.

Geico said the data was used by thieves involved in fraudulently applying for unemployment insurance benefits. Many states now require driver’s license numbers as a way of verifying an applicant’s identity.

In 2013, KrebsOnSecurity broke the news about an identity theft service in the underground that programmatically pulled sensitive consumer credit data directly from a subsidiary of Experian. That service was run by a Vietnamese hacker who’d told the Experian subsidiary he was a private investigator. The U.S. Secret Service later said the ID theft service “caused more material financial harm to more Americans than any other.”

Additional reading: Experian’s Credit Freeze Security is Still a Joke (Apr. 27, 2021)

Cryptogram: Second Click Here to Kill Everybody Sale

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

I have 600 copies of the book available. When they’re gone, the sale is over and the price will revert to normal.

Order here.

Please be patient on delivery. It’s a lot of work to sign and mail hundreds of books. I try to do some each day, but sometimes I can’t. And the pandemic can cause mail slowdowns all over the world.

Cryptogram: Identifying People Through Lack of Cell Phone Use

In this entertaining story of French serial criminal Rédoine Faïd and his jailbreaking ways, there’s this bit about cell phone surveillance:

After Faïd’s helicopter breakout, 3,000 police officers took part in the manhunt. According to the 2019 documentary La Traque de Rédoine Faïd, detective units scoured records of cell phones used during his escape, isolating a handful of numbers active at the time that went silent shortly thereafter.

Planet Debian: Jonathan McDowell: DeskPi Pro update

I wrote previously about my DeskPi Pro + 8GB Pi 4 setup. My main complaint at the time was that one of the forward-facing USB ports broke off early on in my testing. For day-to-day use that hasn’t been a problem, but it did mar the whole experience. Last week I received an unexpected email telling me “The new updated PCB Board for your DeskPi order was shipped.” Apparently this was due to problems with identifying SSDs and with WiFi/HDMI issues. I wasn’t quite sure how much of the internals they’d be replacing, so I was pleasantly surprised when it turned out to be most of them, including the PCB with the broken USB port on my device.

DeskPi Pro replacement PCB

They also provided a set of feet allowing for vertical mounting of the device, which was a nice touch.

The USB/SATA bridge chip in use has changed; the original was:

usb 2-1: New USB device found, idVendor=152d, idProduct=0562, bcdDevice= 1.09
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: RPi_SSD
usb 2-1: Manufacturer: 52Pi
usb 2-1: SerialNumber: DD5641988389F

and the new one is:

usb 2-1: New USB device found, idVendor=174c, idProduct=1153, bcdDevice= 0.01
usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
usb 2-1: Product: AS2115
usb 2-1: Manufacturer: ASMedia
usb 2-1: SerialNumber: 00000000000000000000

That’s a move from a JMicron 6Gb/s bridge to an ASMedia 3Gb/s bridge. It seems there are compatibility issues with the JMicron that mean the downgrade is the preferred choice. I haven’t retried the original SSD I wanted to use (that wasn’t detected), but I did wonder if this might have resolved that issue too.

Replacing the PCB was easier than the original install; everything was provided pre-assembled and I just had to unscrew the Pi4 and slot it out, then screw it into the new PCB assembly. Everything booted fine without the need for any configuration tweaks. Nice and dull. I’ve tried plugging things into the new USB ports and they seem ok so far as well.

However I also then ended up pulling in a new backports kernel from Debian (upgrading from 5.9 to 5.10) which resulted in a failure to boot. The kernel and initramfs were loaded fine, but no login prompt ever appeared. Some digging led to the discovery that a change in boot ordering meant USB was not being enabled. The solution is to add reset_raspberrypi to the /etc/initramfs-tools/modules file - that way this module is available in the initramfs, the appropriate pre-USB reset can happen and everything works just fine again.
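If you need to apply the same fix, a quick sketch (assuming root privileges; reset_raspberrypi is the module named above) is to append the module and then regenerate the initramfs so the change actually takes effect:

echo reset_raspberrypi >> /etc/initramfs-tools/modules   # make the module available in the initramfs
update-initramfs -u                                      # rebuild the current initramfs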

The other niggle with the new kernel was a regular set of errors in the kernel log:

mmc1: Timeout waiting for hardware cmd interrupt.
mmc1: sdhci: ============ SDHCI REGISTER DUMP ===========

and a set of registers afterwards, roughly every 10s or so. This seems to be fallout from an increase in the core clock due to the VC4 driver now being enabled, the fact I have no SD card in the device, and the lack of a working card-detect line for the MicroSD slot. There’s a GitHub issue, but I solved it by removing the sdhci_iproc module for now - I’m not using the wifi, so the loss of MMC isn’t a problem.

Credit to DeskPi for how they handled this. I didn’t have to do anything and didn’t even realise anything was happening until I got the email with my tracking number and a description of what they were sending out in it. Delivery took less than a week. This is a great example of how to handle a product issue - no effort required on the part of the customer.

LongNow: Stewart Brand and Brian Eno on “We Are As Gods”

In March 02021, We Are As Gods, the documentary about Long Now co-founder Stewart Brand, premiered at SXSW. As part of the premiere, the documentary’s directors, David Alvarado and Jason Sussberg, hosted a conversation between Brand and fellow Long Now co-founder Brian Eno. (Eno scored the film, contributing 24 original tracks to the soundtrack.) The full conversation can be watched above. A transcript follows below.  

David Alvarado: Hi. My name is David Alvarado. I’m one of the directors for a new documentary film called We Are as Gods. This is a documentary feature that explores the extraordinary life of a radical thinker, environmentalist, and controversial technologist, Stewart Brand. This is a story that marries psychedelia, counterculture, futurism. It’s an unexpected journey of a complicated American polymath at the vanguard of our culture.

Today, we’re having a conversation with the subject of the film himself, Stewart Brand, and Brian Eno.

Jason Sussberg: Okay. In the unlikely event that you don’t know either of our two speakers, allow me to introduce them. First off, we have Brian Eno, who’s a musician, a producer, a visual artist and an activist. He is the founding member of the Long Now Foundation, along with Stewart Brand. He’s a musician of multiple albums, solo and collaborative. His latest album is called Film Music 1976-2020, which was released a few months ago, and we are lucky bastards because it includes a song from our film, We Are as Gods, called “A Reasonable Question.”

Stewart Brand, he is the subject of our documentary. Somewhere, long ago, I read a description of Stewart saying that he was “a finder and a founder,” which I think is a really apt way to talk about him. He finds tools, peoples, and ideas, and blends them together. He founded or co-founded Revive and Restore, The Long Now Foundation, The WELL, Global Business Network, and the Whole Earth Catalog and all of its offshoots. He is an author of multiple books, and he’s currently working on a new book called Maintenance. He’s a trained ecologist at Stanford and served as an infantry officer in the Army. I will let Stewart and Brian take it from here.

Stewart Brand: Brian, what a pleasure to be talking to you. I just love this.

Brian Eno: Yes.

Stewart Brand: You and I go back a long way. I was a fan before I was a friend, and so I continue to be a fan. I’m a fan of the music that you added to this film. I’m curious about particularly the one that is in your new album, Film Music. What’s it called…”[A] Reasonable Question.” Tell me what you remember about that piece, and I want to ask the makers of the film here what it was like from their end.

Jason Sussberg: We can play it for our audience now.

David Alvarado: You originally titled it “Why Does Music Like This Exist?”

Brian Eno: The reason it had that original title, “Why Does Music Like This Even Exist?”, was because it was one of those nights when I was in a mood of complete desperation, and thinking, “What am I doing? Is it of any use whatsoever?” I’ve learned to completely distrust my moods when I’m working on music. I could think something is fantastic, and then realize a few months later that it’s terrible, and vice versa. So what I do is I routinely mix everything that I ever work on, because I just don’t trust my judgment at the moment of working on it. That piece, the desperation I felt about it is reflected in the original title, “Why Does Music Like This Even Exist?” I was thinking, “God, this is so uninteresting. I’ve done this kind of thing a thousand times before.”

In fact, it was only when we started looking for pieces for this film…the way I look for things is just by putting my archive on random shuffle, and then doing the cleaning or washing up or tidying up books or things like that. So I just hear pieces appear. I often don’t remember them at first. I don’t remember when I did them. Anyway, this piece came up. I thought, “Oh. That’s quite a good piece.”

David Alvarado: I mean, that’s so brilliant because it’s actually… We weren’t involved, obviously, in choosing what music tracks you wanted to use for your 1976 to 2020 film album, and so you chose that one, the very one that you weren’t liking at the beginning. That’s just incredible.

Brian Eno: Yes. Well, this has happened now so many times that I think one’s judgment at the time of working has very little to do with the quality of what you’re making. It’s just to do with your mood at that moment.

Stewart Brand: So in this case, Brian, that piece is kind of joyous and exciting to hear. These guys put it in a part of the film where I’m at my best, I’m actually part of a real frontier happening. This must be a first for you, in a sense, you’re not only scoring the film, you’re in the film. This piece of film, I now realize as we listened to it, then cuts into you talking about me, but not about the music. You had no idea when they were interviewing you it was going to be overlaid on this. I sort of have to applaud these guys for not getting cute there and drowning you out with your own music there or something. “Yeah, well, he is chatting on, but let’s listen to the music.” But nevertheless, it really works in there. Do you like how it worked out in the film?

Brian Eno: Yes. Yes, I do. I like that, and quite a few of the other pieces appeared probably in places that I wouldn’t have imagined putting them, actually. This, I think, is one of the exciting things about doing film music, that you hear the music differently when you see it placed in a context. Just like music can modify a film, the film can modify the music as well. So sometimes you see the music and you think, “Oh, yes. They’ve spotted a feeling in that that I didn’t, or I hadn’t articulated anyway, I wasn’t aware of, perhaps.”

Stewart Brand: You’ve done a lot of, and the album shows it, you’ve done a lot of music for film. Are there sort of rules in your mind of how you do that? It’s different than ambient music, I guess, but there must be sort of criteria of, “Oh yeah, this is for a film, therefore X.” Are there things that you don’t do in film music?

Brian Eno: Yes. I’ll tell you what the relationship is with ambient music. Both ambient music and most of the film music I make deliberately leaves a space where somebody else might fill that space in with a lead instrument or something that is telling a story, something narrative, if you like. Even if it’s instrumental, it can still be narrative in the sense that you get the idea that this thing is the central element, which is having the adventure, and the rest is a sort of support structure to that or a landscape for that.

So what I realized, one of the things I liked about film music was that you very often just got landscape, which wasn’t populated, because the film is meant to be the thing that populates the landscape, if you like. I started listening to film music probably in the late ’60s, and it was Italian, like Nino Rota and Ennio Morricone and those kinds of people, who were writing very, very atmospheric music, which sort of lacked a central presence. I like that hole that was left, because I found the hole very inviting. It kind of says, “Come on, you be the adventurer. You, the listener, you’re in this landscape, what’s happening to you?” It’s a deliberate incompleteness, in a way, or an unfinishedness that that music has. I think that was part of the idea of ambient music as well, to try to make something that didn’t try to fix your attention, to hold it and keep it in one place, that deliberately allowed it to wander around and have a look around. So this happens to be a good formula for film music.

I really started making film music in a strange way. I used to, when I was working on my early song albums, sometimes at the end of the day I’d have half an hour left and I’d have a track up on a multi-track tape, with all the different instruments, and I’d say to the engineer, “Let’s make the film music version now.” And what that normally meant was take out the main instruments, the voice, particularly the voice, and then other things that were sort of leading the piece. Take those all out, slow the tape down, often, to half speed, and see what we can do with what’s left. Actually, I often found those parts of the day more exciting than the rest of the day, when suddenly something came into existence that nobody had ever thought about before. That was sort of how I started making film music.

So I had collected up a lot of pieces like that, and I thought, “Do you know what, I should send these to film directors. They might find a use for these.” And indeed they did. So that’s how it started, really.

Stewart Brand: So you initiated that, the filmmakers did not come to you.

Brian Eno: No. I had been approached only once before. Actually, before I ever made any albums I’d been approached by a filmmaker to do a piece of music for him, but other than that, no, I didn’t have any approaches. I sort of got the ball rolling by saying, “Look, I’m doing this kind of music, and I think it would be good for films.” So I released an album which was called Music for Films, though in fact none of the music had been in films. It was a sort of proposal: this is music that could be in films. I just left out the could be.

Stewart Brand: You are a very good marketer of your product, I must say. That’s just neat. So from graphic designers, the idea of figure-ground, and sometimes they flip and things like that, that’s all very interesting. It sounds like in a way this is music which is all ground, but invites a figure.

Brian Eno: Yes, yes.

Stewart Brand: You’re a graphic artist originally, is that right?

Brian Eno: Well, I was trained as a fine artist, actually. I was trained as a painter. Well, when I say I was trained, I went to an art school which claimed it was teaching a fine art course, so I did painting and sculpture. But actually I did as much music there as I did visual art as well.

Stewart Brand: So it’s an art school, and you were doing music. Were other people in that school doing music at that time, or is that unique to you?

Brian Eno: No, that was in the ’60s. The art schools were the crucible of a lot of what happened in pop music at that time. And funnily enough, also the art schools were where experimental composers would find an audience. The music schools were absolutely uninterested in them. Music schools were very, very academic at that time. People had just started, I was one of the pioneers of this, I suppose, had just started making music in studios. So instead of sitting down with a guitar and writing something and then going into the studio to record it, people like me were going into studios to make something using the possibilities of that place, something that you couldn’t have made otherwise. You wouldn’t come up with a guitar or a piano. A sort of whole new era of music came out of that, really. But it really came out of this possibility of multi-track recording.

Stewart Brand: So this is pre-digital? You’re basically working with the tapes and mixing tapes, or what?

Brian Eno: This was late ’60s, early ’70s. What had happened was that until about 01968, the maximum number of tracks you had was four tracks. I think people went four-track in 01968. I think the last Beatles album was done on four track, which was considered incredibly luxurious. What that meant, four tracks, was that you could do something on one track, something on another, mix them down to one track so you still got one track and then three others left, then you could kind of build things up slowly and carefully.

Over time, so, it meant something different musically, because it separated music from performance. It made music much more like painting, in that you could add something one day and take it off the next day, add something else. The act of making music extended in time like the act of painting does. You didn’t have to just walk in front of the canvas and do it all in one go, which was how music had previously been recorded. That meant that recording studios were something that painting students immediately understood, because they understood that process. But music students didn’t. They still thought it had to be about performance. In fact, there was a lot of resistance from musicians in general, because they thought that it was cheating, it wasn’t fair you were doing these things. You couldn’t actually play them. Of course, I thought, “Well, who cares? It doesn’t really matter, does it? What matters is what comes out at the end.”

Stewart Brand: Well, I was doing a little bit of music, well, sort of background stuff or putting together things for art installations at that time, and what I well remember is fucking razor blade, where you’re cutting the tape and splicing it, doing all these things. It was pretty raw. But of course, the film guys are going through the same stuff at that time. They were with their razor blade equivalents, cutting and splicing and whatnotting. So digital has just exploded the range of possibilities, which I think I’ve heard some of your theory that exploded them too far, and you’re always looking for ways to restrain your possibilities when you’re composing. Is that right?

Brian Eno: Yes. Well, I suppose it’s a problem that everybody has now, when you think about it. Now, we’re all faced with a whole universe of rabbit holes that we could spend our time disappearing down. So you have to permanently be a curator, don’t you think? You have to be always thinking, “Okay. There’s a million interesting things out there, but I’d like to get something done, so how am I going to reduce that variety and choose a path to follow?”

Stewart Brand: How much of that process is intention and how much is discovery?

Brian Eno: I think the thing that decides that is whether you’ve got a deadline or not. The most important element in my working life, a lot of the time, is a deadline. The reason it’s important… Well, I’m sure as a writer you probably appreciate deadlines as well. It makes you realize you’ve got to stop pissing around. You have to finally decide on something. So the archive of music that I have now, which is to say after those days of fiddling around like I’ve described with that piece, I’d make a rough mix, they go into the archive — I’ve got 6,790 pieces in the archive now, I noticed today. They’re nearly all unfinished. They’re sort of provocative beginnings. They’re interesting openings. When I get a job like the job of doing this film music, I think, “Okay. I need some music.” So I naturally go to the archive and see what I’ve already started which might be possible to finish as the piece for this film, for example.

So whether I finish something or not completely depends really on whether it has a destination and a deadline. If it’s got a destination, that really helps, because I think, “Okay. It’s not going to be something like that. It’s not going to be that.” It just clears a lot of those possibilities which are amplifying every day. They’re multiplying every day, these possibilities. 

Stewart Brand: One thing that surprised me about your work on this film, is I thought you would have just handed them a handful of cool things and they would then turn it into the right background at the right place from their standpoint. But it sounds like there was interaction, Jason and David, between you and Brian on some of these cuts. What do you want to say about that?

Jason Sussberg: Yeah. I mean, we had an amazing selection of great tracks to plug in and see if they could help amplify the scene visually by giving it a sonic landscape that we could work with. Then, our initial thinking was that’s how we were going to work. But then we ended up going back to you, Brian, and asking for perhaps a different track or a different tone. And then you ended up, actually, making entirely new original music, to our great delight. So one day when we woke up and we had in our inbox original music that you scored specifically for scenes, that was a great delight. We were able to have a back and forth.

Brian Eno: Yes, that’s-

Stewart Brand: Were you giving him visual scenes or just descriptions?

Jason Sussberg: Right. Actually, what we did was we pulled together descriptions of the scenes and then we had… You just wanted, Brian, just a handful of photographs to kind of grok what we were doing. I don’t think you… Maybe you could talk about why you didn’t want the actual scene, but you had a handful of stills and a description of what we were going for tonally, and then you took it from there. What we got back was both surprising and made perfect sense every time.

Brian Eno: I remember one piece in particular that I made in relation to a description and some photographs, which was called, when I made it, it was called “Brand Ostinato.” I don’t know what it became. You’d have to look up your notes to see what title it finally took. But that piece, I was very pleased with. I wanted something that was really dynamic and fresh and bracing, made you sort of stand up. So I was pleased with that one.

But I usually don’t want to see too much of the film, because one of the things I think that music can do is to not just enhance what is already there in the film, which is what most American soundtrack writing is about… Most Hollywood writing is about underlining, about saying, “Oh, this is a sad scene. We’ll make it a little sadder with some music.” Or, “This is an action scene. We’ll give it a little bit more action.” As if the audience is a bit stupid and has to be told, “This is a sad scene. You’re supposed to feel a bit weepy now.” Whereas I thought the other day, what I like better than underlining is undermining. I like this idea of making something that isn’t really quite in the film. It’s a flavor or a taste that you can point to, and people say, “Oh, yes. There’s something different going on there.”

I mean, it would be very easy with Stewart to make music that was kind of epic and, I don’t know, Western or American or Californian or something like that. There are some obvious things you could do. If you were that kind of composer, you’d carefully study Stewart and you’d find things that were Stewart-ish in music and make them. But I thought, “No. What is exciting about this is the shock of the new kind of feeling.” That piece, that particular piece, “Brand Ostinato,” has that feeling, I think, of something that is very strikingly upright and disciplined. This discipline, that’s I think the feeling of it that I like. I don’t think, in that particular part in the film, where that occurs, I don’t think that’s a scene where you would see discipline, unless somebody had suggested it to you by way of a piece of music, for example.

Stewart Brand: And Jason, did you in fact use that piece of music with that part of the film?

Jason Sussberg: Yeah, I don’t think it was exactly where Brian had intended to put it, but hearing the description, what we did was we put that song in a scene where you are going to George Church’s lab, Stewart, and we’re trying to build up George Church as this genius geneticist. So the song was actually, curiously, written about Stewart and Stewart’s character of discipline, but we apply it to another character in the film. However, what you were going for, which is this upright, adventurous, Western spirit, I think is embodied by the work of the Church Lab to de-extinct animals. So it has that same bravado and gusto that you intended, it was just we kind of… And maybe this is what you were referring to about undermining and underlining, I feel like we kind of undermined your original intention and applied it to a different character, and that dialectic was working. Of course, Stewart is in that scene, but I think that song, that track really amplifies the mood that we were going for, which is the end of the first act.

Brian Eno: Usually, when people do music that is about cutting edge science, it’s all very drifty and cosmic. It’s all kind of, “Wow, it’s so weird,” kind of thing. I really wanted to say science is about discipline, actually. It’s about doing things well and doing things right. It’s not hippie-trippy. Of course, you can feel that way about it once it’s done, but I don’t think you do it that way. So I didn’t want to go the trippy route.

David Alvarado: Yeah. We loved it. It still is the anthem of the film for us. I mean, you named it as such, but it just really feels like it embodies Stewart’s quest on all his amazing adventures he’s been on. So that’s fantastic.

Brian Eno: One of the things that is actually really touching about this film is the early life stuff, which of course I never knew anything about. As women always say, “Well, men never ask that sort of question, do they?” And in fact, in my case it’s completely true. I never bothered to ask people how they got going or that kind of autobiographical question. But what strikes me, first of all, your father was quite an important part of the story. I got the feeling that quite a lot of the character that is described in there is attributed to your father has come right through to you as well, this respect for tools and for making things, which is different from the intellectual respect for thinking about things. Often intellectuals respect other thinkers, but they don’t often respect makers in the same way. So, I wonder when you started to become aware that there could be an overlap between those two things, that there was a you that was a making you and there was a thinking you as well? I wonder if there was a point where those two sort of came together for you, in your early life.

Stewart Brand: Well, you’re pointing out something that I hadn’t really noticed as well, frankly, until the film, which is what I remember is that my father was sort of ground and my mother was figure. She was the big event. She got me completely buried in books and thinking, and she was a liberal. I never did learn what my father’s politics were, but they’re probably pretty conservative. He tried to teach me to fish and he was a really desperately awful teacher. He once taught a class of potential MIT students, he failed every one of them. My older brother Mike said, “Why did you do that?” And he said, “Well, they just did not learn the material. They didn’t make it.” And my brother actually said, “You don’t think that says anything about you as their teacher?”

So I kind of discounted —  as I’m making youthful, stupid judgments — him. I think what you pointed out is a very good one. He was trained as a civil engineer at MIT. Another older brother, Pete, went to MIT. I later completely got embedded at MIT at The Media Lab and Negroponte and all of that. In a way I feel more identified with MIT than I do with Stanford where I did graduate. In Stanford I took as many humanities as I could with a science major.

But I think it’s also something that happened with the ’60s, Brian, which is that what we were dropping out of — late beatniks, early hippies, which is my generation — was a construct that universities were imparting, and I imagine British universities have a slightly different version of this than American ones, but still, the Ivy League-type ones. I remember one of the eventual sayings of the hippies was “back to basics,” which we translated as “back to the land,” which turned out to be a mistake, but the back to basics part was pretty good. We had this idea, we were immediately followed by the baby boom. It was the bulge in the snake, the pig in the python. There were so many of us that the world was always asking us our opinion of things, which we wind up taking for granted. You could, as a young person, you could just call a press conference. “I’m a young person. I want to expound some ideas.” And they would show up and write it all seriously down. The Beatles ran into this. It was just hysterical. Pretty soon you start having opinions. 

We were getting Volkswagen Bugs and vans. This is in my mind now because I’m working on this book about maintenance. We were learning how to fix our own cars. Partly it was the either having no money or pretending to have no money, which, by the way, that was me. It turned out I actually had a fair amount, I just ignored it, that my parents had invested in my name. We were eating out of and exploring and finding amazing things basically in garbage cans and debris boxes. Learning how to cook and eat roadkill and make clothing and domes and all these things. This was something that Peter Drucker noticed about that generation, that they were the first set of creatives that took not just art but also in a sense craft and just stuff seriously, and learned… Mostly we were making mistakes with the stuff, but then you either just backed away from it or you learned how to do it decently after all and become a great guitar maker or whatever it might be. That was what the Whole Earth Catalog tapped into, was that desire to not just make your own life, but make your own world.

Brian Eno: I’m trying to think… In my own life, I can remember some games I played as kids that I made up myself. I realized that they were really the first creative things that I ever did. I invented these games. I won’t bother to explain them, they were pretty simple, but I can remember the excitement of having thought of it myself, and thinking, “I made this. I made this idea myself.” I was sort of intrigued by it. I just wondered if there was a moment in your life when you had that feeling of, “This is the pleasure of thinking, the pleasure of coming up with something that didn’t exist before”?

Stewart Brand: There was one and it’s very well expressed in the film, which was the Trips Festival in January 01966. That was the first time that I took charge over something. I’d been going along with Ken Kesey and the Pranksters. I’d been going along with various creative people, USCO, a group of artists on the East Coast, and contributing but not leading. Once I heard from one of the Pranksters, Mike Hagen, that they wanted to do a thing that would be a Trips Festival, kind of an acid test for the whole Bay Area. I knew that they could not pull that off, but that it should happen. I picked up the phone and I started making arrangements for this public event.

And it worked out great. We were lucky in all the ways that you can be lucky in, and not unlucky in any of the ways you can be unlucky. It was a coup. It was a lot of being a tour de force, not by me, but by basically the Bay Area creatives getting together in one place and changing each other and the world. That was the point for me that I had really given myself agency to drive things.

There’s other things that give you reality in the world. Also in the film is when I appeared on the Dick Cavett Show.

Brian Eno: Oh, yes.

Stewart Brand: Which was a strange event for all of us. But the effect it had in my family was that… My father was dead by then, but my mother had always been sort of treating me as the youngest child, needing help. She would send money from time to time, keep me going in North Beach. But once I was on Dick Cavett, which she regularly watched, I had grown up in her eyes. I was now an adult. I should be treated as a peer.

Brian Eno: So no more money.

Stewart Brand: Well… yeah, yeah. Did that ever happen? I think she sort of liked occasionally keeping a token of dependency going. She was very generous with the money.

The great thing of being a hippie is you didn’t need much. I was not an expensive dependent. That was, I think, another thing there that the hippies weren’t, and that makes us freer about being wealthy or not, is that we’ve had perfectly good lives without much money at all. So the money is kind of an interesting new thing that you can get fucked up by or do creatively or just ignore. But you have those choices in a way, I think, that people who are either born to money or who are getting rich young don’t have. They have other interesting situations to deal with. For us, the discipline was not enough money, and for some of them the discipline is too much money, and how do you keep that from killing you.

Brian Eno: Yes. Yeah. I’ll ask the filmmakers a question as well, if I may. It’s a very simple question, but it isn’t actually answered in the film. The question is: why Stewart? Why did you choose to make a film about him? There are so many interesting people in North America, let alone in the West Coast, but what drew you to him in particular?

Jason Sussberg: I’ll answer this, and then I’ll let you take a swipe at this, David. I mean, I’ve always looked up to Stewart from the time that I ran into an old Whole Earth Catalog. It was the Last Whole Earth Catalog, when I was 18 years old, going to college in the year 02000. So this was 25 years after it was written. I sort of dove into it head first and realized this strange artifact from the past actually was a representation of possibilities, a representation of the future. So after that moment, I read a book of Stewart’s that just came out, about the Clock of the Long Now, and after that… I’ve always been an environmentalist and Earth consciousness and trying to think about how to preserve the natural world, but also I believe in technology as a hopeful future that we can have. We can use tools to create a more sustainable world. So Stewart was able to blend these two ideas in a way that seemed uncontroversial, and it really resonated with me as a fan of science and technology and the natural world. So Stewart, pretty much from an early age, was someone I always looked up to.

When David and I went to grad school, we were talking about the problems of the environmental movement, and Stewart was at the time writing a book that would basically later articulate these ideas.

Brian Eno: Oh, yes, good.

Jason Sussberg: And so when that book came out, it was like it just put our foot on the pedals, like, “Wow, we should make a movie of Stewart and his perspective.” But yeah, I was just always a fan of his.

Brian Eno: So that was quite a long time ago, then.

Jason Sussberg: Yeah, 10 years-

Brian Eno: Is that when you started thinking about it?

Jason Sussberg: Yeah, absolutely. I had made a short film of a friend of probably yours, Brian, and of Stewart’s, Lloyd Kahn. It was a short little eight-minute documentary about Lloyd Kahn and how he thought of shelter and of home construction. That was after that moment that I thought, “This is a really rich territory to explore.” I think that actually was 02008, so at that moment I already had the inkling of, wow, this would be a fantastic biographical documentary that nobody had made.

Stewart Brand: I’m curious, what’s David’s interest?

David Alvarado: Yeah, well, I think Jason and I are drawn to complicated stories, and my god, Stewart. There was a moment in college when I almost stopped becoming a filmmaker and wanted to become a geologist. I just was so fascinated by the complexity of looking at the land, being able to read the stratigraphy, for example, of a cliff and understand deep history of how that relates to what the land looks like now. So, I of course came back into film, but I see a lot of that there in your life. I mean, the layers of what you’ve done… The top layer for us is the de-extinction, the idea of resurrecting extinct species to reset ecosystems and repair damage that humans have caused. That could be its own subject, and if it’s all you did, that would be fascinating. But sitting right underneath that sits all these amazing things all the way back to the ’60s. So I think it’s just like my path as an artist to just dig through layers and, oh boy, your life was just full of it. It was a pleasure to be able to do that with you, so thank you for sharing your life with us.

Stewart Brand: Well, thank you for packaging my life for me. As Kevin Kelly says, the movie that you put out is sort of a trailer for the whole body of stuff that you’ve got. But by going through that process with you, and for example digitizing all of my tens of thousands of photographs, and then the interviews and the shooting in various places and having the adventure in Siberia and whatnot, but… When you get to the late 70s, Brian, and if you try to think of your life as an arc or a passage or story or a whole of any kind, it’s actually quite hard, because you’ve got these various telescopic views back to certain points, but they don’t link up. You don’t understand where you’ve been very well. It’s always a mishmash. With John Markoff also doing a book version of my life, it’s actually quite freeing for me to have that done. And Brian, this is where I wish Hermione Lee would do your biography. She would do you a great favor by just, “Here is everything you’ve done, and here is what it all means. My goodness, it’s quite interesting.” And then you don’t have to do that.

Brian Eno: Yeah, I’d be so grateful if she would do that, or if anybody would do that, yes.

Stewart Brand: It’s a real gift in that it’s also a really well done work of art. It has been just delightful for me. I think one of the things, Brian, it’ll be interesting to see which you see in this when you see the film more than once, or maybe you’ve already done so, is you’ve made a great expense of your time and effort, a re-watchable film. And Brian, the music is a big part of this. The music is blended in so much in a landscapy way, that except for a couple of places where it comes to the fore, like when I’m out in the canoe on Higgins Lake and you’re singing away, that it takes a re-listen, a re-viewing of the film to really start to get what the music is doing.

And then, you guys had such a wealth of material, both of my father’s amazing filmmaking and then from the wealth of photography I did, and then the wealth of stuff you found as archivists, I mean, the number of cuts in this film must be some kind of a record for a documentary, the number of images that go blasting by. So, instead of a gallery of photographs, it’s basically a gallery of contact sheets where you’re not looking at the shot I made of so-and-so, you’ve got all 10 of them, but sort of blinked together. That rewards re-viewing, because there’s a lot of stuff where things go by and you go, “Wait, what was that? Oh, no, there’s a new thing. Oh, what was that one? That one’s gone too.” They’re adding up. It’s a nice accumulative kind of drenching of the viewer in things that really rewards…

It’s one of the reasons that I think it’s actually going to do well on people’s video screenings, because they can stop it and go, “Wait a minute. What just happened?” And go back a couple of frames. Whereas in the theater, this is going to go blasting on by. Anyway, that’s my view, that this has been enjoyable to revisit.

Brian Eno: When you first watched… Well, I don’t know at which stage you first started watching what David and Jason had been doing, but were there any kind of nasty surprises, any places where you thought, “Oh god, I wish they hadn’t found that bit of film”?

David Alvarado: That’s a great question. Yeah.

Stewart Brand: Brian, the deal I sort of made with myself and with these guys, and that I made the same one with [John] Markoff, is it’s really their product. I’m delighted to be the raw material, but I won’t make any judgments about their judgments. When I think something is wrong, a photograph that depicts somebody that turns out not to be actually that person, I would speak up and I did do that. I’ve done much more of that sort of thing with Markoff in the book. But whenever there’s interpretation, that’s not my job. I have to flip into it, and it’s easy to be, when you both care about your life and you don’t care about your life, you would have this attitude too, of Brian Eno, yawn, been there done that, got sent a fucking T-shirt. So finding a way to not be bored about one’s life is actually kind of interesting, and that’s seeing through this refraction in a funhouse mirror, in a kaleidoscope of other people’s read, that makes it actually sort of enjoyable to engage.

Brian Eno: Yes. I think one of the things that’s interesting when you watch somebody else’s take on your life, somebody writes a biography of you or recounts back to you a period that you lived through, is that it makes you aware of how much you constructed the story that you hold yourself. You’ve got this kind of narrative, then I did this and then of course that led to that, and then I did that… And it all sort of makes sense when you tell the story, but when somebody else tells the story, it’s just like I was saying about conspiracy theories, to think that they can come up with a completely different story, and it’s actually equally plausible, and sometimes, frighteningly, even more plausible than the one you’ve been telling yourself.

Stewart Brand: Well, it gets stronger than that, because these are people who’ve done the research. So an example from the film is these guys really went through all my father’s film. There’s stuff in there I didn’t know about. There’s an incredibly sweet photograph of my young mother, my mother being young, and basically cradling the infant, me, and canoodling with me. I’d never seen that before. So I get a blast of, “Oh, mom, how great, thank you,” that I wouldn’t have gotten if they hadn’t done this research.

And lots of times, especially for Markoff’s research…So, Doug Engelbart and The Mother of All Demos, I have a story I’ve been telling for years to myself and to the world of how I got involved in being a sort of filmmaker within that project. It turned out I had just completely forgotten that I’d actually studied Doug Engelbart before any of that, and I was going to put him in an event I was going to organize called the Education Fair, and the whole theory of his approach, very humanist approach to computers and the use of computers, computers basically blending in to human collaboration, was something I got very early. And I did the Trips Festival and he sort of thought I was a showman and then they brought me on as the adviser to the actual production. But the genesis of the event, I’d been telling this wrong story for years. There’s quite a lot of that. As you say, I think our own view of ourselves becomes fiction very quickly.

Brian Eno: Yes. Yes. It’s partly because one wants to see a kind of linear progression and a causality. One doesn’t really want to admit that there was a lot of randomness in it, that if you’d taken that turning on the street that day, life would have panned out completely differently. That’s so disorientating, that thought, that we don’t tolerate it for long. We sort of patch it up to make the story hold together.

Stewart Brand: That’s what you’ll get from the Tom Stoppard biography. Remember that his first serious, well, popular play was Rosencrantz and Guildenstern Are Dead, and it starts with a flip of a coin. It turns out his own past of how he got from Singapore to India and things like that were just these kind of random war-related events that carved a path of chance, chance, chance, chance, that then informed his creative life for the rest of his life. There’s a book coming out from Daniel Kahneman called Noise, that Bachman and Kahneman and another guy have generated. It looks like it’s going to be fantastic. Basically, he’s going beyond Thinking Fast and Slow to…a whole lot of the data that science and our world and the mind deals with is this kind of randomized, stochastic noise, which we then interpret as signal. And it’s not. It’s hard to hold it in your mind, that randomness. It’s one of the things I appreciate from having studied evolution at an impressionable age, is that a lot of evolution is: randomness is not a bad thing that happens. Randomness is the most creative thing that happens.

Brian Eno: Yes. Well, we are born pattern recognizers. If we don’t find them, we’ll construct them. We take all the patterns that we recognize very seriously. We think that they are reality. But they aren’t necessarily exclusive. They’re not exclusive realities.

Jason Sussberg: All right. I hate to end it here. This discussion is really fascinating. We’re getting into some very heady philosophical ideas. But unfortunately, our time is short. So we have to bid both Stewart and Brian farewell. I encourage everybody to go watch the film We Are as Gods, if you haven’t already. Thank you so much for participating in this discussion.

David Alvarado: A special thanks to Stripe Press for helping make this film a reality. Thank you to you, the viewer, for watching, to Stewart for sharing your life, and to Brian for this amazing original score.

Brian Eno: Good. Well, good luck with it. I hope it does very well.

Planet DebianMartin Michlmayr: Research on FOSS foundations

I worked on research on FOSS foundations and published two reports:

Growing Open Source Projects with a Stable Foundation

This primer covers non-technical aspects that the majority of projects will have to consider at some point. It also explains how FOSS foundations can help projects grow and succeed.

This primer explains:

  • What issues and areas to consider
  • How other projects and foundations have approached these topics
  • What FOSS foundations bring to the table
  • How to choose a FOSS foundation

You can download Growing Open Source Projects with a Stable Foundation.

Research report

The research report describes the findings of the research and aims to help understand the operations and challenges FOSS foundations face.

This report covers topics such as:

  • Role and activities of foundations
  • Challenges faced and gaps in the service offerings
  • Operational aspects, including reasons for starting an org and choice of jurisdiction
  • Trends, such as the "foundation in a foundation" model
  • Recommendations for different stakeholders

You can download the research report.

Acknowledgments

This research was sponsored by Ford Foundation and Alfred P. Sloan Foundation. The research was part of their Critical Digital Infrastructure Research initiative, which investigates the role of open source in digital infrastructure.

Worse Than FailureCodeSOD: An Exceptional Warning

Pierre inherited a PHP application. The code is pretty standard stuff, for a long-living PHP application which has been touched by many developers with constantly shifting requirements. Which is to say, it's a bit of a mess.

But there's one interesting quirk that shows up in the codebase. At some point, someone working on the project added a kinda stock way they chose to handle exceptions. Future developers saw that, didn't understand it, and copied it to just follow what appeared to be the "standard".

And that's why so many catch blocks look like this:

catch(ZendException $e) {
    $e = $e; // no warning
}

Pierre and a few peers spent more time than they should have puzzling over the comment, before they realized that an empty catch block would give you a warning about an "unused variable" in their linter. By setting the variable equal to itself, it got "used".

This is in lieu of, y'know, disabling that warning, or, even better, addressing the issue by making sure you don't just blindly swallow exceptions and hope it's not a problem.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianRussell Coker: Links April 2021

Dr Justin Lehmiller’s blog post comparing his official (academic style) and real biographies is interesting [1]. Also the rest of his blog is interesting too, he works at the Kinsey Institute so you know he’s good.

Media Matters has an interesting article on the spread of vaccine misinformation on Instagram [2].

John Goerzen wrote a long post summarising some of the many ways of having a decentralised Internet [3]. One problem he didn’t address is how to choose between them; I could spend months of work setting up even a fraction of those services.

Erasmo Acosta wrote an interesting Medium article “Could Something as Pedestrian as the Mitochondria Unlock the Mystery of the Great Silence?” [4]. I don’t know enough about biology to determine how plausible this is. But it is a worry; I hope that humans will meet extra-terrestrial intelligences at some future time.

Meredith Haggerty wrote an insightful Medium article about the love vs money aspects of romantic comedies [5]. Changes in viewer demographics would be one factor that makes lead actors in romantic movies significantly less wealthy in recent times.

Informative article about ZIP compression and the history of compression in general [6].

Vice has an insightful article about one way of taking over SMS access of phones without affecting voice call or data access [7]. With this method the victim won’t notice that their service is being interfered with until it’s way too late. They also explain the chain of problems in the US telecommunications industry that led to this. I wonder what’s happening in this regard in other parts of the world.

The clown code of ethics (8 Commandments) is interesting [8].

Sam Hartman wrote an insightful blog post about the problems with RMS and how to deal with him [9]. Also Sam Whitton has an interesting take on this [10]. Another insightful post is by Selam G about RMS long history of bad behavior and the way universities are run [11].

Cory Doctorow wrote an insightful article for Locus about free markets with a focus on DRM on audio books [12]. We need legislative changes to fix this!

Planet DebianJunichi Uekawa: Swapping caps and control in X11.

Swapping caps and control in X11. The configuration is at /etc/default/keyboard, and both the virtual console and X11 use that configuration in Debian. For systemd users, systemd-localed.service can be used to configure this via the localectl(1) command, and Debian's systemd has been patched to update /etc/default/keyboard (managed by the keyboard-configuration package). Not sure where to find documentation for this. systemd for Debian is patched with debian/patches/debian/Use-Debian-specific-config-files.patch to change /etc/X11/xorg.conf.d/00-keyboard.conf to /etc/default/keyboard.
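
For reference, a minimal sketch of both approaches (us and pc105 are example values for my layout and model; ctrl:swapcaps is the stock XKB option for swapping the two keys):

# /etc/default/keyboard -- read by both the console setup and X11 on Debian
XKBOPTIONS="ctrl:swapcaps"

# or, for systemd users, via systemd-localed:
localectl set-x11-keymap us pc105 "" ctrl:swapcaps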

Planet DebianRuss Allbery: Review: Beyond Shame

Review: Beyond Shame, by Kit Rocha

Series: Beyond #1
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 270

I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Noelle is a child of Eden, the rich and technologically powerful city of a post-apocalyptic world. As the daughter of a councilman, she had everything she wanted except the opportunity to feel. Eden's religious elite embrace a doctrine of strict Puritanism: Even hugging one's children was frowned upon, let alone anything related to sex. Noelle was too rebellious to settle for that, which is why this book opens with her banished from Eden, ejected into Sector Four. The sectors are the city slums, full of gangs and degenerates and violence, only a slight step up from the horrific farming communes. Luckily for her, she literally stumbles into one of the lieutenants of the O'Kane gang, who are just as violent as their reputations but who have surprising sympathy for a helpless city girl.

My shorthand distinction between romance and erotica is that romance mixes some sex into the plot and erotica mixes some plot into the sex. Beyond Shame is erotica, specifically BDSM erotica. The forbidden sensations that Noelle got kicked out of Eden for pursuing run strongly towards humiliation, which is tangled up in the shame she was taught to feel about anything sexual. There is a bit of a plot surrounding the O'Kanes who take her in, their leader, some political skulduggery that eventually involves people she knows, and some inter-sector gang warfare, but it's quite forgettable (and indeed I've already forgotten most of it). The point of the story is Noelle navigating a relationship with Jasper (among others) that involves a lot of very graphic sex.

I was of two minds about reviewing this. Erotica is tricky to review, since to an extent it's not trying to do what most books are doing. The point is less to tell a coherent story (although that can be a bonus) than it is to turn the reader on, and what turns the reader on is absurdly personal and unpredictable. Erotica is arguably more usefully marked with story codes (which in this case would be something like MF, MMFF, FF, Mdom, Fdom, bd, ds, rom, cons, exhib, humil, tattoos) so that the reader has an idea whether the scenarios in the story are the sort of thing they find hot.

This is particularly true of BDSM erotica, since the point is arousal from situations that wouldn't work or might be downright horrifying in a different sort of book. Often the forbidden or taboo nature of the scene is why it's erotic. For example, in another genre I would complain about the exaggerated and quite sexist gender roles, where all the men are hulking cage fighters who want to control the women, but in male-dominant BDSM erotica that's literally the point.

As you can tell, I wrote a review anyway, primarily because of how I came to read this book. Kit Rocha (which is a pseudonym for the writing team of Donna Herren and Bree Bridges) recently published Deal with the Devil, a book about mercenary librarians in a post-apocalyptic future. Like every right-thinking person, I immediately wanted to read a book about mercenary librarians, but discovered that it was set in an existing universe. I hate not starting at the beginning of things, so even though there was probably no need to read the earlier books first, I figured out Beyond Shame was the first in this universe and the bundle of the first three books was only $2.

If any of you are immediately hooked by mercenary librarians but are back-story completionists, now you know what you'll be getting into.

That said, there are a few notable things about this book other than it has a lot of sex. The pivot of the romantic relationship was more interesting and subtle than most erotica. Noelle desperately wants a man to do all sorts of forbidden things to her, but she starts the book unable to explain or analyze why she wants what she wants, and both Jasper and the story are uncomfortable with that and unwilling to leave it alone. Noelle builds up a more coherent theory of herself over the course of the book, and while it's one that's obviously designed to enable lots of erotic scenes, it's not a bad bit of character development.

Even better is Lex, the partner (sort of) of the leader of the O'Kane gang and by far the best character in the book. She takes Noelle under her wing from the start, and while that relationship is sexualized like nearly everything in this book, it also turns into an interesting female friendship that I would have also enjoyed in a different genre. I liked Lex a lot, and the fact she's the protagonist of the next book might keep me reading.

Beyond Shame also has a lot more female gaze descriptions of the men than is often the case in male-dominant BDSM. The eye candy is fairly evenly distributed, although the gender roles are very much not. It even passes the Bechdel test, although it is still erotica and nearly all the conversations end up being about sex partners or sex eventually.

I was less fond of the fact that the men are all dangerous and violent and the O'Kane leader frequently acts like a controlling, abusive psychopath. A lot of that was probably the BDSM setup, but it was not my thing. Be warned that this is the sort of book in which one of the (arguably) good guys tortures someone to death (albeit off camera).

Recommendations are next to impossible for erotica, so I won't try to give one. If you want to read the mercenary librarian novel and are dubious about this one, it sounds like (although I can't confirm) that it's a bit more on the romance end of things and involves a lot fewer group orgies. Having read this book, I suspect it was entirely unnecessary to have done so for back-story. If you are looking for male-dominant BDSM, Beyond Shame is competently written, has a more thoughtful story than most, and has a female friendship that I fully enjoyed, which may raise it above the pack.

Rating: 6 out of 10

,

LongNowMeet Ty Caudle, The Interval’s New Beverage Director

Long Now is pleased to announce that longtime Interval bartender Ty Caudle will become The Interval’s next Beverage Director. He takes the reins from Todd Carnam, who has moved to Washington, D.C. after a creative three-year run at the helm. 

“We are very excited and grateful to have Ty in such a strong position to make this transition both seamless and inspired,” says Alexander Rose, Long Now’s Executive Director and Founder of The Interval. 

Caudle’s bartending career began at a small backyard party in San Francisco. He was working as a caterer for the event, and when the bartender failed to show, he was thrust into the role despite having zero experience.

“We had no idea what we were doing,” he says, “but there was definitely an energy to bartending that wasn’t otherwise present in catering.”

After a friend gifted him a copy of Imbibe! by David Wondrich, Caudle knew he’d found his calling.

“The book opened up a world that I otherwise would’ve never known,” he says. “It traced the history of forgotten ingredients and techniques, painted a rich tapestry of the world of bartending in the 01800s, and most importantly taught me that tending bar was a legitimate profession, one to be studied and practiced.”

Ty Caudle at The Interval. 

And so he did. Caudle devoured every bartending book he could find, bought esoteric cocktail ingredients, and experimented at home. He visited distilleries in Kentucky, Tequila, Oaxaca, Ireland, and Copenhagen to learn more about how different cultures approached spirit production.

“Those trips cemented my deep respect for the craft and history of distillation,” he says. “Whether on a tropical hillside under a tin roof or in a cacophonous bustling factory, spirit production is one of humanity’s great achievements. As bartenders, we have a responsibility to honor those artisans’ tireless efforts with every martini or manhattan we stir.”

Breaking through in the industry during the Great Recession, however, proved challenging. Caudle eventually landed a gig prepping the bar at the now-shuttered Locanda in the Mission. This led to other bartending opportunities at a small handful of spaces in the same neighborhood as Locanda.

The Interval at Long Now.

The Interval opened its doors in 02014 with Jennifer Colliau as its Beverage Director. Colliau was something of a legend in the Bay Area’s vibrant bar scene, having founded Small Hand Foods after eight years tending bar at San Francisco’s celebrated Slanted Door restaurant.

Caudle was a big fan of Colliau’s work, and promptly responded to an ad for a part-time bartender position at The Interval.

Jennifer Colliau, The Interval’s first Beverage Director. 

“The job listing was decidedly different,” Caudle says. “It gave me a glimpse of how unique The Interval is.”

Following a promising interview with then-Bar Manager Haley Samas-Berry, Caudle returned to The Interval a few days later for a stage. Expecting to find Samas-Berry behind the bar, Caudle was mortified to find Colliau there instead. Caudle was, suffice it to say, a little intimidated:  

I walked over with my shakers and spoons and jigger, hands trembling, and she asked if I wouldn’t mind making drinks with their tools instead. I said, “Sure,” as I walked into the other room to set my things down. Inside I was completely freaking out. It took every bit of my strength to emerge from that space. I already felt in over my head and this amplified it. For the next hour or so I welcomed guests and set down menus and poured water. Every time a drink order came in Jennifer would stand over my shoulder and recite the recipe to me while correcting a litany of technical mistakes that I was making. The torture finally relented and we went upstairs and had a good conversation. But I remember leaving that night thinking there was just no way in hell I was going to get that job. 

Caudle got the job. And now, following years of excellent work, he’s got Colliau’s old job, too. 

We spoke with Caudle about his new role, his approach to cocktail creation and design, and what Interval patrons can expect once the bar fully reopens.


Your promotion to Beverage Director brings the opportunity to try new things, while also contending with a rich legacy from past Beverage Directors Jennifer Colliau and Todd Carnam. What new things are you excited to bring to the table? What do you hope to maintain from the past?

I feel uniquely positioned as I have worked in the space under the tutelage of both Jennifer and Todd. 

Jennifer set the standard and created the beverage identity of The Interval. She taught us that we can’t unknow things. To that end, I’m excited to continue the pursuit of the best version of a beverage, meticulously molding it while uncovering its rich history.

The Interval’s former Beverage Directors Jennifer Colliau and Todd Carnam

Todd is a storyteller and a curmudgeonly romantic at heart. He taught us that a drink can evoke a feeling and connect to a larger narrative, of the cocktail’s role as a totem. I hope to honor that spirit and the creativity it fosters in my approach to menu development.

Foremost, I’m excited to feature wine, beer, and spirits made by people that don’t look like me. I’m personally captivated by the fantastic complexity of what eventually winds up in a glass on the bar. Every drink is the confluence of many brilliant makers and I seek to pay respect to their efforts. I think it is easy for us to forget that alcohol is an agricultural product. It started as a plant in the ground in a corner of the world and so many things had to go right for it to find its way to us. I hope to imbue our staff with a passion for the process of making these delicious products and to craft drinks that honor them.

A trio of selections from The Interval’s Old Fashioned menu.

What’s your approach to cocktail design and creation?

I can be somewhat reluctant to design new drinks. The cocktail world has such a rich history and so many people have contributed across generations. With that in mind, I often find myself focusing on making the very best version of a beverage that we know well or that may have been overlooked. 

It tends to take me a long time to mold a bigger picture of what the theme of a cocktail or a menu should be. Once I have that in place I get excited to uncover pieces that fit into the whole. Our Tiki Not Tiki menu is a great example. After we established that template, I found myself scouring cocktail books and menus for tropical drinks that didn’t fit into the Tiki canon. Each discovery was a revelation, a spark to continue forth.

Mai Tai from The Interval’s Tiki Not Tiki menu. 

What’s one of the most challenging cocktails for The Interval to make? 

Generally, we like to do as much work behind the scenes preparing ingredients and putting things together ahead of time to ensure cocktails get to guests quickly.

The Interval’s take on the Kalimotxo.

I will say that one of our biggest challenges in development came with the Kalimotxo. This simple Spanish blend of Coca Cola and box wine was incredibly difficult to replicate. For starters, it was extremely trying to imitate the singular flavor of coke, eventually replacing its woodsy vanilla with Carpano Antica and its baking spice notes with lots of Angostura. Harder still was finding a red wine that didn’t overpower the rest of the ingredients. In the end, we wound up bringing in an entirely new wine outside of our offerings just to get the final flavor profile we were looking for.

Everyone has different tastes, but what would you recommend as a cocktail for a first-timer to the Interval to highlight what distinguishes the establishment from other cocktail bars?

The Interval’s Navy Gimlet.

The Navy Gimlet perfectly encapsulates what we strive for at The Interval. With the time involved to infuse navy strength gin with lime oil and to slowly filter the finished product, its preparation takes days but arrives to the guest in no time at all. The gimlet has been maligned for decades as a result of artificial ingredients and certain preparations and we’ve done our very best to correct those deficiencies. We make a delicious lime cordial and stir (rather than shake) our pearlescent iteration. It’s a drink with a history, deceptively simple and infinitely refreshing.

A busy evening at The Interval, May 02016. 

What do you think is the biggest misconception people have about tending bar?

I think the physical act of bartending is unnecessarily heralded in the public eye. Anyone can mix drinks. Sure, there are hundreds of classics to memorize and plenty of muscle memory to establish, but that side of tending bar is overwhelmingly a teachable skill.

The component that cannot be taught as easily is hospitality. There is a degree of empathy and emotional availability necessary to do this work that isn’t required in many other professions. Bartenders absorb the energy of every guest that sits in front of them and a genuine desire to serve is essential to providing a superior guest experience. This comes naturally to some and can be a lifelong pursuit for others. Putting aside the day thus far and being truly hospitable behind the bar is the goal we spend our careers striving for. 


For the latest on opening hours, placing to-go orders, and events, head to The Interval’s website, or follow The Interval on Instagram, Twitter, and Facebook.

Worse Than FailureCodeSOD: Absolute Mockery

At a certain point, it becomes difficult to write a unit test without also being able to provide mocked implementations of some of the code. But mocking well is its own art- it's easy to fall into the trap of writing overly complex mocks, or mocking the wrong piece of functionality, and ending up with tests that spend more time testing their mocks than your actual code.

Was Rhonda's predecessor thinking of any of those things when writing code? Were they aware of the challenges of writing useful mocks, of managing dependency injection? Or was this Java solution the best they could come up with:

public class Person {
    private int age;
    private String name;

    public int getAge() {
        if (Testing.isTest)
            return 27;
        else
            return age;
    }

    public String getName() {
        if (Testing.isTest)
            return "John Smith";
        else
            return name;
    }

    // and so on ..
}

Every method was written like this. Every method. Each method contained its own mockery, and in turn, made a mockery of test-driven-development.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Cryptogram Security Vulnerabilities in Cellebrite

Moxie Marlinspike has an intriguing blog post about Cellebrite, a tool used by police and others to break into smartphones. Moxie got his hands on one of the devices, which seems to be a pair of Windows software packages and a whole lot of connecting cables.

According to Moxie, the software is riddled with vulnerabilities. (The one example he gives is that it uses FFmpeg DLLs from 2012 that have not been patched with the 100+ security updates released since then.)

…we found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

This means that Cellebrite has one — or many — remote code execution bugs, and that a specially designed file on the target phone can infect Cellebrite.

For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

That malicious file could, for example, insert fabricated evidence or subtly alter the evidence it copies from a phone. It could even write that fabricated/altered evidence back to the phone so that from then on, even an uncorrupted version of Cellebrite will find the altered evidence on that phone.

Finally, Moxie suggests that future versions of Signal will include such a file, sometimes:

Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.

The idea, of course, is that a defendant facing Cellebrite evidence in court can claim that the evidence is tainted.

I have no idea how effective this would be in court. Or whether this runs foul of the Computer Fraud and Abuse Act in the US. (Is it okay to booby-trap your phone?) A colleague from the UK says that this would not be legal to do under the Computer Misuse Act, although it’s hard to blame the phone owner if he doesn’t even know it’s happening.

,

Krebs on SecurityExperian’s Credit Freeze Security is Still a Joke

In 2017, KrebsOnSecurity showed how easy it is for identity thieves to undo a consumer’s request to freeze their credit file at Experian, one of the big three consumer credit bureaus in the United States.  Last week, KrebsOnSecurity heard from a reader who had his freeze thawed without authorization through Experian’s website, and it reminded me of how truly broken authentication and security remains in the credit bureau space.

Experian’s page for retrieving someone’s credit freeze PIN requires little more information than has already been leaked by big-three bureau Equifax and a myriad other breaches.

Dune Thomas is a software engineer from Sacramento, Calif. who put a freeze on his credit files last year at Experian, Equifax and TransUnion after thieves tried to open multiple new payment accounts in his name using an address in Washington state that was tied to a vacant home for sale.

But the crooks were persistent: Earlier this month, someone unfroze Thomas’ account at Experian and promptly applied for new lines of credit in his name, again using the same Washington street address. Thomas said he only learned about the activity because he’d taken advantage of a free credit monitoring service offered by his credit card company.

Thomas said after several days on the phone with Experian, a company representative acknowledged that someone had used the “request your PIN” feature on Experian’s site to obtain his PIN and then unfreeze his file.

Thomas said he and a friend both walked through the process of recovering their freeze PIN at Experian, and were surprised to find that just one of the five multiple-guess questions they were asked after entering their address, Social Security Number and date of birth had anything to do with information only the credit bureau might know.

KrebsOnSecurity stepped through the same process and found similar results. The first question asked about a new mortgage I supposedly took out in 2019 (I didn’t), and the answer was none of the above. The answer to the second question also was none of the above.

The next two questions were useless for authentication purposes because they’d already been asked and answered; one was “which of the following is the last four digits of your SSN,” and the other was “I was born within a year or on the year of the date below.” Only one question mattered and was relevant to my credit history (it concerned the last four digits of a checking account number).

The best part about this lax authentication process is that one can enter any email address to retrieve the PIN — it doesn’t need to be tied to an existing account at Experian. Also, when the PIN is retrieved, Experian doesn’t bother notifying any other email addresses already on file for that consumer.

Finally, your basic consumer (read: free) account at Experian does not give users the option to enable any sort of multi-factor authentication that might help stymie some of these PIN retrieval attacks on credit freezes.

Unless, that is, you subscribe to Experian’s heavily-marketed and confusingly-worded “CreditLock” service, which charges between $14.99 and $24.99 a month for the ability to “lock and unlock your file easily and quickly, without delaying the application process.” CreditLock users can both enable multifactor authentication and get alerts when someone tries to access their account.

Thomas said he’s furious that Experian only provides added account security for consumers who pay for monthly plans.

“Experian had the ability to give people way better protection through added authentication of some kind, but instead they don’t because they can charge $25 a month for it,” Thomas said. “They’re allowing this huge security gap so they can make a profit. And this has been going on for at least four years.”

Experian has not yet responded to requests for comment.

When a consumer with a freeze logs in to Experian’s site, they are immediately directed to a message for one of Experian’s paid services, such as its CreditLock service. The message I saw upon logging in confirmed that while I had a freeze in place with Experian, my current “protection level” was “low” because my credit file was unlocked.

“When your file is unlocked, you’re more vulnerable to identity theft and fraud,” Experian warns, untruthfully. “You won’t see alerts if someone tries to access your file. Banks can check your file if you apply for credit or loans. Utility and service providers can see your credit file.”

Experian says my security is low because while I have a freeze in place, I haven’t bought into their questionable “lock service.”

Sounds scary, right? The thing is — except for the part about not seeing alerts — none of the above statement is true if you already have a freeze on your file. A security freeze essentially blocks any potential creditors from being able to view your credit file, unless you affirmatively unfreeze or thaw your file beforehand.

With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file). It is now free to freeze your credit in all U.S. states and territories.

Experian, like the other consumer credit bureaus, uses their intentionally confusing “lock” terminology to frighten consumers into paying for monthly subscription services. A key selling point for these lock services is they can be a faster way to let creditors peek at your file when you wish to apply for new credit. That may or may not be true in practice, but consider why it’s so important for Experian to get consumers to sign up for their lock programs.

The real reason is that Experian makes money every time someone makes a credit inquiry in your name, and it does not want to do anything to hinder those inquiries. Signing up for a lock service lets Experian continue selling credit report information to a variety of third parties. According to Experian’s FAQ, when locked your Experian credit file remains accessible to a host of companies, including:

-Potential employers or insurance companies

-Collection agencies acting on behalf of companies you may owe

-Companies providing pre-screened credit card offers

-Companies that have an existing credit relationship with you (this is true for frozen files also)

-Personalized offers from Experian, if you choose to receive them

It is annoying that Experian can get away with offering additional account security only to people who pay the company a hefty sum each month to sell their information. It’s also amazing that this sloppy security I wrote about back in 2017 is still just as prevalent in 2021.

But Experian is hardly alone. In 2019, I wrote about how Equifax’s new MyEquifax site made it simple for thieves to lift an existing credit freeze at Equifax and bypass the PIN if they were armed with just your name, Social Security number and birthday.

Also in 2019, identity thieves were able to get a copy of my credit report from TransUnion after successfully guessing the answers to multiple-guess questions like the ones Experian asks. I only found out after hearing from a detective in Washington state, who informed me that a copy of the report was found on a removable drive seized from a local man who was arrested on suspicion of being part of an ID theft gang.

TransUnion investigated and found it was indeed at fault for giving my credit report to ID thieves, but that on the bright side its systems blocked another fraudulent attempt at getting my report in 2020.

“In our investigation, we determined that a similar attempt to fraudulently obtain your report occurred in April 2020, and was successfully blocked by enhanced controls TransUnion has implemented since last year,” the company said. “TransUnion deploys a multi-layered security program to combat the ongoing and increasing threat of fraud, cyber-attacks and malicious activity.  In today’s dynamic threat environment, TransUnion is constantly enhancing and refining our controls to address the latest security threats, while still allowing consumers access to their information.”

For more information on credit freezes (also called a “security freezes”), how to request one, and other tips on preventing identity fraud, check out this story.

If you haven’t done so lately, it might be a good time to order a free copy of your credit report from annualcreditreport.com. This service entitles each consumer to one free copy of their credit report annually from each of the three credit bureaus — either all at once or spread out over the year.

Cory DoctorowNorwegian and German editions of How to Destroy Surveillance Capitalism

Thanks to groups of German- and Norwegian-speaking volunteers, there’s now a CC-licensed Norwegian and German edition of How to Destroy Surveillance Capitalism! They join the existing French edition.

Planet DebianSteve Kemp: Writing a text-based adventure game for CP/M

In my previous post I wrote about how I'd been running CP/M on a Z80-based single-board computer.

I've been slowly working my way through a bunch of text-based adventure games:

  • The Hitchhiker's Guide To The Galaxy
  • Zork 1
  • Zork 2
  • Zork 3

Along the way I remembered how much fun I used to have doing this in my early teens, and decided to write my own text-based adventure.

Since I'm not a masochist I figured I'd write something with only three or four locations, and solicited facebook for ideas. Shortly afterwards a "plot" was created and I started work.

I figured that the very last thing I wanted to be doing was to be parsing text-input with Z80 assembly language, so I hacked up a simple adventure game in C. I figured if I could get the design right that would ease the eventual port to assembly.

I had the realization pretty early that using a table-driven approach would be the best way - using structures to contain the name, description, and function-pointers appropriate to each object for example. In my C implementation I have things that look like this:

{name: "generator",
 desc: "A small generator.",
 use: use_generator,
 use_carried: use_generator_carried,
 get_fn: get_generator,
 drop_fn: drop_generator},

A bit noisy, but simple enough. If an object cannot be picked up, or dropped, the corresponding entries are blank:

{name: "desk",
 desc: "",
 edesc: "The desk looks solid, but old."},

Here we see something special: there's no description, so the item isn't displayed when you enter a room or LOOK. Instead the edesc (extended description) is available when you type EXAMINE DESK.

Anyway over a couple of days I hacked up the C-game, then I started work porting it to Z80 assembly. The implementation changed, the easter-eggs were different, but on the whole the two things are the same.

Certainly 99% of the text was recycled across the two implementations.

Anyway in the unlikely event you've got a craving for a text-based adventure game I present to you:

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 04)

This week on my podcast, part four of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

Planet DebianVishal Gupta: Ramblings // On Sikkim and Backpacking

What I loved the most about Sikkim can’t be captured on cameras. It can’t be taped since it would be intrusive and it can’t be replicated because it’s unique and impromptu. It could be described, as I attempt to, but more importantly, it’s something that you simply have to experience to know.

Now I first heard about this from a friend who claimed he’d been offered free rides and Tropicanas by locals after finishing the Ladakh Marathon. And then I found Ronnie’s song, whose chorus goes: “Dil hai pahadi, thoda anadi. Par duniya ke maya mein phasta nahi” (My heart belongs to the mountains. Although a little childish, it doesn’t get hindered by materialism). While the song refers to his life in Manali, I think this holds true for most Himalayan states.

Maybe it’s the pleasant weather, the proximity to nature, the sense of safety from Indian Army being round the corner, independence from material pleasures that aren’t available in remote areas or the absence of the pollution, commercialisation, & cutthroat-ness of cities, I don’t know, there’s just something that makes people in the mountains a lot kinder, more generous, more open and just more alive.

Sikkimese people are honestly some of the nicest people I’ve ever met. The blend of Lepchas and Bhutias, and the humility and truthfulness Buddhism ingrains in its disciples, is one that’ll make you fall in love with Sikkim (assuming the views, the snow, the fab weather and food leave you pining for more).

As a product of Indian parenting, I’ve always been taught to be wary of the unknown and to stick to the safer, more-travelled path but to be honest, I enjoy bonding with strangers. To me, each person is a storybook waiting to be flipped open with the right questions and the further I get from home, the wilder the stories get. Besides there’s something oddly magical about two arbitrary curvilinear lines briefly running parallel until they diverge to move on to their respective paths. And I think our society has been so busy drawing lines and spreading hate that we forget that in the end, we’re all just lines on the universe’s infinite canvas. So the next time you travel, and you’re in a taxi, a hostel, a bar, a supermarket, or on a long walk to a monastery (that you’re secretly wishing is open despite a lockdown), strike up a conversation with a stranger. Small-talk can go a long way.


Header icon made by Freepik from www.flaticon.com

Worse Than FailureCodeSOD: Documentation on Your Contract

Josh's company hired a contracting firm to develop an application. This project was initially specced for just a few months of effort, but requirements changed, scope changed, members of the development team changed jobs, new ones needed to be on-boarded. It stretched on for years.

Even through all those changes, though, each new developer on the project followed the same coding standards and architectural principles as the original developers. Unfortunately, those standards were "meh, whatever, it compiled, right?"

So, no, there weren't any tests. No, the code was not particularly readable or maintainable. No, there definitely weren't any comments, at least if you ignore the lines of code that were commented out in big blocks because someone didn't trust source control.

But once the project was finished, the code was given back to Josh's team. "There you are," management said. "You support this now."

Josh and the rest of his team had objections to this. Nothing about the code met their own internal standards for quality, and certainly it wasn't up to the standards specified in the contract.

"Well, yes," management replied, "but we've exhausted the budget."

"Right, but they didn't deliver what the contract was for," the IT team replied.

"Well, yes, but it's a little late to bring that up."

"That's literally your job. We'd fire a developer who handed us this code."

Eventually, management caved on documentation. Things like "code quality" and "robust testing" weren't clearly specified in the contract, and there was too much wiggle room to say, "We robustly tested it, you didn't say automated tests." But documentation was listed in the deliverables, and was quite clearly absent. So management pushed back: "We need documentation." The contractor pushed back in turn: "We need money."

Eventually, Josh's company had to spend more money to get the documentation added to the final product. It was not a trivial sum, as it was a large piece of software, and would take a large number of billable hours to fully document.

This was the result:

/**
 * Program represents a Program and its attributes.
 */

or

/**
 * Customer represents a Customer and its attributes.
 */

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Planet DebianJunichi Uekawa: wake on lan.

wake on lan. I have not been able to get wake on lan working. I wonder if the poweroff command is powering off the system too completely and cutting power to the ethernet port as well. Do I need to suspend instead?
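
For what it's worth, a minimal sketch of the first things I would check (eth0 is a placeholder for the real interface name):

# check whether the NIC supports magic-packet wake and whether it is enabled
ethtool eth0 | grep -i wake-on
# enable magic-packet wake-on-lan for this interface
ethtool -s eth0 wol g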

Planet DebianJunichi Uekawa: Got a new machine Lenovo ThinkCenter M75s gen2, and installed Debian on it.

Got a new machine Lenovo ThinkCenter M75s gen2, and installed Debian on it. I wanted to try out the Ryzen CPU. I haven't used a physical x86-64 Debian desktop machine since I threw away my Athlon 64 machine (dx5150), so that's like 15 years? Since then my main devices were macbooks and virtual machines (on GCE and Sakura) and raspberry pi. I got Buster installed just fine. Finding the right keystrokes after boot was challenging because the graphical UI doesn't say anything. In the BIOS setup I disabled secure boot (F1 to enter setup; I wanted to play with the kernel), and I had to find the keystroke to choose the boot disk (F10 to enter the dialog; I needed to choose the one labeled USB CD drive although I put in a USB SD card reader with the installer image). The installation went fine for the console, but getting X up was tricky: support for the graphics (Renoir) part of the chip was only added in kernel 5.5. Buster's kernel was 4.19 and I wasn't too comfortable with just updating the kernel, so I ended up going for a dist-upgrade to Bullseye. With Bullseye's default kernel 5.10, X started without problems. So far I have only tried out Emacs.
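
For reference, the dist-upgrade step is the standard procedure; a rough sketch, not the exact commands used here (note that Bullseye also renames the security suite, so sources.list deserves a manual check rather than a blind substitution):

# switch APT sources from buster to bullseye, then upgrade
sed -i 's/buster/bullseye/g' /etc/apt/sources.list
apt update
apt full-upgrade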

Planet DebianDominique Dumont: An improved GUI for cme and Config::Model

I’ve finally found the time to improve the GUI of my pet project: cme (aka Config::Model).

Several years ago, I stumbled on a usability problem in the GUI. Some configurations (like OpenSsh or Systemd) feature a lot of configuration parameters, which means that the GUI displays all of them, so finding a specific parameter can be challenging:

To work around this problem, I’ve added a Filter widget in 2018 which more or less did the job, but it suffered from several bugs which made its behavior confusing.

This is now fixed. The Filter widget is now working in a more consistent way.

In the example below, I’ve typed “IdentityFile” (1) in the Filter widget to show the IdentityFile used for various hosts (2):

This is quite good, but some hosts use the default identity file, so no value shows up in the GUI. You can then click on the “hide empty value” checkbox to show only the hosts that use a specific identity file:

I hope that this new behavior of the Filter box will make this project more useful.

The improved GUI was released with Config::Model::TkUI 1.374. This new version is available on CPAN and in Debian/experimental. It will be released to Debian/unstable once the next Debian version is out.
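
If you want to try it, the GUI is launched through the cme command once an application model is installed; for the OpenSsh example shown above, that would be something like this (a sketch, assuming Config::Model::OpenSsh is installed):

cme edit ssh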

All the best

Planet DebianSteinar H. Gunderson: JavaScript madness

Yesterday, I had the problem that while socket.io from the browser would work just fine against a given server endpoint (which I do not control), talking to the same server from Node.js would just give hangs and/or inscrutable “7:::1” messages (which I later learned meant “handshake missing”).

To skip six hours of debugging, the server set a cookie in the initial HTTP handshake, and expected to get it back when opening a WebSocket, presumably to steer the connection to the same backend that got the handshake. (Chrome didn't show the cookie in the WS debugging, but Firefox did.) So we need to keep track of those cookies. While still remaining on socket.io 0.9.5 (for stupid reasons). No fear, we add this incredibly elegant bit of code:

var io = require('socket.io-client');
// Hook into XHR to pick out the cookie when we receive it.
var my_cookie;
io.util.request = function() {
        var XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
        var xhr = new XMLHttpRequest();
        xhr.setDisableHeaderCheck(true);
        const old_send = xhr.send;
        xhr.send = function() {
                // Add our own readyStateChange hook in front, to get the cookie if we don't have it.
                xhr.old_onreadystatechange = xhr.onreadystatechange;
                xhr.onreadystatechange = function() {
                        if (xhr.readyState == xhr.HEADERS_RECEIVED) {
                                const cookie = xhr.getResponseHeader('set-cookie');
                                if (cookie) {
                                        my_cookie = cookie[0].split(';')[0];
                                }
                        }
                        xhr.old_onreadystatechange.call(xhr, arguments);
                };
                // Set the cookie if we have it.
                if (my_cookie) {
                        xhr.setRequestHeader("Cookie", my_cookie);
                }
                return old_send.call(this, arguments);
        };
        return xhr;
};
;
// Now override the socket.io WebSockets transport to include our header.
io.Transport['websocket'].prototype.open = function() {
        const query = io.util.query(this.socket.options.query);
        const WebSocket = require('ws');
        // Include our cookie.
        let options = {};
        if (my_cookie) {
                options['headers'] = { 'Cookie': my_cookie };
        }
        this.websocket = new WebSocket(this.prepareUrl() + query, options);
        // The rest is just repeated from the existing function.
        const self = this;
        this.websocket.onopen = function () {
                self.onOpen();
                self.socket.setBuffer(false);
        };
        this.websocket.onmessage = function (ev) {
                self.onData(ev.data);
        };
        this.websocket.onclose = function () {
                self.onClose();
                self.socket.setBuffer(true);
        };
        this.websocket.onerror = function (e) {
                self.onError(e);
        };
        return this;
};
// And now, finally!
var socket = io.connect('https://example.com', { transports: ['websocket'] });

It's a reminder that talking HTTP and executing JavaScript does not make you into a (headless) browser. And that you shouldn't let me write JavaScript. :-)

(Apologies for the lack of blank lines; evidently, they confuse Markdown.)

Planet DebianRussell Coker: Scanning with a MFC-9120CN on Bullseye

I previously wrote about getting a Brother MFC-9120CN multifunction printer/scanner to print on Linux [1]. I had also got it scanning which I didn’t blog about.

found USB scanner (vendor=0x04f9, product=0x021d) at libusb:003:002

I recently upgraded that Linux system to Debian/Testing (which will soon be released as Debian/Bullseye) and scanning broke. The command sane-find-scanner would find the USB connected scanner (with the above output), but “scanimage -L” didn’t.

It turned out that I had to edit /etc/sane.d/dll.d/hplip which had a single uncommented line of “hpaio” and replace that with “brother3” to make SANE load the driver /usr/lib64/sane/libsane-brother3.so from the brscan3 package (which Brother provided from their web site years ago).
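
In other words, the file ends up looking something like this (a sketch; the original entry can be kept around, commented out):

# /etc/sane.d/dll.d/hplip
#hpaio
brother3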

I have the following script to do the scanning (which can run as non-root):

#!/bin/bash
set -e
if [ "$1" == "" ]; then
  echo "specify output filename"
  exit 1
fi

TMP=$(mktemp)

# scan to a temporary file, then convert to the format implied by the output filename
scanimage > "$TMP"
convert "$TMP" "$1"
rm "$TMP"

Final Note

This blog post doesn’t describe everything that needs to be done to setup a scanner, I already had part of it setup from 10 years ago. But for anyone who finds this after having trouble, /etc/sane.d/dll.d is one place you should look for important configuration (especially if sane-find-scanner works and “scanimage -L” fails). Also the Brother drivers are handy to have although I apparently had it working in the past with the hpaio driver from HP (the Brother device emulates a HP device).

Planet DebianRuss Allbery: Review: Learning React

Review: Learning React, by Alex Banks & Eve Porcello

Publisher: O'Reilly
Copyright: June 2020
ISBN: 1-4920-5172-1
Format: Trade paperback
Pages: 287

My first JavaScript project was a React frontend to a REST service. As part of that project, I read two books: JavaScript: The Definitive Guide to learn the language foundation and this book to learn the framework on top of it. This was an unintentional experiment in the ways programming books can approach the topic.

I commented in my review of JavaScript: the Definitive Guide that it takes the reference manual approach to the language. Learning React is the exact opposite. It's goal-driven, example-heavy, and has a problem and solution structure. The authors present a sample application, describe some desired new feature or deficiency in it, and then introduce the specific React technique that solves that problem. There is some rewriting of previous examples using more sophisticated techniques, but most chapters introduce new toy applications along with new parts of the React framework.

The best part of this book is its narrative momentum, so I think the authors were successful at their primary goal. The first eight chapters of the book (more on the rest of the book in a moment) feel like a whirlwind tour where one concept flows naturally into the next and one's questions while reading about one technique are often answered in the next section. I thought the authors tried too hard in places and overdid the enthusiasm, but it's very readable in a way that I think may appeal to people who normally find programming books dry. Learning React is also firm and definitive about the correct way to use React, which may appeal to readers who only want to learn the preferred way of using the framework. (For example, React class components are mentioned briefly, mostly to tell the reader not to use them, and the rest of the book only uses functional components.)

I had two major problems with this book, however. The first is that this breezy, narrative style turns out to be awful when one tries to use it as a reference. I read through most of this book with both enjoyment and curiosity, sat down to write a React component, and immediately struggled to locate the information I needed. Everything felt logically connected when I was focusing on the problems the authors introduced, but as soon as I started from my own problem, the structure of the book fell apart. I had to page through chapters to locate some nugget buried in the text, or re-read sections of the book to piece together which motivating problem my code was most similar to. It was a frustrating experience.

This may be a matter of learning style, since this is why I prefer programming books with a reference structure. But be warned that I can't recommend this book as a reference while you're programming, nor does it prepare you to use the official React documentation as a reference.

The second problem is less explicable and less defensible. I don't know what happened with O'Reilly's copy-editing for this book, but the code snippets are a train wreck. The Amazon reviews are full of people complaining about typos, syntax errors, omitted code, and glaring logical flaws, and they are entirely correct. It's so bad that I was left wondering if a very early, untested draft of the examples was somehow substituted into the book at the last minute by mistake.

I'm not the sort of person who normally types code in from a book, so I don't care about a few typos or obvious misprints as long as the general shape is correct. The general shape was not correct. In a few places, the code is so completely wrong and incomplete that even combined with the surrounding text I was unable to figure out what it was supposed to be. It's possible this is fixed in a later printing (I read the June 2020 printing of the second edition), but otherwise beware. The authors do include a link to a GitHub repository of the code samples, which are significantly different than what's printed in the book, but that repository is incomplete; many of the later chapter examples are only links to JavaScript web sandboxes, which bodes poorly for the longevity of the example code.

And then there's chapter nine of this book, which I found entirely baffling. This is a direct quote from the start of the chapter:

This is the least important chapter in this book. At least, that's what we've been told by the React team. They didn't specifically say, "this is the least important chapter, don't write it." They've only issued a series of tweets warning educators and evangelists that much of their work in this area will very soon be outdated. All of this will change.

This chapter is on suspense and error boundaries, with a brief mention of Fiber. I have no idea what I'm supposed to do with this material as a reader who is new to React (and thus presumably the target audience). Should I use this feature? When? Why is this material in the book at all when it's so laden with weird stream-of-consciousness disclaimers? It's a thoroughly odd editorial choice.

The testing chapter was similarly disappointing in that it didn't answer any of my concrete questions about testing. My instinct with testing UIs is to break out Selenium and do integration testing with its backend, but the authors are huge fans of unit testing of React applications. Great, I thought, this should be interesting; unit testing seems like a poor fit for UI code because of how artificial the test construction is, but maybe I'm missing some subtlety. Convince me! And then the authors... didn't even attempt to convince me. They just asserted unit testing is great and explained how to write trivial unit tests that serve no useful purpose in a real application. End of chapter. Sigh.

I'm not sure what to say about this book. I feel like it has so many serious problems that I should warn everyone away from it, and yet the narrative introduction to React was truly fun to read and got me excited about writing React code. Even though the book largely fell apart as a reference, I still managed to write a working application using it as my primary reference, so it's not all bad. If you like the problem and solution style and want a highly conversational and informal tone (that errs on the side of weird breeziness), this may still be the book for you. Just be aware that the code examples are a trash fire, so if you learn from examples, you're going to have to chase them down via the GitHub repository and hope that they still exist (or get a later edition of the book where this problem has hopefully been corrected).

Rating: 6 out of 10

Planet DebianAntoine Beaupré: Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind: notes and scribbles lying around. Some were just completely abandoned because they didn't seem like a good fit for LWN.

Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic.

This was the state of affairs when I left:

remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work

Wow, what a mess! Let's see if I can make sense of this:

Attic

Those are articles that I thought about, then finally rejected, either because they didn't seem worth it, or my editors rejected them, or I just moved on:

  • novena: the project is ooold now and didn't seem to fit an LWN article. It was basically "how can I build my novena now" and "you guys rock!" It seems like the MNT Reform is the brainchild of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog

Those were articles I was planning to write about next.

  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin

Those are finished articles; they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas

A lot of those branches were actually just an empty commit, with the commitlog being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other.

Sometimes they would actively discourage me from writing about something, and I would do it anyways, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)

Stalled

Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.

  • stalled/bench-bench-bench: benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) of Debian Developers were actually being paid by their employers to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hoped that this study would confirm the "hunch" people have offered (that most DDs are paid to work on Debian), but it seems to show the reverse (only 36% of DDs, and 18% of all respondents, are paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting?

Oh! and I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write.

That's why this article looks like crap and has a smiley. :)


Planet DebianGunnar Wolf: FLISOL • Talking about Jitsi

Every year since 2005 there has been a very good, big and interesting Latin American gathering of free-software-minded people. Of course, Latin America is a big, big, big place, and it’s not like we are the most economically buoyant region, able to meet in something comparable to FOSDEM.

What we have is a distributed free software conference — originally, a distributed Linux install-fest (which I never liked; I am against install-fests), but it gradually morphed into a proper conference: Festival Latinoamericano de Instalación de Software Libre (Latin American Free Software Installation Festival).

This FLISOL was hosted by the always great and always interesting Rancho Electrónico, our favorite local hacklab, and had many other interesting talks.

I like talking about projects where I am involved as a developer… but this time I decided to do otherwise: I presented a talk on the Jitsi videoconferencing server. Why? Because of the relevance videoconferences have had over the last year.

So, without further ado — Here is a video I recorded locally from the talk I gave (MKV), as well as the slides (PDF).

Sam VargheseAll the news (apart from the Middle East issue) that’s fit to print

The Saturday Paper — as its name implies — is a weekend newspaper published from Melbourne, Australia. Given this, it rarely has any real news, but some of the features are well-written.

There is a column called Gadfly (again, the name would indicate what it is about) which is extremely well-written and is one of the articles that I read every week. It was written for some years by one Richard Ackland, a lawyer with very good writing skills, and is now penned by one Sami Shah, an Indian, who is, again, a good writer. Gadfly is funny and, like most of the opinion content in the paper, is left-oriented.

The same cannot be said of some of the other writers. Karen Middleton and Rick Morton fall into the category of poor writers, though the latter sometimes does provide a story that has not been run anywhere else. Middleton can only be described as a hack.

Mike Seccombe is another of the good writers and, when he figures on the day’s menu, one can be assured that the content will be good. Another good writer, David Marr, has now gone missing; indeed, he is not writing for any newspaper at the moment.

But the one fault line that The Saturday Paper has is that it will never cover the Middle East. The owner, Morry Schwartz [seen below in an image used courtesy of Fairfax], leans towards supporting the right-wing Israeli leader Benjamin Netanyahu, and thus no matter what atrocities are being perpetrated on the Palestinians, you can be assured that not even a word will appear in this newspaper.

Critics of the paper avoid mentioning this, in keeping with the habit prevalent in the West, of never saying anything that could be construed as being critical of Israel.

This proclivity of Schwartz was noticed early on and mentioned by a couple of Australian writers. One, Tim Robertson, had this to say when the paper had just started out: “…the Saturday Paper’s coverage of Israel’s assault on Gaza has been conspicuously, well, non-existent. As the death toll rises and more atrocities are committed, the Saturday Paper’s pages remain, to date, devoid of any comment.”

Explaining this, John van Tiggelen, a former editor of The Monthly (another Schwartz publication) said: “I mean, it’s seen as a Left-wing publication, but the publisher is very Right-wing on Israel […] And he’s very much to the, you know, Benjamin Netanyahu end of politics. So, you can’t touch it; just don’t touch it. It’s a glass wall.”

Australian media are very touchy about Israel. One of the country’s better writers, Mike Carlton, lost a plum job with the former Fairfax Media — now absorbed into the publishing and broadcasting firm, Nine Entertainment — when he criticised Israel over one of its attacks on Gaza.

And some supporters of Israel in Melbourne are quite powerful. Fairfax had – and still has – a rather juvenile columnist named Julie Szego. When one of her columns was rejected by the then editor, Paul Ramadge (the staff used to say of him, “Ramadge rhymes with damage”), she ran to Fairfax board member Mark Leibler and requested him to intervene. Hey presto, the column was published.

Of course, it is the prerogative of an editor or owner to keep out what he/she does not want published. But if one is given to describing one’s publication as a newspaper and then ignores one of the world’s major issues, then one’s credibility does tend to suffer.

Planet DebianAntoine Beaupré: A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over five releases, not bad!

But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays.

So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels like it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically).

(And before you ask, no, Java is not an option for me thanks. If I switch to anything else than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago and was excited by none.)

So there you have it: that is how software dies, I guess. Alternatives include:

  • Chessclock - a really old Ruby program, which prompted the Gameclock rename
  • Ghronos - also a really old Java app
  • Lichess - has a chess clock built into the app
  • Otter - if you squint a little

PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

Planet DebianJoey Hess: here's your shot

The nurse releases my shoulder and drops the needle in a sharps bin, slaps on a smiley bandaid. "And we're done!" Her cheeriness seems genuine but a little strained. There was a long line. "You're all boosted, and here's your vaccine card."

Waiting out the 15 minutes in observation, I look at the card.

Moderna COVID-19/22 vaccine booster
3/21/2025              lot #5829126

  🇺🇸 NOT A VACCINE PASSPORT 🇺🇸

(Tear at perforated line.)
- - - - - - - - - - - - - - - - - -

Here's your shot at
$$ ONE HUNDRED MILLION $$

       Scratch
       and win

I bite my nails, when I'm not wearing this mask. So I scrub ineffectively at the grainy silver box. Not like the woman across from me, three kids in tow, who's zipping through her sheaf of scratchers.

The message on mine becomes clear: 1 month free Amazon Prime

Ah well.


Planet DebianThomas Goirand: Puppet and OS detection

As you may know, Puppet uses “facter” to get facts about the machine it is about to configure. That’s fine, and a nice concept. One can later use variables in a Puppet manifest to do different things depending on what facter reports. For example, the operating system name … oh no! This thing is really stupid … Here’s the code one has to write to be compatible with Puppet from version 3 up to 5:

if $::lsbdistcodename == undef {
  # This works around differences between facter versions
  if $facts['os']['lsb'] != undef {
    $distro_codename = $facts['os']['lsb']['distcodename']
  } else {
    $distro_codename = $facts['os']['distro']['codename']
  }
} else {
  $distro_codename = downcase($::lsbdistcodename)
}

Indeed, the global variable $::lsbdistcodename still existed up to Stretch (and is gone in Buster). The global $::facts wasn’t an array before (but a hash), so in Jessie it breaks with the error message “facts is not a hash or array when accessing it with os”. So, one needs the full code above to make this work.

It’s OK to improve things. It is NOT OK to break OS detection. To me this is very bad practice from the upstream Puppet authors. I’m publishing this in the hope of helping others avoid falling into the same trap I did.

Planet DebianMatthew Garrett: An accidental bootsplash

Back in 2005 we had Debconf in Helsinki. Earlier in the year I'd ended up invited to Canonical's Ubuntu Down Under event in Sydney, and one of the things we'd tried to design was a reasonable graphical boot environment that could also display status messages. The design constraints were awkward - we wanted it to be entirely in userland (so we didn't need to carry kernel patches), and we didn't want to rely on vesafb[1] (because at the time we needed to reinitialise graphics hardware from userland on suspend/resume[2], and vesa was not super compatible with that). Nothing currently met our requirements, but by the time we'd got to Helsinki there was a general understanding that Paul Sladen was going to implement this.

The Helsinki Debconf ended up being an extremely strange event, involving me having to explain to Mark Shuttleworth what the physics of a bomb exploding on a bus were, many people being traumatised by the whole sauna situation, and the whole unfortunate water balloon incident, but it also involved Sladen spending a bunch of time trying to produce an SVG of a London bus as a D-Bus logo and not really writing our hypothetical userland bootsplash program, so on the last night, fueled by Koff that we'd bought by just collecting all the discarded empty bottles and returning them for the deposits, I started writing one.

I knew that Debian was already using graphics mode for installation despite having a textual installer, because they needed to deal with more complex fonts than VGA could manage. Digging into the code, I found that it used BOGL - a graphics library that made use of the VGA framebuffer to draw things. VGA had a pre-allocated memory range for the framebuffer[3], which meant the firmware probably wouldn't map anything else there and hitting those addresses probably wouldn't break anything. This seemed safe.

A few hours later, I had some code that could use BOGL to print status messages to the screen of a machine booted with vga16fb. I woke up some time later, somehow found myself in an airport, and while sitting at the departure gate[4] I spent a while staring at VGA documentation and worked out which magical calls I needed to make to have it behave roughly like a linear framebuffer. Shortly before I got on my flight back to the UK, I had something that could also draw a graphical picture.

Usplash shipped shortly afterwards. We hit various issues - vga16fb produced a 640x480 mode, and some laptops were not inclined to do that without a BIOS call first. 640x400 worked basically everywhere, but meant we had to redraw the art because circles don't work the same way if you change the resolution. My brief "UBUNTU BETA" artwork that was me literally writing "UBUNTU BETA" on an HP TC1100 shortly after I'd got the Wacom screen working did not go down well, and thankfully we had better artwork before release.

But 16 colours is somewhat limiting. SVGALib offered a way to get more colours and better resolution in userland, retaining our prerequisites. Unfortunately it relied on VM86, which doesn't exist in 64-bit mode on Intel systems. I ended up hacking the X.org x86emu into a thunk library that exposed the same API as LRMI, so we could run it without needing VM86. Shockingly, it worked - we had support for 256 colour bootsplashes in any supported resolution on 64 bit systems as well as 32 bit ones.

But by now it was obvious that the future was having the kernel manage graphics support, both in terms of native programming and in supporting suspend/resume. Plymouth is much more fully featured than Usplash ever was, but relies on functionality that simply didn't exist when we started this adventure. There's certainly an argument that we'd have been better off making reasonable kernel modesetting support happen faster, but at this point I had literally no idea how to write decent kernel code and everyone should be happy I kept this to userland.

Anyway. The moral of all of this is that sometimes history works out such that you write some software that a huge number of people run without any idea of who you are, and also that this can happen without you having any fucking idea what you're doing.

Write code. Do crimes.

[1] vesafb relied on either the bootloader or the early stage kernel performing a VBE call to set a mode, and then just drawing directly into that framebuffer. When we were doing GPU reinitialisation in userland we couldn't guarantee that we'd run before the kernel tried to draw stuff into that framebuffer, and there was a risk that that was mapped to something dangerous if the GPU hadn't been reprogrammed into the same state. It turns out that having GPU modesetting in the kernel is a Good Thing.

[2] ACPI didn't guarantee that the firmware would reinitialise the graphics hardware, and as a result most machines didn't. At this point Linux didn't have native support for initialising most graphics hardware, so we fell back to doing it from userland. VBEtool was a terrible hack I wrote to try to re-execute the system's graphics hardware through a range of mechanisms, and it worked in a surprising number of cases.

[3] As long as you were willing to deal with 640x480 in 16 colours

[4] Helsinki-Vantaan had astonishingly comfortable seating for the time


Kevin RuddABC NewsRadio: Earth Day Summit

E&OE TRANSCRIPT
RADIO INTERVIEW
ABC NEWSRADIO
23 APRIL 2021

Topics: US climate summit; Murdoch Royal Commission

Thomas Oriti
Leaders of more than 40 countries have held a global summit overnight on the world’s response to climate change. They spoke of the urgent need to save the planet from global warming and talked of a jobs boom in the coming years from clean energy technologies. It was hosted by the US President Joe Biden. The US made a commitment to reduce carbon emissions by 50% by the year 2030. The UK says it will cut emissions by 75% by 2035. But let’s look at the Australian perspective. Before the summit began, Australia announced it would not be changing its commitment to a 26-28% reduction by the turn of the next decade. Now Kevin Rudd is a former Australian Prime Minister and president of the Asia Society in New York who joins us live now. Mr Rudd, good morning.

Kevin Rudd
Good morning.

Thomas Oriti
Thank you for your time. You have attended similar high-level climate summits in the past. What kind of standing does Australia have with no new commitments overnight?

Kevin Rudd
A deeply diminished standing is the honest response to that, and that’s a reflection of the views of governments around the Western world and frankly in the emerging world as well. Australia can and should do more. And it’s not simply a question of political atmospherics here; there’s basic science involved in this. If we are to keep global temperature increases, on average, to around 1.5 degrees centigrade by the end of this century, then what it means is we have to move to carbon neutrality by mid-century. To get to carbon neutrality by mid-century, we’ve got to radically reduce our carbon emissions before 2030 with new targets. Other countries have done that. Australia has not.

Thomas Oriti
But Scott Morrison would argue that he is doing something. I mean, over the last two days, we’ve seen a combined $1 billion investment in clean technology. And he said to the summit that his government’s focus is a technology-driven approach to mitigating emissions, saying reaching net-zero is based on the how and not the when. I mean, what do you make of that sort of approach, focusing on technology?

Kevin Rudd
Well that’s Mr Morrison catering to his own domestic political constituency rather than an act of appropriate international leadership by the Prime Minister of Australia at a major global summit to bring about real carbon reductions. The bottom line is the planet doesn’t wait for Mr Morrison to say ‘well, hydrogen will come on stream in X year and coal reduction targets will come on-stream in Y year’. The reason why the international community, led by the United States in what has been a remarkably successful summit minus Australia, is talking about mid-century carbon neutrality, and new nationally determined contributions between now and 2030, is to make the mathematics and the science stack up to keep temperature increases within 1.5 degrees. What we’ve heard from Mr Morrison instead is frankly just a bunch of politically driven posturing which doesn’t add up and which I think is treated with contempt in the international community, which is why he was heard making his contribution so far down the batting order.

Thomas Oriti
OK well we look at the international community. American officials are reportedly dissatisfied with Australia’s approach. The Biden administration has said it will try to pressure other countries to do more. I mean, how much of an impact do you think that could have on the Morrison government?

Kevin Rudd
Well so far, if Morrison was to work out that the United States as our principal ally, who we need in multiple areas of our international policy interest, is making this demand clear of the Australian Government, he really does need to begin to adjust now – in fact, if not yesterday. But if that persuasion doesn’t work, there’s something else rolling down the railway tracks towards Australia, which is so-called border adjustment tariffs, now being actively debated, deliberated on and decided both in Brussels and considered also in Washington to effectively impose a tax on those countries which refuse to take their share of the global burden in bringing down carbon emissions. So if it’s not going to be, as it were, inducement from the US through our alliance relationship with Washington, then there is the threat of punitive financial action which would affect the entire Australian economy. But you know something? Australia as a responsible middle power in the world, and as the driest continent on Earth, for God’s sake, we should be acting as the global leaders here, not the global wooden-spooners.

Thomas Oriti
You wrote an opinion piece in The Guardian this week, Mr Rudd, with another former prime minister, Malcolm Turnbull, about how Australia’s ambition on climate change is held back by what you’re saying is a toxic mix of right-wing politics, media, and vested interests. I want to pick up on that last bit. Who are these vested interests and what’s their role?

Kevin Rudd
Well this has become a matter of political raw red meat for the Liberal Party and the National Party to go and chant the coal mantra. That’s one element of it; it’s part of the internal dynamics of the Liberal and National parties. Secondly, I didn’t say the media, I said the Murdoch media, and the Murdoch media has run — and Malcolm Turnbull agrees with me — a vicious campaign against effective climate change action in this country for more than a decade now. And because of their power in the print media in this country, where they have 70% of the print ownership, they have shaped and influenced significantly the terms of our national debate. And the third element in all this, of course, is our own big hydrocarbon companies, led by companies like BHP, which have been dragging the chain on this for a very long time. Put those three together, plus the hydrocarbon lobby’s trade union, which is the Minerals Council of Australia, and this represents a very powerful, potent force in Australian politics, which I’ve had to contend with as prime minister, and they ultimately prevailed against me; which Malcolm Turnbull had to work against when he was Prime Minister, and they prevailed against him. Frankly, what is being lost as a consequence of this is effective, clear Australian international leadership on something which matters for our environment and economy for the future.

Thomas Oriti
Just to pick up on something you said a moment ago about the Murdoch media, Mr Rudd: the former US Director of National Intelligence, James Clapper, has backed your call for a royal commission into Rupert Murdoch’s media empire here in Australia. What do you make of that support, and where are you at with your petition at the moment?

Kevin Rudd
Well, as a result of our petition, which attracted more than half a million signatures within 28 days across Australia — and because the system collapsed, we suspect hundreds of thousands of petitioners in addition to that — the Senate decided to commission its own investigation into the future of media diversity. It continues to take evidence from myself, Malcolm Turnbull and others, including the media proprietors, on what we do about the future of this extraordinary monopoly which the Murdoch media has in Australia. It is the highest concentration of print media ownership anywhere in the Western world. Now, when Jim Clapper intervenes as the former director of national intelligence in the United States, what Clapper is saying is that the impact of Murdoch there in America, where he is not a majority player but is an aggressive player through Fox News, is that, untrammelled, this Fox media beast has significantly derailed the potential for consensus in American politics, not just on climate change, but across a whole range of pressing challenges facing the United States. So he’s sending a clarion-clear message that if we’re going to have Fox News exercise that sort of influence in Australia, through Sky News, which is now having a huge impact across social media platforms and YouTube, then our country prospectively becomes ungovernable, like the United States has become in large part in recent years.

Thomas Oriti
Kevin Rudd, thanks very much for joining us this morning.

Kevin Rudd
Good to be with you.

Thomas Oriti
Former Australian Prime Minister Kevin Rudd, who is the president of the Asia Society in New York.

The post ABC NewsRadio: Earth Day Summit appeared first on Kevin Rudd.

Worse Than FailureError'd: When Words Collide

Waiting for his wave to collapse, surfer Mike I. harmonizes "Nice try Power BI, but you're not doing quantum computing quite right." Or, does he?

[image: schrodinger]

Finn Antti, caught in a hella Catch-22, explains " Apparently I need to install Feedback Hub from Microsoft Store to tell Microsoft that I can't install Feedback Hub from Microsoft Store."

[image: microsoft]

Our old friend Pascal shares "Coupon codes don't work very well when they are URL encoded."

[image: homedepot]

Uninamed pydsigner has a strong meme game. "It's bad enough when your fairly popular meme creation site runs out of storage, but to be unable to serve your pictures as a result? The completely un-obfuscated stacktrace is just insult to the injury. "

[image: stack]

But the submission from Brad W. wins this week's prize. Says he: "The vehicle emissions site (linked directly from the state site) isn't handling the increased traffic well, but their error handling is superb. An online code browser allowing for complete examination of the entire stack and surroundings."

[image: emissions]



Planet DebianDirk Eddelbuettel: drat 0.2.0: Now with ‘docs/’

[image: drat user]

A new release of drat arrived on CRAN today. This is the first release in a few months (with the last release in July of last year) and it (finally) makes the leap to supporting docs/ in the main branch as we are all so tired of the gh-pages branch. We also have new vignettes, new (and very shiny) documentation and refreshed vignettes!

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Or as we may now add: stay away from semi-random universe snapshots too.

Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by (now) six vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.0 (2021-04-21)

  • A documentation website for the package was added at https://eddelbuettel.github.io/drat/ (Dirk)

  • The continuous integration was switched to using ‘r-ci’ (Dirk)

  • The docs/ directory of the main repository branch can now be used instead of gh-pages branch (Dirk in #112)

  • A new repository https://github.com/drat-base/drat can now be used to fork an initial drat repository (Dirk)

  • A new vignette “Drat Step-by-Step” was added (Roman Hornung and Dirk in #117 fixing #115 and #113)

  • The test suite was refactored for docs/ use (Felix Ernst in #118)

  • The minimum R version is now ‘R (>= 3.6)’ (Dirk fixing #119)

  • The vignettes were switched to minidown (Dirk fixing #116)

  • A new test file was added to ensure ‘NEWS.Rd’ is always at the current release version.

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kevin RuddAFR: Mining Super-Profits Levy

[image: The University of Western Australia]


MEDIA STATEMENT
21 APRIL 2021

Published in the Australian Financial Review on 22 April 2021

“As was the case during the last resources boom, and the one before that, the super-profits earned by a handful of resource majors in this country are a giant rip-off of the Australian people.

“Furthermore, the greed of these three is unbelievable: they haven’t even bothered to establish serious, large scale charitable foundations to benefit the Australian people at the scale that other serious global firms do. And in Rio’s case, they dynamite indigenous heritage on the way through.

“I fully understand the financial investment needed for long term projects. But nowhere in their long term financial planning did any company forecast prices at this level. That’s why the Australian people, who actually own these resources and merely lease them to these companies, deserve a higher return.

“That’s why I believe these three majors should pay a super-profits levy into a national investment fund to underpin the future of Australian higher education and research, because this is the sector that will need to generate the next tranche of national wealth. We need Australian equity in the global technology revolution now underway, where we are in danger of owning none of the intellectual property and assets that will drive future global growth.”

Ends

The post AFR: Mining Super-Profits Levy appeared first on Kevin Rudd.

Planet DebianRussell Coker: HP ML350P Gen8

I’m playing with an HP ProLiant ML350P Gen8 server (part num 646676-011). For HP servers “ML” means tower (see the ProLiant Wikipedia page for more details [1]). For HP servers the “generation” indicates how old the server is; Gen8 was announced in 2012 and Gen10 seems to be the current generation.

Debian Packages from HP

wget -O /usr/local/hpePublicKey2048_key1.pub https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
echo "# HP RAID" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/hpePublicKey2048_key1.pub] http://downloads.linux.hpe.com/SDR/downloads/MCP/Debian/ buster/current non-free" >> /etc/apt/sources.list

The above commands will setup the APT repository for Debian/Buster. See the HP Downloads FAQ [2] for more information about their repositories.

hponcfg

This package contains the hponcfg program that configures ILO (the HP remote management system) from Linux. One noteworthy command is “hponcfg -r” to reset the ILO, something you should do before selling an old system.

ssacli

This package contains the ssacli program to configure storage arrays, here are some examples of how to use it:

# list controllers and show slot numbers
ssacli controller all show
# list arrays on controller identified by slot and give array IDs
ssacli controller slot=0 array all show
# show details of one array
ssacli controller slot=0 array A show
# show all disks on one controller
ssacli controller slot=0 physicaldrive all show
# show config of a controller, this gives RAID level etc
ssacli controller slot=0 show config
# delete array B (you can immediately pull the disks from it)
ssacli controller slot=0 array B delete
# create an array type RAID0 with specified drives, do this with one drive per array for BTRFS/ZFS
ssacli controller slot=0 create type=arrayr0 drives=1I:1:1

When a disk is used in JBOD mode just under 33MB will be used at the end of the disk for the RAID metadata. If you have existing disks with a DOS partition table you can put them in an HP array as JBODs and they will work with all data intact (a GPT partition table is more complicated). When all disks are removed from the server the cooling fans run at high speed; this would be annoying if you wanted to have a diskless workstation or server using only external storage.

ssaducli

This package contains the ssaducli diagnostic utility for storage arrays. The SSD “wear gauge report” doesn’t work for the 2 SSDs I tested it on; maybe it only supports SAS SSDs, not SATA SSDs. It doesn’t seem to do anything that I need.

storcli

This package contains both 32bit and 64bit versions of the MegaRAID utility and deletes whichever one doesn’t match the installation in the package postinst, so it fails debsums checks etc. The MegaRAID utility is for a different type of RAID controller from the “Smart Storage Array” (AKA SSA) that the other utilities work with. As an aside, it seems that there are multiple types of MegaRAID controller; the management program from the storcli package doesn’t work on a Dell server with MegaRAID. They should have made separate 32bit and 64bit versions of this package.

Recommendations

Here is the HP page for downloading firmware updates (including security updates) [3]; you have to log in first and have a warranty. This is legal but poor service. Dell servers have comparable prices (on the second hand market) and comparable features but give free firmware updates to everyone. Dell have overall lower quality Debian packages for supporting utilities, but a wider range of support, so generally Dell support seems better in every way. Dell and HP hardware seems of equal quality so overall I think it’s best to buy Dell.

Suggestions for HP

Finding which of the signing keys to use is unreasonably difficult. You should get some HP employees to sign the HP keys used for repositories with their personal keys and then go to LUG meetings and get their personal keys well connected to the web of trust. Then upload the HP keys to the public key repositories. You should also use the same keys for signing all versions of the repositories. Having different keys for the different versions of Debian wastes people’s time.

Please provide firmware for all users, even if they buy systems second hand. It is in your best interests to have systems used long-term and have them run securely. It is not in your best interests to have older HP servers perform badly.

Having all the fans run at maximum speed when power is turned on is a standard server feature. Some servers can throttle the fan when the BIOS is running, it would be nice if HP servers did that. Having ridiculously loud fans until just before GRUB starts is annoying.

Worse Than FailureCodeSOD: Saved Changes

When you look at bad code, there's a part of your body that reacts to it. You can just feel it, in your spleen. This is code you don't want to maintain. This is code you don't want to see in your code base.

Sometimes, you get that reaction to code, and then you think about the code, and say: "Well, it's not that bad," but your spleen still throbs, because you know if you had to maintain this code, it'd be constant, low-level pain. Maybe you ignore your spleen, because hey, a quick glance, it doesn't seem that bad.

But your spleen knows. A line that seems bad, but mostly harmless, can suddenly unfurl into something far, far nastier.

This example, from Rikki, demonstrates:

private async void AttemptContextChange(bool saveChanges = true)
{
    if (m_Context != null)
    {
        if (saveChanges && !SaveChanges())
        {
            // error was already displayed to the user, just stop
        }
        else
        {
            dataGrid.ItemSource = null;
            m_Context.Dispose();
        }
    }
}

if (saveChanges && !SaveChanges()) is one of those lines that crawls into your spleen and just sits there. My brain tried to say, "oh, this is fine, SaveChanges() probably is just a validation method, and that's why the UI is already up to date, it's just a bad name, it should be CanSaveChanges()" . But if that's true, where does it perform the actual save? Nowhere here. My brain didn't want to see it, but my spleen knew.

If you ignore your spleen and spend a second thinking, it more or less makes sense: saveChanges (the parameter) is a piece of information about this operation- the user would like to save their changes. SaveChanges() the method probably attempts to save the changes, and returns a boolean value if it succeeded.

But wait, returning boolean values isn't how we communicate errors in a language like C#. We can throw exceptions! SaveChanges() should throw an exception if it can't proceed.
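
For what it's worth, here is a minimal sketch of that exception-based shape. It's in Python rather than the article's C#, and every name in it (SaveError, FakeContext, and friends) is hypothetical, but it shows the pattern: a failed save raises, the caller reports the error and stops, and nothing gets disposed behind the user's back.

class SaveError(Exception):
    """Raised when pending changes cannot be persisted."""


class FakeContext:
    """Stand-in for the article's m_Context; purely illustrative."""

    def commit(self):
        raise IOError("disk full")  # simulate a failing save

    def dispose(self):
        print("context disposed")


def save_changes(context):
    # Raise instead of returning False, so the caller cannot silently
    # swallow a failed save.
    try:
        context.commit()
    except IOError as exc:
        raise SaveError(f"could not save changes: {exc}") from exc


def attempt_context_change(context, save=True):
    if context is None:
        return
    if save:
        try:
            save_changes(context)
        except SaveError as exc:
            print(f"error shown to user: {exc}")  # stand-in for a UI dialog
            return  # stop, as the original code intends
    context.dispose()


attempt_context_change(FakeContext())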

Which, speaking of exceptions, we need to think a little bit about the comment: // error was already displayed to the user, just stop.

This comment contains a lot of information about the structure of this program. SaveChanges() attempts to do the save, it catches any exceptions, and then does the UI updates, completely within its own flow. That simple method call conceals a huge amount of spaghetti code.

Sometimes, code doesn't look terrible to your brain, but when you feel its badness in your spleen, listen to it. Spleen-oriented Programming is where you make sure none of the code you have to touch makes your spleen hurt.


Planet DebianShirish Agarwal: The Great Train Robbery

I had a Twitter fight a few days back with a gentleman, and this article is a result of that fight. Sadly, I do not know the name of the gentleman as he goes by a pseudonym, and in any case I have not taken permission from him to quote him in any way. So I will just state the observations I was able to make from the conversations we had. As people who read this blog regularly would know, I am and have been against the Railway privatization which is happening in India. I will be sharing some of the case studies from other countries as to how it panned out for them.

UK Railways


How Privatization Fails: Railways

The above video is by a gentleman called Shaun, who basically shared that privatization, as far as the UK is concerned, is nothing but monopolies, and while there are complex reasons for that, the design of the railways is such that it will always be a monopoly structure. At most what you can do is have several monopolies, but that is all that can happen. The idea of competition just cannot happen. Even the idea that subsidies will be lower and/or trains will run on time is far from fact. Both of these facts have been checked and found to be truthful by fullfact.org. It is argued that the UK is small and perhaps it doesn’t have the right conditions. That is probably true, but we still deserve a glance at the UK railway map.

UK railway map with operators

The above map is copyrighted to Map Marketing, where you can see it today. As can be seen above, most companies had their own specified areas. Now if you look at the facts, you will see that UK fares have been higher. In fact, an oldish article from Metro (a UK publication) shares the same. In fact, the UK effectively nationalized its railways, as many large rail operators were running in the red. Even Scotland is set to nationalise its railways in March 2022. Remember, this is a country which hasn’t seen inflation go upwards of 5% in nearly a decade; the only outlier was 2011, when they indeed breached the 5% mark. So from this, the phrase ‘Private Gains, Public Losses’ perhaps seems fitting. But then maybe we didn’t use the right example. Perhaps Japan would be better. They have bullet trains while the UK is still thinking about it (HS2).

Japanese Railway

Below is the map of Japanese Railway

Railway map of Japan with ‘private ownership’ – courtesy Wikimedia commons

Japan started privatizing its railway in 1987, and to date it has not been fully privatized. On top of that, as much as ¥24 trillion of the long-term JNR debt was shouldered by the government at the expense of Japan’s taxpayers, while almost a quarter of the employees were shed. To add to it, while some parts of the Japanese railways did make profits, many of them did so through large-scale non-railway business, mostly real estate on land adjacent to railway stations. In many cases, it seems this went all the way up to 60% of the revenue. The most profitable has been the Shinkansen, though. And while it has been profitable, it has not been without safety scandals over the years; the biggest in recent years was the 2005 Amagasaki derailment. What was interesting to me was the aftermath: while the Wikipedia page doesn’t share much, I had read at the time (and it could probably still be found) how a lot of ordinary people stood up to the companies, in a country where it is a known fact that most companies are owned by the Yakuza. And this is a country where people are loyal to their corporation or company no matter what. It is a culture strange to the West and also to us here in India, where people change jobs at the drop of a hat, although nowadays we have record unemployment. So perhaps Japan too does not meet our standard, as the companies do not compete with each other; each is a set monopoly in its region. Also, how much subsidy there is or isn’t is not really transparent.

U.S. Railways

Last, but not least, I share the U.S. railway map. This was provided by a Mr. Tom Alison on Reddit, on the channel maporn. As the thread itself is archived and I do not know the gentleman concerned, nor have I taken permission for the map, I am sharing the compressed version –


U.S. Railway lines with the different owners

Now, the U.S. railway network is and has always been peculiar, as unlike the above two, it has always been more of a freight network. Probably much of it has to do with the fact that in the 1960s, when oil was cheap, the U.S. built zillions of roadways and romanticized the ‘road trip’, and it has been doing so ever since. Also, the creation of low-cost airlines definitely didn’t help the railways to have more passenger services; in fact, the opposite.

There are and have been smaller services and attempts at privatization in both New Zealand and Australia, and both have been failures. Please see the papers in that regard. My simple point is this: as can be seen above, there have been various attempts at privatization of railways, and most of them have been a mixed bag. The only one which comes close to what we would call good is the Japanese one, but that also used a lot of public debt, and we don’t know what will happen with that next. Also, for higher-speed train services like a bullet train or whatever, you need direct routes, with no hairpin bends. In fact, a good talk on the topic is the TBD podcast, which, while it talks about hyperloop, raises the same questions that would be asked if we were to do this in India. Another thing to be kept in mind is that the Japanese have been exceptional builders, and this is because they have been forced to be. They live in a seismically active zone, which made the Fukushima disaster a reality, but at the same time their buildings are earthquake-resistant.

Standard Disclaimer – The above is a simplified version of things. I could have added in financial accounts, but that again has no set pattern; for example, some railways use accrual, some use cash and some use a hybrid. I could have also gone into either gauge or electrification, but all have slightly different standards, although uni-gauge is something that all railways aspire to, and electrification is again something that all railways want, although in many cases it just isn’t economically feasible.

Indian Railways

Indian Railways itself made the move from cash to accrual accounting a couple of years back; in between, for a couple of years, it was hybrid. The sad part is and was that you can now never measure against past performance in the old way because the system is so different. Hence, whether the Railways will be making a loss or a profit, we will come to know only much later. Also, most accountants don’t know the new system well, so it is going to take more time; how much is unknown. Sadly, what the GOI did a few years back is merge the Railway budget into the Union Budget. Of course, the excuse they gave is the pressure of too many new trains, while the truth is that by doing this they decreased transparency about the whole thing. For example, for the last few years, the only state which has had significant work being done is U.P. (Uttar Pradesh), and a bit in Goa, although that has been protested time and again. I am from the neighbouring state of Maharashtra, and have been there several times. Now it all feels like a dream, going to Goa :(.

Covid news

Now before I jump to the news, I should share the movie ‘Virus’ (2019), which was made by the talented Aashiq Abu. Even though I am not a Malayalee, I have still enjoyed many of his movies, simply because he is a terrific director and Malayalam movies, at least most of them, have English subtitles and a lot of original content. Interestingly, it hit differently this time from the first couple of times I saw it a couple of years back. The first time I saw it, I couldn’t sleep a wink for a week. Even the next time, it was heavy. I had shared the movie with mum, and even she couldn’t see it in one go. It is and was that powerful. Maybe that is because we are now headlong in the pandemic, and the madness is all around us. There are two terms that helped me understand a great deal of what is happening in the movie. The first term was ‘altered sensorium’, which has been defined here. The other is saturation, or to be more precise ‘oxygen saturation’. This term has also entered the Indian Twitter lexicon quite a bit as India has started running out of oxygen. Just today the Delhi High Court held an emergency hearing on the subject late at night. Although there is much to share about the mismanagement of the centre, the best piece on the subject has been by Miss Priya Ramani. Yup, the same lady who won against M.J. Akbar, and this when Mr. Akbar had 100 lawyers for this specific case. It would be interesting to see what happens ahead.

There are, however, a few things even she forgot in her piece. For example, reverse migration, i.e. migration from urban to rural areas, started again. Two articles from different entities share a similar outlook. Sadly, the right have no empathy or feeling for either the poor or the sick. There was even the labour minister Santosh Gangwar’s statement that around 1.04 crore people were the only ones who walked back home. While there is not much data, some work/research has been done on migration which suggests that the number could easily be 10 times as much. And this was in the lockdown of last year. This year the same issue has re-surfaced again, and migrants, having learned their lesson, started leaving the cities. And I’m ashamed to say I think they are doing the right thing. Most state governments have not learned lessons, nor have they done any work to earn the trust of migrants. This is true of almost all state governments. Last year, just before the lockdown was announced, my friend and I spent almost 30k getting a cab all the way from Chennai to Pune, between what we paid for the cab and what we bribed the various people just so we could cross the state borders to return home to our anxious families. Thankfully, unlike the migrants, we were better off, although we did make a loss. I probably wouldn’t be alive if I were in their situation, as many didn’t survive. That number of ‘undocumented deaths’ is still up in the air 😦

Vaccine issues

Currently, though, the issue has been the vaccine and its pricing. A good article summarizing the issues has been shared by The Economist. Another article that goes to the heart of the issue is at Scroll. To buttress the argument, the SII chairman had shared this a few weeks back –

Adar Poonawala talking to Vishnu Som on Left, right center, 7th April 2021.

So, a licensee manufacturer wants to make super-profits during the pandemic. And now, as shared above, they can very easily do it. Even the quotes given to nearby countries are smaller than the quotes given to Indian states –

Prices of AstraZeneca among various states and countries.

The situation around beds, vaccines, oxygen, anything is so dire that people will go to any lengths to save their loved ones, even if they know a certain medicine doesn’t work. Take Remdesivir: WHO trials have concluded that it doesn’t reduce mortality. Heck, even the AIIMS chief said the same. But the desperation of both doctors and relatives to cling to hope has made Remdesivir a black market drug, with unofficial prices hovering anywhere between INR 14k/- and INR 30k/- per vial. One of the executives of a top firm was also arrested in Gujarat. In Maharashtra, an opposition M.P. came to the ‘rescue’ of the officials of Bruick pharms in Mumbai.

Sadly, this strange affliction for the party at the centre is also there in my extended family. On the one hand, they will heap praise on Mr. Modi; at the same time they can’t wait to get out of India fast. Many of them have settled in, horror of horrors, Dubai, as it is the best place to do business and get international schools for the young ones at decent prices, cheaper than or maybe a tad more than what they paid in Delhi or elsewhere. Being an Agarwal or a Gupta makes it easier to compartmentalize both things. Ease of doing business: 5 days flat to get a business registered, up and running. And the paranoia is still there. They won’t talk on the phone about him because they are afraid they may say something which comes back to bite them. As far as their decision to migrate goes, I can’t really blame them. If I were 20-25 years younger and my mum were in better shape than she is, we probably would have migrated as well, although we would have preferred Europe over anywhere else.

Internet Freedom and Aarogya Setu App.


Internet Freedom had shared the chilling effects of the Aarogya Setu App. This had also been shared by FSCI in the past, which recently had its handle banned on Twitter. The chilling effect was also apparent in a bail order a high court judge gave. While I won't go into the merits and demerits of the bail order, it is astounding for the judge to say that the accused, even while out on bail, must install the app so he can be surveilled. And this is a high court judge; such a sad state of affairs. We seem to be setting new lows every day when it comes to judicial jurisprudence. One interesting aspect of the whole case was shared by Aishwarya Iyer. She shared a story that she and her team worked on at The Quint which raises questions about the quality of the work done by the Delhi Police. It is, of course, up to the Delhi Police to ascertain the truth of the matter, because unless and until they are able to tie in the PMO's office or POTUS's office for a leak, it hardly seems possible. For example, the dates when two heads of state can meet each other would be decided by the secretaries of the two. Once the date is known, it would be shared with the press, while at the same time some sort of security apparatus would kick into place. It is incumbent, especially on the host, to take as much care as he can of the guest. We all remember that World War 1 (the war to end all wars) started due to the murder of Archduke Ferdinand.

As nobody wants that, the best way is to make sure that a political murder doesn't happen on your watch. While I won't comment on exactly what the arrangements would be, it would be safe to assume z+ security along with heightened readiness, especially for somebody as important as POTUS. Now, it would be quite a reach for the Delhi Police to connect the two dates. They will either have to get creative with the dates or find some other way; otherwise, with practically no knowledge in the public domain, they can't work in limbo. In either case, I do hope the case comes up for hearing soon and we see what the Delhi Police says and contends in the High Court about the same. At the very least, it would be irritating for them to talk of the dates unless they can contend some mass conspiracy which involves the PMO (and that would bring into question the constant vetting done by the Intelligence dept. of all those who work in the PMO). And this whole case seems to be a kind of shelter for the Delhi riots, in which mostly Muslims died but whose deaths lie unaccounted for till date 😦

Conclusion

In conclusion, I would like to share a bit of humour, because right now the atmosphere is humourless, what with the authoritarian tendencies of the Central Govt. and the mass mismanagement of public health, which they have now left to the states to handle as they see fit. The piece I am sharing is from Arre, one of my go-to sites whenever I feel low.

,

Planet DebianEnrico Zini: Python output buffering

Here's a little toy program that displays a message like a split-flap display:

#!/usr/bin/python3

import sys
import time

def display(line: str):
    cur = '0' * len(line)
    while True:
        print(cur, end="\r")
        if cur == line:
            break
        time.sleep(0.09)
        cur = "".join(chr(min(ord(c) + 1, ord(oc))) for c, oc in zip(cur, line))
    print()

message = " ".join(sys.argv[1:])
display(message.upper())

This only works if the script's stdout is unbuffered. Pipe the output through cat, and you get a long wait, and then the final string, without the animation.

What is happening is that since the output is not going to a terminal, optimizations kick in that buffer the output and send it in bigger chunks, to make processing bulk I/O more efficient.
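
As an aside (my addition, not part of the original post), one quick way to check which situation the script is running in is to inspect the stream itself, for example:

import sys

# isatty() is True when stdout is attached to a terminal; line_buffering reflects
# how the text wrapper was set up. When piped through cat, isatty() is typically False.
print("isatty:", sys.stdout.isatty(), file=sys.stderr)
print("line_buffering:", sys.stdout.line_buffering, file=sys.stderr)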

I haven't found a good introductory explanation of buffering in Python's documentation. The details seem to be scattered in the io module documentation and they mostly assume that one is already familiar with concepts like unbuffered, line-buffered or block-buffered. The libc documentation has a good quick introduction that one can read to get up to speed.

Controlling buffering in Python

In Python, one can force a buffer flush with the flush() method of the output file descriptor, like sys.stdout.flush(), to make sure pending buffered output gets sent.

Python's print() function also supports flush=True as an optional argument:

    print(cur, end="\r", flush=True)

If one wants to change the default buffering for a file descriptor, since Python 3.7 there's a convenient reconfigure() method, which can reconfigure line buffering only:

sys.stdout.reconfigure(line_buffering=True)

Otherwise, the technique is to reassign sys.stdout to something that has the behaviour one wants (code from this StackOverflow thread):

import io
# Python 3, open as binary, then wrap in a TextIOWrapper with write-through.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)

If one needs all this to implement a progressbar, one should make sure to have a look at the progressbar module first.

Cryptogram On North Korea’s Cyberattack Capabilities

Excellent New Yorker article on North Korea’s offensive cyber capabilities.

Cryptogram Backdoor Found in Codecov Bash Uploader

Developers have discovered a backdoor in the Codecov bash uploader. It’s been there for four months. We don’t know who put it there.

Codecov said the breach allowed the attackers to export information stored in its users’ continuous integration (CI) environments. “This information was then sent to a third-party server outside of Codecov’s infrastructure,” the company warned.

Codecov’s Bash Uploader is also used in several uploaders — Codecov-actions uploader for Github, the Codecov CircleCI Orb, and the Codecov Bitrise Step — and the company says these uploaders were also impacted by the breach.

According to Codecov, the altered version of the Bash Uploader script could potentially affect:

  • Any credentials, tokens, or keys that our customers were passing through their CI runner that would be accessible when the Bash Uploader script was executed.
  • Any services, datastores, and application code that could be accessed with these credentials, tokens, or keys.
  • The git remote information (URL of the origin repository) of repositories using the Bash Uploaders to upload coverage to Codecov in CI.

Add this to the long list of recent supply-chain attacks.

Planet DebianSven Hoexter: bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you

  • use ssl (ok that should be almost everyone)
  • and you would like to execute doveadm as a user, who can not read the ssl cert and keys (quite likely).

I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For myself that manifested in the nginx error log, which I use in combination with php-fpm, as:

2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied

You easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and create a dummy one which disables ssl, and has a !include_try on the real one. Maybe best explained by showing the modification:

cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf

Discussed upstream here.

Kevin RuddThe Guardian: Kevin Rudd and Malcolm Turnbull – Australia’s ambition on climate change is held back by a toxic mix of rightwing politics, media and vested interests

By Kevin Rudd and Malcolm Turnbull

It was always expected that Joe Biden’s election would be a massive shot in the arm for international climate action, but the scale of that boost has been genuinely surprising.

The new president has now invited 40 world leaders to a virtual climate change summit coinciding with Earth Day this Thursday. China’s Xi Jinping will be there, following productive face-to-face talks last week between Biden’s climate envoy, John Kerry, and his Chinese counterpart, Xie Zhenhua, in Shanghai. Even Vladimir Putin is attending, despite divisions between Washington and the Russian leader over new sanctions.

Japan, South Korea and Canada are all expected to announce new medium-term 2030 emissions reduction plans this week, after earlier refusing to do so. Even China – the world’s largest emitter – last week signalled they may also be prepared to do more this decade above and beyond commitments they made at the end of last year.

Our country, however, continues to bury its head in the sand, despite the fact that Australia remains dangerously at risk of the economic and environmental consequences that will come from the climate crisis barrelling towards us.

Prime minister Scott Morrison’s refusal to adopt both a firm timeline to reach net zero emissions and to increase its own interim 2030 target leaves us effectively isolated in the western world. It also goes against what we signed up to through the Paris agreement – which both our governments worked so hard to secure.

According to our independent Climate Change Authority (CCA) and the Australian Energy Market Operator (Aemo), not only should Australia be doing much more as “our fair share” towards global efforts to reduce emissions, but importantly we also now have the capacity to do more.

The reality is Australia’s current target, set in 2015, to reduce emissions by 26 to 28% on 2005 levels by 2030 is now woefully inadequate – and was always intended to be updated this year. The Obama administration had exactly the same target as Australia, but aimed to achieve it five years earlier than us, which in reality made it much more ambitious than ours. And this week, the Biden administration is expected to announce a new 2030 pledge twice as deep as Australia’s current effort. This will set a new global litmus test for Australia’s own ambition, which as the CCA has said should be at least a 45% cut by 2030.

But, as two former prime ministers representing our nation’s centre-left and centre-right parties, the world shouldn’t give up hope on our country just yet. Thankfully, there is some cause for optimism. Our sun-drenched country has the highest per capita penetration of rooftop solar in the world. And with the right approach, Aemo has said that renewables could go from providing a quarter of electricity market demand on our populous eastern seaboard today to 75% in less than five years. The fact we are in a position to even be able to seize this technological opportunity is in large part due to the introduction in 2009 of a 20% clean renewable energy target for 2020 and the launching of the largest renewable clean energy project in our nation’s history (Snowy Hydro 2.0) by our respective governments.

The national consensus for climate action in Australia has also shifted markedly in recent years. Every state and territory government is now committed to net zero emissions, so too are our peak industry, business and agriculture groups, as well as our national airline, and even our largest mining company.

The main thing holding back Australia’s climate ambition is politics: a toxic coalition of the Murdoch press, the right wing of the Liberal and National parties, and vested interests in the fossil fuel sector.

Sadly, instead of seizing this technological opportunity and embracing this newfound national consensus, the government remains hell-bent on a “gas-fired recovery” from Covid-19. Old coal plants still generate around 75% of Australia’s electricity. But these are being replaced by renewables plus storage because they are a cheaper form of generation than the alternatives on offer.

Gas has a role to play in the transition, but that role is to steadily diminish as renewables continue to grow. To bet big on the future role of gas is to bet against the best engineering and economic advice coming out of Aemo, and to ignore the scientific advice that more gas in the grid will simply lead to more emissions. The only long-term gas-fired future we should be planning is green hydrogen made by electrolysing water with renewable energy.

Australia may be able to get away with showing up empty-handed to this week’s summit, but will find it even more difficult to do as a special guest of the British at the G7 leaders’ summit in June. We would be the only developed country in the room that is not committed to net zero by 2050. And we will find it even harder again to show up empty-handed at the COP26 Climate Conference in Glasgow at the end of the year, given more than 100 countries in the world have pledged to increase their ambition.

There are also consequences for this inaction.

As the rolling apocalypse of fires and floods in our country demonstrates, Australia is on the global frontline of this climate crisis. Last year’s wildfires claimed dozens of lives, destroyed thousands of homes, wiped out billions of animals, and cost billions of dollars.

With more than 70% of Australia’s trade now with countries committed to net zero, the prospect of carbon border taxes being introduced – beginning with the European Union – also leaves us economically exposed. So too does our continued faith in coal as a leading export commodity, especially with many of the 50 proposed new coalmines in Australia already struggling to attract finance. Instead of expanding coal, we should be increasing our support for ground-breaking projects such as the Asian Renewable Energy Hub in the Pilbara region which could allow us to become a green hydrogen supplier for Asia’s clean energy transition. There are also promising new hydrogen projects planned for Queensland centred on Gladstone, a traditional coal port. Building dozens of new coalmines won’t set Australia up for the future; it will lock us into the past.

Australians like to think we “punch above our weight” on the global stage. We certainly do when it comes to climate change: we emit more than 40 other countries with larger populations, and our per capita emissions are the highest of any advanced economy. This is not a record we should be proud of at all.

It’s often fatuously claimed that what countries like Australia do makes no difference to the global climate because we account for only about 1.2% of emissions. The reality is that Australia is the third-largest fossil fuel exporter in the world. Our own environment is especially vulnerable to global warming as the recent massive bushfires demonstrated. Our economy is also vulnerable to the transition away from fossil fuels. Denial of the reality of global warming and the need to transition to a prosperous clean energy economy is abandoning our responsibilities as much to Australian workers as it is to the world.

Hopefully, at this week’s summit the prime minister will receive the wake-up call the government needs. In the meantime, the rest of the world should not give up on us yet. If our country’s last decade has demonstrated anything – with five prime ministers in just eight years – it’s that political winds can change very quickly.

Kevin Rudd, from the Australian Labor party, was Australia’s prime minister between 2007 and 2010, and again in 2013. Malcolm Turnbull, from the Liberal party of Australia, was prime minister between 2015 and 2018.

First published in The Guardian


The post The Guardian: Kevin Rudd and Malcolm Turnbull – Australia’s ambition on climate change is held back by a toxic mix of rightwing politics, media and vested interests appeared first on Kevin Rudd.

Worse Than FailureNews Roundup: Single Point of Fun

Let’s quickly recap the past three news roundups:

  1. Flash’s effect on web user experience
  2. Adding every requirement as a feature in a computer*
  3. A terrible UI that cost $900 million

At first glance it appears that poorly thought-through user experience is my sole fascination. But when the Suez Canal blockage story from March kept my full attention for nearly 10 days, I realized that my real fascination is the unintended consequences of poorly thought-through user experiences. Sometimes the poor user experience is minor enough that a new protocol can be developed (in the case of Flash) or an anxiety-inducing technology simply gets made (in the case of the Expanscape).

But when all risks of the current user experience aren’t considered, then there are real financial consequences - just like in the case of the Suez Canal where one ship, the Ever Given, blocked 10% of global trade. The fact so much traffic comes through the canal makes it a very important single point of failure. (In case anyone wasn’t paying attention to global shipping news a few weeks ago, a large container ship piloted itself into the side of the canal. The ship is so famous that it now has its own Wikipedia page, where it’s been reported that the now-unstuck ship has been fined $916 million - $300 million of which is for “loss of reputation”.) So maybe my thesis needs to be amended to: the unintended consequences of poorly thought-through user experiences due to single points of failure. (It’s a mouthful, but it feels right.)

There’s the story of Mizuho Bank, whose ATMs started eating customer cards after some routine data migration work caused country-wide system malfunctions. Single point of failure: The IT team’s risk management process.

There’s the story of Ubiquiti, whose data breach in January was a lot more...relatable after a whistleblower complaint. Single point of failure: Password managers. (They’re not as secure when you leave the front door open.)

The anonymous whistleblower alleges that the statement was written in such a way to imply that the vulnerability was on the third party and that Ubiquiti was impacted by that. Among other things, the whistleblower alleges that the hacker(s) were able to target the system by acquiring privileged credentials from a Ubiquiti employee’s LastPass account.

And then there’s the story of Netflix, who is trying to sever the only remaining way I leech off of my parents. Single point of failure: family.

Citi equity analyst Jason Bazinet said that password sharing costs U.S. streaming companies $25 billion annually in lost revenue, and Netflix owns about 25% of that loss.

Perhaps the final example doesn't seem as critical as the first two, but it's not your Netflix access at stake.

Single points of failure are fascinating to me because it gets easy to be complacent about dealing with these vulnerabilities as their value increases and no catastrophes arise. I hope to use this space to keep reacting to, and perhaps even being proactive about, the technical and operational single point of failure stories that I find.


Quick hits:

*As an addendum to my story, Nature Magazine published a study that shows that “people are more likely to consider solutions that add features than solutions that remove them, even when removing features is more efficient”.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.11: Several Updates

A new version 0.3.11 of Rblpapi is now arriving at CRAN. It comes two years after the release of version 0.3.10 and brings a few updates and extensions.

Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the eleventh release since the package first appeared on CRAN in 2016. Changes are detailed below. Special thanks to James, Maxime and Michael for sending us pull requests.

Changes in Rblpapi version 0.3.11 (2021-04-20)

  • Support blpAutoAuthenticate and B-PIPE access, refactor and generalise authentication (James Bell in #285)

  • Deprecate excludeterm (John in #306)

  • Correct example in README.md (Maxime Legrand in #314)

  • Correct bds man page (and code) (Michael Kerber, and John, in #320)

  • Add GitHub Actions continuous integration (Dirk in #323)

  • Remove bashisms detected by R CMD check (Dirk #324)

  • Switch vignette to minidown (Dirk in #331)

  • Switch unit tests framework to tinytest (Dirk in #332)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityNote to Self: Create Non-Exhaustive List of Competitors

What was the best news you heard so far this month? Mine was learning that KrebsOnSecurity is listed as a restricted competitor by Gartner Inc. [NYSE:IT] — a $4 billion technology goliath whose analyst reports can move markets and shape the IT industry.

Earlier this month, a reader pointed my attention to the following notice from Gartner to clients who are seeking to promote Gartner reports about technology products and services:

What that notice says is that KrebsOnSecurity is somehow on Gartner’s “non exhaustive list of competitors,” i.e., online venues where technology companies are not allowed to promote Gartner reports about their products and services.

The bulk of Gartner’s revenue comes from subscription-based IT market research. As the largest organization dedicated to the analysis of software, Gartner’s network of analysts are well connected to the technology and software industries. Some have argued that Gartner is a kind of private social network, in that a significant portion of Gartner’s competitive position is based on its interaction with an extensive network of software vendors and buyers.

Either way, the company regularly serves as a virtual kingmaker with their trademark “Magic Quadrant” designations, which rate technology vendors and industries “based on proprietary qualitative data analysis methods to demonstrate market trends, such as direction, maturity and participants.”

The two main subjective criteria upon which Gartner bases those rankings are “the ability to execute” and “completeness of vision.” They also break companies out into categories such as “challengers,” “leaders,” “visionaries” and “niche players.”

Gartner’s 2020 “Magic Quadrant” for companies that provide “contact center as a service” offerings.

So when Gartner issues a public report forecasting that worldwide semiconductor revenue will fall, or that worldwide public cloud revenue will grow, those reports very often move markets.

Being listed by Gartner as a competitor has had no discernable financial impact on KrebsOnSecurity, or on its reporting. But I find this designation both flattering and remarkable given that this site seldom promotes technological solutions.

Nor have I ever offered paid consulting or custom market research (although I did give a paid keynote speech at Gartner’s 2015 conference in Orlando, which is still by far the largest crowd I’ve ever addressed).

Rather, KrebsOnSecurity has sought to spread cybersecurity awareness primarily by highlighting the “who” of cybercrime — stories told from the perspectives of both attackers and victims. What’s more, my research and content is available to everyone at the same time, and for free.

I rarely do market predictions (or prognostications of any kind), but in deference to Gartner allow me to posit a scenario in which major analyst firms start to become a less exclusive and perhaps less relevant voice as both an influencer and social network.

For years I have tried to corrupt more of my journalist colleagues into going it alone, noting that solo blogs and newsletters can not only provide a hefty boost over newsroom income, but they also can produce journalism that is just as timely, relevant and impactful.

Those enticements have mostly fallen on deaf ears. Recently, however, an increasing number of journalists from major publications have struck out on their own, some in reportorial roles, others as professional researchers and analysts in their own right.

If Gartner considers a one-man blogging operation as competition, I wonder what they’ll think of the coming collective output from an entire industry of newly emancipated reporters seeking more remuneration and freedom offered by independent publishing platforms like Substack, Patreon and Medium.

Oh, I doubt any group of independent journalists would seek to promulgate their own Non-Exclusive List of Competitors at Whom Thou Shalt Not Publish. But why should they? One’s ability to execute does not impair another’s completeness of vision, nor vice versa. According to Gartner, it takes all kinds, including visionaries, niche players, leaders and challengers.

Cryptogram When AIs Start Hacking

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than us. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated­ — and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where there were no sensors telling it it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
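
As a toy illustration of how an underspecified objective gets exploited (a sketch of mine, not from the essay; the "walk"/"lunge" environment is invented), consider a brute-force search standing in for an optimizing AI:

import itertools

# Intended task: walk to a finish line at x == 5 and stop there.
# Reward actually specified: the final x coordinate -- nothing says "stop at 5".
def final_x(actions):
    x = 0
    for a in actions:
        if a == "walk":
            x += 1        # the move the designer had in mind
        elif a == "lunge":
            x += 4        # an oversized move the designer forgot to forbid
    return x

# Exhaustive search over three-step plans, standing in for an optimizing AI.
best = max(itertools.product(["walk", "lunge"], repeat=3), key=final_x)
print(best, final_x(best))  # ('lunge', 'lunge', 'lunge') 12: maximal reward, unintended behaviour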

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI — human engineers programmed a regular computer to cheat — but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals — known in AI as objective functions — need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback — did you win or lose? — are all precisely specified. And there’s no context to know outside of those things that would muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.

Where this gets interesting are systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more — and more important — decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens — let alone thousands — of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when the computers start doing the part of humans. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on Wired.com

Worse Than FailureCodeSOD: Universal Problems

Universally Unique Identifiers are a very practical solution to unique IDs. With 2^122 (roughly 5×10^36) possible random values, the odds of having a collision are, well, astronomical. They're fast enough to generate, random enough to be unique, and there are so many of them that, well, they may not be universally unique through all time, but they're certainly unique enough.

Right?

Krysk's predecessor isn't so confident.

key = uuid4()
if(key in self.unloadQueue):
    # it probably couldn't possibly collide twice right?
    # right guys? :D
    key = uuid4()
self.unloadQueue[key] = unloaded

The comments explain the code, but leave me with so many more questions. Did they actually have a collision in the past? Exactly how many entries are they putting in this unloadQueue? The plausible explanation is that the developer responsible was being overly cautious. But… were they?
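
For what it's worth, if collisions really were a concern, the retry belongs in a loop rather than a single second attempt. A minimal sketch (the unique_key and unload_queue names here are just for illustration, not the original code):

import uuid

def unique_key(existing):
    key = uuid.uuid4()
    while key in existing:   # with 122 random bits this loop will essentially never iterate
        key = uuid.uuid4()
    return key

unload_queue = {}
unload_queue[unique_key(unload_queue)] = "unloaded"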

Krysk writes: "Some code in our production server software. Comments like these are the stuff of nightmares for maintenance programmers."

I don't know about nightmares, but I might lose some sleep puzzling over this.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram Biden Administration Imposes Sanctions on Russia for SolarWinds

On April 15, the Biden administration both formally attributed the SolarWinds espionage campaign to the Russian Foreign Intelligence Service (SVR), and imposed a series of sanctions designed to punish the country for the attack and deter future attacks.

I will leave it to those with experience in foreign relations to convince me that the response is sufficient to deter future operations. To me, it feels like too little. The New York Times reports that “the sanctions will be among what President Biden’s aides say are ‘seen and unseen steps in response to the hacking,” which implies that there’s more we don’t know about. Also, that “the new measures are intended to have a noticeable effect on the Russian economy.” Honestly, I don’t know what the US should do. Anything that feels more proportional is also more escalatory. I’m sure that dilemma is part of the Russian calculus in all this.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 03)

This week on my podcast, part three of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

Planet DebianRitesh Raj Sarraf: Catching Up Your Sources

I’ve mostly had the preference of controlling my data rather than depending on someone else. That’s one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued/locked by a single entity. If I had the resources, I’d prefer all digital data to be broken down to its simplest form for storage, like the email format, and to empower the user with their own data.

Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information, which is shared in return for the free service, is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication

So first, we had email and usenet. As I mentioned above, email was designed with fine intentions. Intentions that make it stand even today, independently.

But not everything, back then, was that great either. For example, instant messaging was very closed and centralised then too. Things like: ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs.

Not much has changed in the current day either. We now have domination by: Facebook Messenger, Google (whatever new marketing term they introduce), WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized.

Over all this time, I’m yet to see a product come up with good (business) intentions, to really empower the end user. In this information age, the most invaluable data is user activity. That’s the one piece of data everyone is after. If you decline to share that bit of data in exchange for the free services, mind you, then free services like Facebook, Google, Instagram, WhatsApp, Truecaller and Twitter would not come to you at all. Try it out.

So the reality is that while you may not be valuing the data you offer in exchange correctly, there’s a lot that is reaped from it. But still, I think each user has (and should have) the freedom to opt in for these tech giants and give them their personal bit in return for free services. That is a decent barter deal. And it is a choice that one is free to make.

Retaining my data

I’m fond of keeping an archive folder in my mailbox. A folder that holds significant events in the form of an email usually, if documented. Over the years, I chose to resort to the email format because I felt it was more reliable in the longer term than any other formats.

The next best would be plain text.

In my lifetime, I have learnt a lot from the internet; so it is natural that my preference has been with it. Mailing Lists, IRCs, HOWTOs, Guides, Blog posts; all have helped. And over the years, I’ve come across hundreds of such content that I’d always like to preserve.

Now there are multiple ways to preserving data. Like, for example, big tech giants. In most usual cases, your data for your lifetime, should be fine with a tech giant. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine if they host their data with Microsoft, Google etc; as long as they abide by their policies.

There’s also the catch of alignment. As the user, you have to make sure to align (and transition) with the product offerings of your service provider. Otherwise, what may look constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google Feed service too. Maybe Google Photos will join them in the coming decade, just like Google Picasa in the previous (or current) decade.

History what is

On the topic of retaining information, let's take a small drift. I still admire our ancestors. I don’t know what went on in their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc; but one thing's for sure, they were able to leave behind a fine means of communication. They are all gone, but a large number of those events are evident through the creations that they left. Some of those events have been strong enough that later rulers/invaders have had a tough time trying to wipe them out from history. Remember, history is usually not the truth, but the statement the teller wants to be believed. And the teller is usually the survivor, or the winner you may call it.

But still, the information retention techniques were better.

I haven’t visited, but I admire whosoever built the Kailasa Temple, Ellora, without which we’d be made to believe who knows what by all the invaders and rulers of the region. The majestic standing of the temple is a fine example of the history and the events that have occurred in the past.

Dominance has the power to rewrite history, and unfortunately that’s true and it has done its part. It is just that in a mere human’s defined lifetime, it is not possible to witness the transition from current to history, and say that I was there then and I’m here now, and this is not the reality.

And if not dominance, there’s always the other bit, hearsay. With it, you can always put anything up for dispute. Because there’s no way one can go back in time and produce a fine evidence.

There’s also a part about religion. Religion can be highly sentimental. And religion can be a solid way to get an agenda going. For example, in India - a country which today is constitutionally a secular country - there have been multiple attempts to discard the belief that the thing called the Ramayana ever existed. That the Rama Setu, nicely reworded as Adam’s Bridge by whosoever, is a mere result of science. Now Rama, or Hanumana, or Ravana, or Valmiki aren’t going to come over and prove whether that is true or false. So such subjects serve as a solid base to get an agenda going. And probably we’ve even succeeded in proving and believing that there was never an event like the Ramayana or the Mahabharata. Nor was there ever any empire other than the Moghul or the British Empire.

But yes, whosoever made the Ellora Temple or the many many more of such creations, did a fine job of making a dent for the future, to know of what the history possibly could also be.

Enough of the drift

So, in my opinion, having events documented is important. It’d be nice to have skills documented too, so that they can be passed down the generations, but that’s a debatable topic. But events, I believe, should be documented. And documented in the best possible ways so that their existence is not diminished.

Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests that can make excuses under the pretext of compliance and conformance.

So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today’s age. If I really had the resources, I’d not rely on the digital age at all. This age has no guarantee of retaining and recording information in any reliable manner. Instead, it is just mostly junk, which is manipulable and changeable, conditionally.

Email and RSS

So for my communication, I prefer email over any other means. That doesn’t mean I don’t use the current trends; I do. But this blog is mostly about penning my desires. And the desire is to have communication in email format.

Such is the case that for information useful over the internet, I crave to have it formatted in email for archival.

RSS feeds is my most common mode of keeping track of information I care about. Not all that I care for is available in RSS feeds but hey such is life. And adaptability is okay.

But my preference is still RSS.

So I use RSS feeds through a fine software called feed2imap. A software that fits my bill fairly well.

feed2imap is:

  • An rss feed news aggregator
  • Pulls and extracts news feeds in the form of an email
  • Can push the converted email over pop/imap
  • Can convert all image content to email mime attachment

In a gist, it makes the online content available to me offline in the most admired email format
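
The underlying idea is simple enough to sketch in a few lines. This is not feed2imap's code, just a minimal illustration of the concept using the feedparser library and the standard email module (the feed URL is made up):

import feedparser                      # third-party RSS/Atom parser
from email.message import EmailMessage

# Turn each feed entry into an email message that any MUA can archive and search.
feed = feedparser.parse("https://example.org/blog/feed.xml")   # hypothetical feed URL
for entry in feed.entries:
    msg = EmailMessage()
    msg["Subject"] = entry.get("title", "(no title)")
    msg["From"] = "rss@localhost"
    msg["To"] = "me@localhost"
    msg.set_content(entry.get("summary", "") + "\n\n" + entry.get("link", ""))
    print(msg)                         # a real tool would append this to an IMAP folder instead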

In my mailbox, in today's day, my preferred email client is Evolution. It does a good job of dealing with such emails (rss feed items). An image example of accessing the rss feed item through it is below

The good part is that my actual data is always independent of such MUAs. Tomorrow, as technology - trends - economics evolve, something new would come as a replacement but my data would still be mine.

Trends have shown that data mobility is a commodity expectation now. As such, I wanted to have something fill that gap for me, so that I could access my information - which I’ve preferred to keep in one format - easily in today’s trendy options.

I tried multiple options on my current preferred platform of choice for mobiles, i.e. Android. Finally I came across Aqua Mail, which fits in most of my requirements.

Aqua Mail does

  • Connect to my laptop over imap
  • Can sync the requested folders
  • Can sync requested data for offline accessibility
  • Can present the synchronized data in a quite flexible and customizable manner, to match my taste
  • Has a very extensible User Interface, allowing me to customize it to my taste

Pictures can do a better job of describing my English words.

All of this done with no dependence on network connectivity, post the sync. And all information is stored is best possible simple format.

Worse Than FailureCodeSOD: Maximum Max

Imagine you were browsing a C++ codebase and found a signature in a header file like this:

int max (int a, int b, int c, int d);

Now, what do you think this function does? Would your theories change if I told you that this was just dropped in the header for an otherwise unrelated class file that doesn't actually use the max function?

Let's look at the implementation, supplied by Mariette.

int max (int a, int b, int c, int d) {
    if (c == d) {
        // Do nothing..
    }
    if (a >= b) {
        return a;
    } else {
        return b;
    }
}

Now, I have a bit of a reputation of being a ternary hater, but I hate bad ternaries. Every time I write a max function, I write it with a ternary. In that case, it's way more readable, and so while I shouldn't fault the closing if statement in this function, it annoys me. But it's not the WTF anyway.

This max function takes four parameters, but only actually uses two of them. The //Do nothing.. comment is in the code, and that first if statement is there specifically because if it weren't, the compiler would throw warnings about unused parameters.

Those warnings are there for a reason. I suspect someone saw the warning, and contemplated fixing the function, but after seeing the wall of compiler errors generated by changing the function signature, chose this instead. Or maybe they even went so far as to change the behavior, to make it find the max of all four, only to discover that tests failed because there were methods which depended on it only checking the first two parameters.

I'm joking. I assume there weren't any tests. But it did probably crash when someone changed the behavior. Fortunately, no one had used the method expecting it to use all four parameters. Yet.

Mariette confirmed that attempts to fix the function broke many things in the application, so she did the only thing she could do: moved the function into the appropriate implementation files and surrounded it with comments describing its unusual behavior.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianIan Jackson: Otter - a game server for arbitrary board games

One of the things that I found most vexing about lockdown was that I was unable to play some of my favourite board games. There are online systems for many games, but not all. And online systems cannot support games like Mao where the players make up the rules as we go along.

I had an idea for how to solve this problem, and set about implementing it. The result is Otter (the Online Table Top Environment Renderer).

We have played a number of fun games of Penultima with it, and have recently branched out into Mao. The Otter is now ready to be released!

More about Otter

(cribbed shamelessly from the README)

Otter, the Online Table Top Environment Renderer, is an online game system.

But it is not like most online game systems. It does not know (nor does it need to know) the rules of the game you are playing. Instead, it lets you and your friends play with common tabletop/boardgame elements such as hands of cards, boards, and so on.

So it’s something like a “tabletop simulator” (but it does not have any 3D, or a physics engine, or anything like that).

This means that with Otter:

  • Supporting a new game, that Otter doesn’t know about yet, would usually not involve writing or modifying any computer programs.

  • If Otter already has the necessary game elements (cards, say) all you need to do is write a spec file saying what should be on the table at the start of the game. For example, most Whist variants that start with a standard pack of 52 cards are already playable.

  • You can play games where the rules change as the game goes along, or are made up by the players, or are too complicated to write as a computer program.

  • House rules are no problem, since the computer isn’t enforcing the rules - you and your friends are.

  • Everyone can interact with different items on the game table, at any time. (Otter doesn’t know about your game’s turn-taking, so doesn’t know whose turn it might be.)

Installation and usage

Otter is fully functional, but the installation and account management arrangements are rather unsophisticated and un-webby. And there is not currently any publicly available instance you can use to try it out.

Users on chiark will find an instance there.

Other people who are interested in hosting games (of Penultima or Mao, or other games we might support) will have to find a Unix host or VM to install Otter on, and will probably want help from a Unix sysadmin.

Otter is distributed via git, and is available on Salsa, Debian's gitlab instance.

There is documentation online.

Future plans

I have a number of ideas for improvement, which go off in many different directions.

Quite high up on my priority list is making it possible for players to upload and share game materials (cards, boards, pieces, and so on), rather than just using the ones which are bundled with Otter itself (or dumping files ad-hoc on the server). This will make it much easier to play new games. One big reason I wrote Otter is that I wanted to liberate boardgame players from the need to implement their game rules as computer code.

The game management and account management is currently done with a command line tool. It would be lovely to improve that, but making a fully-featured management web ui would be a lot of work.

Screenshots!

(Click for the full size images.)




,

Planet DebianRussell Coker: IMA/EVM Certificates

I’ve been experimenting with IMA/EVM. Here is the Sourceforge page for the upstream project [1]. The aim of that project is to check hashes and maybe public key signatures on files before performing read/exec type operations on them. It can be used as the next logical step from booting a signed kernel with TPM. I am a long way from getting that sort of thing going, just getting the kernel to boot and load keys is my current challenge and isn’t helped due to the lack of documentation on error messages. This blog post started as a way of documenting the error messages so future people who google errors can get a useful result. I am not trying to document everything, just help people get through some of the first problems.

I am using Debian for my work, but some of this will apply to other distributions (particularly the kernel error messages). The Debian distribution has the ima-evm-utils package but no other support for IMA/EVM. To get this going in Debian you need to compile your own kernel with IMA support and then boot it with kernel command-line options to enable IMA; in recent kernels that includes “lsm=integrity” as a mandatory requirement to prevent a kernel Oops after mounting the initrd (there is already a patch to fix this).

If you want to just use IMA (not get involved in development) then a good option would be to use RHEL (here is their documentation) [2] or SUSE (here is their documentation) [3]. Note that both RHEL and SUSE use older kernels so their documentation WILL lead you astray if you try and use the latest kernel.org kernel.

The Debian initrd

I created a script named /etc/initramfs-tools/hooks/keys with the following contents to copy the key(s) from /etc/keys to the initrd where the kernel will load it/them. The kernel configuration determines whether x509_evm.der or x509_ima.der (or maybe both) is loaded. I haven’t yet worked out which key is needed when.

#!/bin/bash

mkdir -p ${DESTDIR}/etc/keys
cp /etc/keys/* ${DESTDIR}/etc/keys

Making the Keys

#!/bin/sh

GENKEY=ima.genkey

cat << __EOF__ >$GENKEY
[ req ]
default_bits = 1024
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = v3_usr

[ req_distinguished_name ]
O = `hostname`
CN = `whoami` signing key
emailAddress = `whoami`@`hostname`

[ v3_usr ]
basicConstraints=critical,CA:FALSE
#basicConstraints=CA:FALSE
keyUsage=digitalSignature
#keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
#authorityKeyIdentifier=keyid,issuer
__EOF__

openssl req -new -nodes -utf8 -sha1 -days 365 -batch -config $GENKEY \
                -out csr_ima.pem -keyout privkey_ima.pem
openssl x509 -req -in csr_ima.pem -days 365 -extfile $GENKEY -extensions v3_usr \
                -CA ~/kern/linux-5.11.14/certs/signing_key.pem -CAkey ~/kern/linux-5.11.14/certs/signing_key.pem -CAcreateserial \
                -outform DER -out x509_evm.der

To get the below result I used the above script to generate a key. It is essentially the /usr/share/doc/ima-evm-utils/examples/ima-genkey.sh script from the ima-evm-utils package, changed to use the key generated during kernel compilation to sign the certificate. You can copy the files in the certs directory from one kernel build tree to another to have the same certificate and use the same initrd configuration. After generating the key I copied x509_evm.der to /etc/keys on the target host and built the initrd before rebooting.
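Concretely that last step looks something like the following sketch (adjust the paths and kernel version to whatever you are booting):

# copy the signed certificate to where the initramfs hook expects it,
# then rebuild the initrd for the running kernel
cp x509_evm.der /etc/keys/
update-initramfs -u -k "$(uname -r)"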

[    1.050321] integrity: Loading X.509 certificate: /etc/keys/x509_evm.der
[    1.092560] integrity: Loaded X.509 cert 'xev: etbe signing key: 99d4fa9051e2c178017180df5fcc6e5dbd8bb606'

Errors

Here are some of the kernel error messages I received along with my best interpretation of what they mean.

[ 1.062031] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[ 1.063689] integrity: Problem loading X.509 certificate -74

Error -74 means -EBADMSG, which means there’s something wrong with the certificate file. I have got that from /etc/keys/x509_ima.der not being in DER format, and I have got it from a DER file that contained a key pair that wasn’t signed.
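A quick way to rule out the first cause is to ask openssl to parse the file as DER; if openssl can’t, the kernel certainly won’t be able to:

# prints the certificate details if the file really is a DER certificate;
# a parse error here corresponds to the kernel's -EBADMSG above
openssl x509 -inform DER -in /etc/keys/x509_ima.der -noout -text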

[    1.049170] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[    1.093092] integrity: Problem loading X.509 certificate -126

Error -126 means -ENOKEY, so the key wasn’t in the file or the key wasn’t signed by the kernel signing key.

[    1.074759] integrity: Unable to open file: /etc/keys/x509_evm.der (-2)

Error -2 means -ENOENT, so the file wasn’t found on the initrd. Note that it does NOT look at the root filesystem.

References

Planet DebianJunichi Uekawa: Rewrote my pomodoro technique timer.

Rewrote my pomodoro technique timer. I've been iterating on how I operate and focus. Too much focus exhausts me. I'm trying out Focusmate's method of 50 minutes of focus time and 10 minutes of break. Here is a web app that tries to start the timer at the hour and start the break during the last 10 minutes.
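The app itself is a web page, but the scheduling rule is simple enough to sketch in a few lines of shell (my own illustration of the 50/10 split, not the author's code):

#!/bin/bash
# Minutes 0-49 of each hour are focus time, minutes 50-59 are the break.
minute=$((10#$(date +%M)))   # force base 10 so e.g. "08" isn't read as octal
if (( minute >= 50 )); then
    echo "Break: $((60 - minute)) minute(s) until the next focus block"
else
    echo "Focus: $((50 - minute)) minute(s) until the break"
fi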

Planet DebianBits from Debian: Debian Project Leader election 2021, Jonathan Carter re-elected.

The voting period and tally of votes for the Debian Project Leader election has just concluded, and the winner is Jonathan Carter!

455 of 1,018 Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2021 page.

Many thanks to Jonathan Carter and Sruthi Chandran for their campaigns, and to our Developers for voting.

,

Planet DebianDirk Eddelbuettel: RcppAPT 0.0.7: Micro Update

A new version of the RcppAPT package interfacing from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN yesterday. This comes a good year after the previous maintenance update for release 0.0.6.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

The maintenance release responds to a call for updates from CRAN asking that all implicit dependencies on the markdown and rmarkdown packages be made explicit via a Suggests: entry. Two of the many packages I maintain were part of the (large!) list in the CRAN email, and this is one of them. While making the update, we refreshed two other packaging details.

Changes in version 0.0.7 (2021-04-16)

  • Add rmarkdown to Suggests: as an implicit conditional dependency

  • Switch vignette to minidown and its water framework, add minidown to Suggests as well

  • Update two URLs in the README.md file

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianChris Lamb: Tour d'Orwell: Wallington

Previously in George Orwell travel posts: Sutton Courtenay, Marrakesh, Hampstead, Paris, Southwold & The River Orwell.

§

Wallington is a small village in Hertfordshire, approximately fifty miles north of London and twenty-five miles from the outskirts of Cambridge. George Orwell lived at No. 2 Kits Lane, better known as 'The Stores', on a mostly-permanent basis from 1936 to 1940, but he would continue to journey up from London on occasional weekends until 1947.

His first reference to The Stores can be found in early 1936, where Orwell wrote from Lancashire during research for The Road to Wigan Pier to lament that he would very much like "to do some work again — impossible, of course, in the [current] surroundings":

I am arranging to take a cottage at Wallington near Baldock in Herts, rather a pig in a poke because I have never seen it, but I am trusting the friends who have chosen it for me, and it is very cheap, only 7s. 6d. a week [£20 in 2021].

For those not steeped in English colloquialisms, "a pig in a poke" is an item bought without seeing it in advance. In fact, one general insight that may be drawn from reading Orwell's extant correspondence is just how much he relied on a close network of friends, belying the lazy and hagiographical picture of an independent and solitary figure. (Still, even Orwell cultivated this image at times, such as in a patently autobiographical essay he wrote in 1946. But note the off-hand reference to varicose veins here, for they would shortly re-appear as a symbol of Winston's repressed humanity in Nineteen Eighty-Four.)

Nevertheless, the porcine reference in Orwell's idiom is particularly apt, given that he wrote the bulk of Animal Farm at The Stores — his 1945 novella, of course, portraying a revolution betrayed by allegorical pigs. Orwell even drew inspiration for his 'fairy story' from Wallington itself, principally by naming the novel's farm 'Manor Farm', just as it is in the village. But the allusion to the purchase of goods is just as appropriate, as Orwell returned The Stores to its former status as the village shop, even going so far as to drill peepholes in a door to keep an Orwellian eye on the jars of sweets. (Unfortunately, we cannot complete a tidy circle of references, as whilst it is certainly Napoleon — Animal Farm's substitute for Stalin — who is quoted as describing Britain as "a nation of shopkeepers", it was actually the maraisard Bertrand Barère who first used the phrase).

§

"It isn't what you might call luxurious", he wrote in typical British understatement, but Orwell did warmly emote on his animals. He kept hens in Wallington (perhaps even inspiring the opening line of Animal Farm: "Mr Jones, of the Manor Farm, had locked the hen-houses for the night, but was too drunk to remember to shut the pop-holes.") and a photograph even survives of Orwell feeding his pet goat, Muriel. Orwell's goat was the eponymous inspiration for the white goat in Animal Farm, a decidedly under-analysed character who, to me, serves to represent an intelligentsia that is highly perceptive of the declining political climate but, seemingly content with merely observing it, does not offer any meaningful opposition. Muriel's aesthetic of resistance, particularly in her reporting on the changes made to the Seven Commandments of the farm, thus rehearses the well-meaning (yet functionally ineffective) affinity for 'fact checking' which proliferates today. But I digress.

There is a tendency to "read Orwell backwards", so I must point out that Orwell wrote several other works whilst at The Stores as well. This includes his Homage to Catalonia, his aforementioned The Road to Wigan Pier, not to mention countless indispensable reviews and essays as well. Indeed, another result of focusing exclusively on Orwell's last works is that we only encounter his ideas in their highly-refined forms, whilst in reality, it often took many years for concepts to fully mature — we first see, for instance, the now-infamous idea of "2 + 2 = 5" in an essay written in 1939.

This is important to understand for two reasons. Although the ostentatiously austere Barnhill might have housed the physical labour of its writing, it is refreshing to reflect that the philosophical heavy-lifting of Nineteen Eighty-Four may have been performed in a relatively undistinguished North Hertfordshire village. But perhaps more importantly, it emphasises that Orwell was just a man, and that any of us is fully capable of equally significant insight, with — to quote Christopher Hitchens — "little except a battered typewriter and a certain resilience."

§

The red commemorative plaque not only limits Orwell's tenure to the time he was permanently in the village, it omits all reference to his first wife, Eileen O'Shaughnessy, whom he married in the village church in 1936.
Wallington's Manor Farm, the inspiration for the farm in Animal Farm. The lower sign enjoins the public to inform the police "if you see anyone on the [church] roof acting suspiciously". Non-UK-residents may be surprised to learn about the systematic theft of lead.

Planet DebianSteve Kemp: Having fun with CP/M on a Z80 single-board computer.

In the past, I've talked about building a Z80-based computer. I made some progress towards that goal, in the sense that I took the initial (trivial) steps towards making something:

  • I built a clock-circuit.
  • I wired up a Z80 processor to the clock.
  • I got the thing running an endless stream of NOP instructions.
    • No RAM/ROM connected, tying all the bus-lines low, meaning every attempted memory-read returned 0x00 which is the Z80 NOP instruction.

But then I stalled, repeatedly, at designing an interface to RAM and ROM, so that it could actually do something useful. Over the lockdown I've been in two minds about getting sucked back down the rabbit-hole, so I compromised. I did a bit of searching on tindie, and similar places, and figured I'd buy a Z80-based single board computer. My requirements were minimal:

  • It must run CP/M.
  • The source-code to "everything" must be available.
  • I want it to run standalone, and connect to a host via a serial-port.

With those goals there were a bunch of boards to choose from, rc2014 is the standard choice - a well engineered system which uses a common backplane and lets you build mini-boards to add functionality. So first you build the CPU-card, then the RAM card, then the flash-disk card, etc. Over-engineered in one sense, extensible in another. (There are some single-board variants to cut down on soldering overhead, at a cost of less flexibility.)

After a while I came across https://8bitstack.co.uk/, which describes a simple board called the Z80 playground.

The advantage of this design is that it loads code from a USB stick, making it easy to transfer files to/from it, without the need for a compact flash card, or similar. The downside is that the system has only 64K RAM, meaning it cannot run CP/M 3, only 2.2. (CP/M 3.x requires more RAM, and a banking/paging system setup to swap between pages.)

When the system boots it loads code from an EEPROM, which then fetches the CP/M files from the USB-stick, copies them into RAM and executes them. The memory map can be split so you either have ROM & RAM, or you have just RAM (after the boot the ROM will be switched off). To change the initial stuff you need to reprogram the EEPROM; after that it's just a matter of adding binaries to the stick or transferring them over the serial port.

In only a couple of hours I got the basic stuff working as well as I needed:

  • A z80-assembler on my Linux desktop to build simple binaries.
  • An installation of Turbo Pascal 3.00A on the system itself.
  • An installation of FORTH on the system itself.
    • Which is nice.
  • A couple of simple games compiled from Pascal
    • Snake, Tetris, etc.
  • The Zork trilogy installed, along with Hitchhikers guide.

I had some fun with a CP/M emulator to get my hand back in things before the board arrived, and using that I tested my first "real" assembly language program (cls to clear the screen), as well as got the hang of using the wordstar keyboard shortcuts as used within the turbo pascal environment.

I have some plans for development:

  • Add command-line history (page-up/page-down) for the CP/M command-processor.
  • Add paging to TYPE, and allow terminating with Q.

Nothing major, but fun changes that won't be too difficult to implement.

Since CP/M 2.x has no concept of sub-directories you end up using drives for everything, so I implemented a "search-path": when you type "FOO" it will attempt to run "A:FOO.COM" if there is no matching file on the current drive. That's a nicer user-experience all round.

I also wrote some Z80-assembly code to search all drives for an executable, if it isn't found on the current drive and isn't already qualified with a drive letter (remember, CP/M doesn't have a concept of sub-directories). That's actually pretty useful:

  B>LOCATE H*.COM
  P:HELLO   COM
  P:HELLO2  COM
  G:HITCH   COM
  E:HYPHEN  COM

I've also written some other trivial assembly language tools, which was surprisingly relaxing. Especially once I got back into the zen mode of optimizing for size.

I forked the upstream repository, mostly to tidy up the contents, rather than because I want to go into my own direction. I'll keep the contents in sync, because there's no point splitting a community even further - I guess there are fewer than 100 of these boards in the wild, probably far far fewer!

,

Cryptogram Details on the Unlocking of the San Bernardino Terrorist’s iPhone

The Washington Post has published a long story on the unlocking of the San Bernardino Terrorist’s iPhone 5C in 2016. We all thought it was an Israeli company called Cellebrite. It was actually an Australian company called Azimuth Security.

Azimuth specialized in finding significant vulnerabilities. Dowd, a former IBM X-Force researcher whom one peer called “the Mozart of exploit design,” had found one in open-source code from Mozilla that Apple used to permit accessories to be plugged into an iPhone’s lightning port, according to the person.

[…]

Using the flaw Dowd found, Wang, based in Portland, Ore., created an exploit that enabled initial access to the phone — a foot in the door. Then he hitched it to another exploit that permitted greater maneuverability, according to the people. And then he linked that to a final exploit that another Azimuth researcher had already created for iPhones, giving him full control over the phone’s core processor — the brains of the device. From there, he wrote software that rapidly tried all combinations of the passcode, bypassing other features, such as the one that erased data after 10 incorrect tries.

Apple is suing various companies over this sort of thing. The article goes into the details.

Krebs on SecurityDid Someone at the Commerce Dept. Find a SolarWinds Backdoor in Aug. 2020?

On Aug. 13, 2020, someone uploaded a suspected malicious file to VirusTotal, a service that scans submitted files against more than five dozen antivirus and security products. Last month, Microsoft and FireEye identified that file as a newly-discovered fourth malware backdoor used in the sprawling SolarWinds supply chain hack. An analysis of the malicious file and other submissions by the same VirusTotal user suggest the account that initially flagged the backdoor as suspicious belongs to IT personnel at the National Telecommunications and Information Administration (NTIA), a division of the U.S. Commerce Department that handles telecommunications and Internet policy.

Both Microsoft and FireEye published blog posts on Mar. 4 concerning a new backdoor found on high-value targets that were compromised by the SolarWinds attackers. FireEye refers to the backdoor as “Sunshuttle,” whereas Microsoft calls it “GoldMax.” FireEye says the Sunshuttle backdoor was named “Lexicon.exe,” and had the unique file signatures or “hashes” of “9466c865f7498a35e4e1a8f48ef1dffd” (MD5) and “b9a2c986b6ad1eb4cfb0303baede906936fe96396f3cf490b0984a4798d741d8” (SHA-256).

“In August 2020, a U.S.-based entity uploaded a new backdoor that we have named SUNSHUTTLE to a public malware repository,” FireEye wrote.

The “Sunshuttle” or “GoldMax” backdoor, as identified by FireEye and Microsoft, respectively. Image: VirusTotal.com.

A search in VirusTotal’s malware repository shows that on Aug. 13, 2020 someone uploaded a file with that same name and file hashes. It’s often not hard to look through VirusTotal and find files submitted by specific users over time, and several of those submitted by the same user over nearly two years include messages and files sent to email addresses for people currently working in NTIA’s information technology department.
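Anyone with a VirusTotal API key can repeat that kind of lookup themselves. A rough sketch, assuming the v3 "files" endpoint and an API key exported as VT_API_KEY (both details are my illustration, not something from the article):

# look up a sample by hash (MD5, SHA-1 or SHA-256) via VirusTotal's v3 API;
# the API key is assumed to be in the VT_API_KEY environment variable
curl --silent --header "x-apikey: $VT_API_KEY" \
  "https://www.virustotal.com/api/v3/files/9466c865f7498a35e4e1a8f48ef1dffd"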

An apparently internal email that got uploaded to VirusTotal in Feb. 2020 by the same account that uploaded the Sunshuttle backdoor malware to VirusTotal in August 2020.

The NTIA did not respond to requests for comment. But in December 2020, The Wall Street Journal reported the NTIA was among multiple federal agencies that had email and files plundered by the SolarWinds attackers. “The hackers broke into about three dozen email accounts since June at the NTIA, including accounts belonging to the agency’s senior leadership, according to a U.S. official familiar with the matter,” The Journal wrote.

It’s unclear what, if anything, NTIA’s IT staff did in response to scanning the backdoor file back in Aug. 2020. But the world would not find out about the SolarWinds debacle until early December 2020, when FireEye first disclosed the extent of its own compromise from the SolarWinds malware and published details about the tools and techniques used by the perpetrators.

The SolarWinds attack involved malicious code being surreptitiously inserted into updates shipped by SolarWinds for some 18,000 users of its Orion network management software. Beginning in March 2020, the attackers then used the access afforded by the compromised SolarWinds software to push additional backdoors and tools to targets when they wanted deeper access to email and network communications.

U.S. intelligence agencies have attributed the SolarWinds hack to an arm of the Russian state intelligence known as the SVR, which also was determined to have been involved in the 2016 hacking of the Democratic National Committee. On Thursday, the White House issued long-expected sanctions against Russia in response to the SolarWinds attack and other malicious cyber activity, leveling economic sanctions against 32 entities and individuals for disinformation efforts and for carrying out the Russian government’s interference in the 2020 presidential election.

The U.S. Treasury Department (which also was hit with second-stage malware that let the SolarWinds attackers read Treasury email communications) has posted a full list of those targeted, including six Russian companies for providing support to the cyber activities of the Russian intelligence service.

Also on Thursday, the FBI, National Security Agency (NSA), and the Cybersecurity and Infrastructure Security Agency (CISA) issued a joint advisory on several vulnerabilities in widely-used software products that the same Russian intelligence units have been attacking to further their exploits in the SolarWinds hack. Among those is CVE-2020-4006, a security hole in VMWare Workspace One Access that VMware patched in December 2020 after hearing about it from the NSA.

On December 18, VMWare saw its stock price dip 5.5 percent after KrebsOnSecurity published a report linking the flaw to NSA reports about the Russian cyberspies behind the SolarWinds attack. At the time, VMWare was saying it had received “no notification or indication that CVE-2020-4006 was used in conjunction with the SolarWinds supply chain compromise.” As a result, a number of readers responded that making this connection was tenuous, circumstantial and speculative.

But the joint advisory makes clear the VMWare flaw was in fact used by SolarWinds attackers to further their exploits.

“Recent Russian SVR activities include compromising SolarWinds Orion software updates, targeting COVID-19 research facilities through deploying WellMess malware, and leveraging a VMware vulnerability that was a zero-day at the time for follow-on Security Assertion Markup Language (SAML) authentication abuse,” the NSA’s advisory (PDF) reads. “SVR cyber actors also used authentication abuse tactics following SolarWinds-based breaches.”

Officials within the Biden administration have told media outlets that a portion of the United States’ response to the SolarWinds hack would not be discussed publicly. But some security experts are concerned that Russian intelligence officials may still have access to networks that ran the backdoored SolarWinds software, and that the Russians could use that access to affect a destructive or disruptive network response of their own, The New York Times reports.

“Inside American intelligence agencies, there have been warnings that the SolarWinds attack — which enabled the SVR to place ‘back doors’ in the computer networks — could give Russia a pathway for malicious activity against government agencies and corporations,” The Times observed.

Worse Than FailureError'd: Days of Future Passed

After reading through so many of your submissions these last few weeks, I'm beginning to notice certain patterns emerging. One of these patterns is that despite the fact that dates are literally as old as time, people seem pathologically prone to bungling them. Surely our readers are already familiar with the notable "Falsehoods Programmers Believe" series of blog posts, but if you happen somehow to have been living under an Internet rock (or a cabbage leaf) for the last few decades, you might start your time travails at Infinite Undo. The examples here are not the most egregious ever (there are better coming later or sooner) but they are today's:

Famished Dug S. peckishly pronounces "It's about time!"

 

Far luckier Zachary Palmer appears to have found the perfect solution to poor Dug's delayed dinner: "It took the shipping company a little bit to start moving my package, but they made up for it by shipping it faster than the speed of light," says he.

 

Patient Philip awaits his {ship,prince,processor}: " B&H hitting us with hard truth on when the new line of AMD CPUs will really be available."

 

While an apparent contemporary of the latest royal Eric R. creakily complains " This website for tracking my continuing education hours should be smart enough not to let me enter a date in the year 21 AD"

 

But as for His Lateness Himself, royal servant Steve A. has uncovered a scoop fit for Q:

 


Cryptogram NSA Discloses Vulnerabilities in Microsoft Exchange

Amongst the 100+ vulnerabilities patched in this month’s Patch Tuesday, there are four in Microsoft Exchange that were disclosed by the NSA.

Planet DebianDirk Eddelbuettel: Announcing ‘Introductions to Emacs Speaks Statistics’

A new website containing introductory videos and slide decks is now available for your perusal at ess-intro.github.io. It provides a series of introductions to the excellent Emacs Speaks Statistics (ESS) mode for the Emacs editor.

This effort started following my little tips, tricks, tools and toys series of short videos and slide decks “for the command-line and R, broadly-speaking”, which I had mentioned to friends curious about Emacs, and on the ess-help mailing list. And lo and behold, over the fall and winter sixteen of us came together in one GitHub org and are now proud to present the initial batch of videos about first steps, installing, using ESS with Spacemacs, customizing, and org-mode with ESS. More may hopefully follow; the group is open and you too can join: see the main repo and its wiki.

This is in fact the initial announcement post, so it is flattering that we have already received over 350 views, four comments and twenty-one likes.

We hope it proves to be a useful starting point for some of you. The Emacs editor is quite uniquely powerful, and coupled with ESS makes for a rather nice environment for programming with data, or analysing, visualising, exploring, … data. But we are not zealots: there are many editors and environments under the sun, and most people are perfectly happy with their choice, which is wonderful. We also like ours, and sometimes someone asks ‘tell me more’ or ‘how do I start’. We hope this series satisfies this initial curiosity and takes it from here.

With that, my thanks to Frédéric, Alex, Tyler and Greg for the initial batch, and for everybody else in the org who chipped in with comments and suggestion. We hope it grows from here, so happy Emacsing with R from us!

,

Planet DebianIan Jackson: Dreamwidth blocking many RSS readers and aggregators

There is a serious problem with Dreamwidth, which is impeding access for many RSS reader tools.

This started at around 0500 UTC on Wednesday morning, according to my own RSS reader cron job. A friend found #43443 in the DW ticket tracker, where a user of a minority web browser found they were blocked.

Local tests demonstrated that Dreamwidth had applied blocking by the HTTP User-Agent header, and were rejecting all user-agents not specifically permitted. Today, this rule has been relaxed and unknown user-agents are permitted. But user-agents for general http client libraries are still blocked.

I'm aware of three unresolved tickets about this: #43444 #43445 #43447

We're told there by a volunteer member of Dreamwidth's support staff that this has been done deliberately for "blocking automated traffic". I'm sure the volunteer is just relaying what they've been told by whoever is struggling to deal with what I suppose is probably a spam problem. But it's still rather unsatisfactory.

I have suggested in my own ticket that a good solution might be to apply the new block only to posting and commenting (eg, maybe, by applying it only to HTTP POST requests). If the problem is indeed spam then that ought to be good enough, and would still let RSS readers work properly.

I'm told that this new blocking has been done by "implementing" (actually, configuring or enabling) "some AWS rules for blocking automated traffic". I don't know what facilities AWS provides. This kind of helplessness is of course precisely the kind of thing that the Free Software movement is against and precisely the kind of thing that proprietary services like AWS produce.

I don't know if this blog entry will appear on planet.debian.org and on other people's readers and aggregators. I think it will at least be seen by other Dreamwidth users. I thought I would post here in the hope that other Dreamwidth users might be able to help get this fixed. At the very least other Dreamwidth blog owners need to know that many of their readers may not be seeing their posts at all.

If this problem is not fixed I will have to move my blog. One of the main points of having a blog is publishing it via RSS. RSS readers are of course based on general http client libraries and many if not most RSS readers have not bothered to customise their user-agent. Those are currently blocked.
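For reader and aggregator authors the immediate workaround is to send a descriptive User-Agent instead of the library default. A rough sketch with curl (the feed URL pattern and the User-Agent string are illustrative assumptions on my part, not a recommendation from Dreamwidth):

# fetch a Dreamwidth feed with an identifiable User-Agent rather than the
# http client library's default, which is what currently gets blocked
curl --user-agent "MyFeedReader/1.0 (+https://example.org/my-reader)" \
  "https://someuser.dreamwidth.org/data/rss"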




Worse Than FailureCodeSOD: Constantly Counting

Steven was working on a temp contract for a government contractor, developing extensions to an ERP system. That ERP system was developed by whatever warm bodies happened to be handy, which meant the last "tech lead" was a junior developer who had no supervision, and before that it was a temp who was only budgeted to spend 2 hours a week on that project.

This meant that it was a great deal of spaghetti code, mashed together with a lot of special-case logic, and attempts to have some sort of organization even if that organization made no sense. Which is why, for example, all of the global constants for the application were required to be in a class Constants.

Of course, when you put a big pile of otherwise unrelated things in one place, you get some surprising results. Like this:

foreach (PurchaseOrder po in poList)
{
    if (String.IsNullOrEmpty(po.PoNumber))
    {
        Constants.NEW_COUNT++;
        CreatePoInOtherSystem(po);
    }
}

Yes, every time this system passes a purchase order off to another system for processing, the "constant" NEW_COUNT gets incremented. And no, this wasn't the only variable "constant", because before long, the Constants class became the "pile of static variables" class.


Planet DebianMartin Michlmayr: ledger2beancount 2.6 released

I released version 2.6 of ledger2beancount, a ledger to beancount converter.
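For anyone who has not used the tool before, the basic invocation is a simple filter; a minimal sketch (the file names are just examples, and per-directory settings can live in the .ledger2beancount.yaml file mentioned in the changes below):

# convert a ledger journal to beancount format
ledger2beancount main.ledger > main.beancount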

Here are the changes in 2.6:

  • Round calculated total if needed for price==cost comparison
  • Add narration_tag config variable to set narration from metadata
  • Retain unconsummated payee/payer metadata
  • Ensure UTF-8 output and assume UTF-8 input
  • Document UTF-8 issue on Windows systems
  • Add option to move posting-level tags to the transaction itself
  • Add support for the alias sub-directive of account declarations
  • Add support for the payee sub-directive of account declarations
  • Support configuration file called .ledger2beancount.yaml
  • Fix uninitialised value warning in hledger mode
  • Print warning if account in assertion has sub-accounts
  • Set commodity for commodity-less balance assertion
  • Expand path name of beancount_header config variable
  • Document handling of buckets
  • Document pre- and post-processing examples
  • Add Dockerfile to create Docker image

Thanks to Alexander Baier, Daniele Nicolodi, and GitHub users bratekarate, faaafo and mefromthepast for various bug reports and other input.

Thanks to Dennis Lee for adding a Dockerfile and to Vinod Kurup for fixing a bug.

Thanks to Stefano Zacchiroli for testing.

You can get ledger2beancount from GitHub.

Cryptogram DNI’s Annual Threat Assessment

The office of the Director of National Intelligence released its “Annual Threat Assessment of the U.S. Intelligence Community.” Cybersecurity is covered on pages 20-21. Nothing surprising:

  • Cyber threats from nation states and their surrogates will remain acute.
  • States’ increasing use of cyber operations as a tool of national power, including increasing use by militaries around the world, raises the prospect of more destructive and disruptive cyber activity.
  • Authoritarian and illiberal regimes around the world will increasingly exploit digital tools to surveil their citizens, control free expression, and censor and manipulate information to maintain control over their populations.
  • During the last decade, state sponsored hackers have compromised software and IT service supply chains, helping them conduct operations — espionage, sabotage, and potentially prepositioning for warfighting.

The supply chain line is new; I hope the government is paying attention.

,

Cryptogram The FBI Is Now Securing Networks Without Their Owners’ Permission

In January, we learned about a Chinese espionage campaign that exploited four zero-days in Microsoft Exchange. One of the characteristics of the campaign, in the later days when the Chinese probably realized that the vulnerabilities would soon be fixed, was to install a web shell in compromised networks that would give them subsequent remote access. Even if the vulnerabilities were patched, the shell would remain until the network operators removed it.

Now, months later, many of those shells are still in place. And they’re being used by criminal hackers as well.

On Tuesday, the FBI announced that it successfully received a court order to remove “hundreds” of these web shells from networks in the US.

This is nothing short of extraordinary, and I can think of no real-world parallel. It’s kind of like if a criminal organization infiltrated a door-lock company and surreptitiously added a master passkey feature, and then customers bought and installed those locks. And then if the FBI got a court order to fix all the locks to remove the master passkey capability. And it’s kind of not like that. In any case, it’s not what we normally think of when we think of a warrant. The links above have details, but I would like a legal scholar to weigh in on the implications of this.

Planet DebianRussell Coker: Basics of Linux Kernel Debugging

Firstly a disclaimer, I’m not an expert on this and I’m not trying to instruct anyone who is aiming to become an expert. The aim of this blog post is to help someone who has a single kernel issue they want to debug as part of doing something that’s mostly not kernel coding. I welcome comments about the second step to kernel debugging for the benefit of people who need more than this (which might include me next week). Also suggestions for people who can’t use a kvm/qemu debugger would be good.

Below is a command to run qemu with GDB. It should be run from the Linux kernel source directory. You can add other qemu options for a block device and virtual networking if necessary, but the bug I encountered gave an oops from the initrd so I didn’t need to go further. The “nokaslr” is to avoid address space randomisation, which deliberately makes debugging tasks harder (from a certain perspective debugging a kernel and compromising a kernel are fairly similar). Loading the bzImage is fine; gdb can map that to the different file it looks at later on.

qemu-system-x86_64 -kernel arch/x86/boot/bzImage -initrd ../initrd-$KERN_VER -curses -m 2000 -append "root=/dev/vda ro nokaslr" -gdb tcp::1200

The command to run GDB is “gdb vmlinux”; at the GDB prompt you can run the command “target remote localhost:1200” to connect to the GDB server on port 1200. Note that there is nothing special about port 1200, it was given in an example I saw and is as good as any other port. It is important that you run GDB against the “vmlinux” file in the main directory, not any of the several stripped and packaged files. GDB can’t handle a bzImage file but that’s OK, it ends up much the same in RAM.

When the “target remote” command is processed the kernel will be suspended by the debugger, if you are looking for a bug early in the boot you may need to be quick about this. Using “qemu-system-x86_64” instead of “kvm” slows things down and can help in that regard. The bug I was hunting happened 1.6 seconds after kernel load with KVM and 7.8 seconds after kernel load with qemu. I am not aware of all the implications of the kvm vs qemu decision on debugging. If your bug is a race condition then trying both would be a good strategy.

After the “target remote” command you can debug the kernel just like any other program.

If you put a breakpoint on print_modules() it will catch the kernel printing an Oops, which can be handy.
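Putting that together, a typical session looks something like this (with port 1200 matching the qemu command above):

gdb vmlinux
(gdb) target remote localhost:1200
(gdb) break print_modules
(gdb) continue
# ... the kernel now runs until an Oops reaches print_modules(), then:
(gdb) bt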

Worse Than FailureCodeSOD: The Truth and the Truth

When Andy inherited some C# code from a contracting firm, he gave it a quick skim. He saw a bunch of methods with names like IsAvailable or CanPerform…, but he also saw that it was essentially random as to whether or not these methods returned bool or string.

That didn't seem like a good thing, so he started to take a deeper look, and that's when he found this.

public ActionResult EditGroup(Group group)
{
    string fleetSuccess = string.Empty;
    bool success = false;
    if (action != null)
    {
        fleetSuccess = updateGroup(group);
    }
    else
    {
        fleetSuccess = Boolean.TrueString;
    }
    success = updateExternalGroup(group);
    fleetSuccess += "&&&" + success;
    if (fleetSuccess.ToLower().Equals("true&&&true"))
    {
        GetActivityDataFromService(group, false);
    }
    return Json(fleetSuccess, JsonRequestBehavior.AllowGet);
}

So, updateGroup returns a string containing a boolean (at least, we hope it contains a boolean). updateExternalGroup returns an actual boolean. If both of these things are true, then we want to invoke GetActivityDataFromService.

Clearly, the only way to do this comparison is to force everything into being a string, with a &&& jammed in the middle as a spacer. Uh, for readability, I guess? Maybe? I almost suspect someone thought they were inventing their own "and" operator and didn't want it to conflict with & or &&.

Or maybe, maybe their code was read aloud by Jeff Goldblum. "True, and-and-and true!" It's very clear they didn't think about whether or not they should do this.


Planet DebianDirk Eddelbuettel: RcppArmadillo 0.10.4.0.0 on CRAN: New Upstream ‘Plus’


Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 852 other packages on CRAN.

This new release brings us the just-released Armadillo 10.4.0. Upstream moves at a speed that is a little faster than the cadence CRAN likes. We released RcppArmadillo 0.10.2.2.0 on March 9, and upstream 10.3.0 came out shortly thereafter. We aim to accommodate CRAN with (roughly) monthly (or less frequent) releases, so by the time we were ready 10.4.0 had just come out.

As it turns out, the full testing had a benefit. Among the (currently) 852 CRAN packages using RcppArmadillo, two were failing tests. This is due to a subtle, but important point. Early on we realized that it would be beneficial if the standard R control over random-number creation and seeding affected Armadillo too, which Conrad accommodated kindly with an optional RNG interface—which RcppArmadillo supplies. With recent changes he made, the R side saw normally-distributed draws (via the Armadillo interface) change, which led to the two failures. All hail unit tests. So I mentioned this to Conrad, and with the usual Chicago-Brisbane time difference, late my evening a fix was in my inbox. The CRAN upload was then halted as I had missed that, due to other changes he had made, random draws from a Gamma would now call std::rand(), which CRAN flags. Another email to Brisbane, another late (one-line) fix back, and all was good. We still encountered one package with an error but flagged this as internal to that package’s setup, so Uwe let RcppArmadillo onto CRAN, and I contacted that package’s maintainer—who was very receptive, and a change should be forthcoming. So with all that, we have 0.10.4.0.0 on CRAN, giving us Armadillo 10.4.0.

The full set of changes follows. As Armadillo 10.3.0 was not uploaded to CRAN, its changes are included too.

Changes in RcppArmadillo version 0.10.4.0.0 (2021-04-12)

  • Upgraded to Armadillo release 10.4.0 (Pressure Cooker)

    • faster handling of triangular matrices by log_det()

    • added log_det_sympd() for log determinant of symmetric positive matrices

    • added ARMA_WARN_LEVEL configuration option, to control the degree of emitted warning messages

    • reduced the default degree of warning messages, so that failed decompositions, failed saving/loading, etc, no longer emit warnings

  • Apply upstream corrections for arma::randn draws when using the alternative (here R) generator, and for arma::randg.

Changes in RcppArmadillo version 0.10.3.0.0 (2021-03-10)

  • Upgraded to Armadillo release 10.3 (Sunrise Chaos)

    • faster handling of symmetric positive definite matrices by pinv()

    • expanded .save() / .load() for dense matrices to handle coord_ascii format

    • for out of bounds access, element accessors now throw the more nuanced std::out_of_range exception, instead of only std::logic_error

    • improved quality of random numbers

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Cryptogram More Biden Cybersecurity Nominations

News:

President Biden announced key cybersecurity leadership nominations Monday, proposing Jen Easterly as the next head of the Cybersecurity and Infrastructure Security Agency and John “Chris” Inglis as the first ever national cyber director (NCD).

I know them both, and think they’re both good choices.

More news.

Cryptogram Backdoor Added — But Found — in PHP

Unknown hackers attempted to add a backdoor to the PHP source code. It was two malicious commits, with the subject “fix typo” and the names of known PHP developers and maintainers. They were discovered and removed before being pushed out to any users. But since 79% of the Internet’s websites use PHP, it’s scary.

Developers have moved PHP to GitHub, which has better authentication. Hopefully it will be enough — PHP is a juicy target.

Krebs on SecurityMicrosoft Patch Tuesday, April 2021 Edition

Microsoft today released updates to plug at least 110 security holes in its Windows operating systems and other products. The patches include four security fixes for Microsoft Exchange Server — the same systems that have been besieged by attacks on four separate (and zero-day) bugs in the email software over the past month. Redmond also patched a Windows flaw that is actively being exploited in the wild.

Nineteen of the vulnerabilities fixed this month earned Microsoft’s most-dire “Critical” label, meaning they could be used by malware or malcontents to seize remote control over vulnerable Windows systems without any help from users.

Microsoft released updates to fix four more flaws in Exchange Server versions 2013-2019 (CVE-2021-28480, CVE-2021-28481, CVE-2021-28482, CVE-2021-28483). Interestingly, all four were reported by the U.S. National Security Agency, although Microsoft says it also found two of the bugs internally. A Microsoft blog post published along with today’s patches urges Exchange Server users to make patching their systems a top priority.

Satnam Narang, staff research engineer at Tenable, said these vulnerabilities have been rated ‘Exploitation More Likely’ using Microsoft’s Exploitability Index.

“Two of the four vulnerabilities (CVE-2021-28480, CVE-2021-28481) are pre-authentication, meaning an attacker does not need to authenticate to the vulnerable Exchange server to exploit the flaw,” Narang said. “With the intense interest in Exchange Server since last month, it is crucial that organizations apply these Exchange Server patches immediately.”

Also patched today was a vulnerability in Windows (CVE-2021-28310) that’s being exploited in active attacks already. The flaw allows an attacker to elevate their privileges on a target system.

“This does mean that they will either need to log on to a system or trick a legitimate user into running the code on their behalf,” said Dustin Childs of Trend Micro. “Considering who is listed as discovering this bug, it is probably being used in malware. Bugs of this nature are typically combined with other bugs, such as a browser bug or PDF exploit, to take over a system.”

In a technical writeup on what they’ve observed since finding and reporting attacks on CVE-2021-28310, researchers at Kaspersky Lab noted the exploit they saw was likely used together with other browser exploits to escape “sandbox” protections of the browser.

“Unfortunately, we weren’t able to capture a full chain, so we don’t know if the exploit is used with another browser zero-day, or coupled with known, patched vulnerabilities,” Kaspersky’s researchers wrote.

Allan Liska, senior security architect at Recorded Future, notes that there are several remote code execution vulnerabilities in Microsoft Office products released this month as well. CVE-2021-28454 and CVE-2021-28451 involve Excel, while CVE-2021-28453 is in Microsoft Word and CVE-2021-28449 is in Microsoft Office. All four vulnerabilities are labeled by Microsoft as “Important” (not quite as bad as “Critical”). These vulnerabilities impact all versions of their respective products, including Office 365.

Other Microsoft products that got security updates this month include Edge (Chromium-based), Azure and Azure DevOps Server, SharePoint Server, Hyper-V, Team Foundation Server, and Visual Studio.

Separately, Adobe has released security updates for Photoshop, Digital Editions, RoboHelp, and Bridge.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any kinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

LongNowTouching the Future

Aboriginal fish traps.

In search of a new story for the future of artificial intelligence, Long Now speaker Genevieve Bell looks back to its cybernetic origins — and keeps on looking, thousands of years into the past.

From her new essay in Griffith Review:

In this moment, we need to be reminded that stories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together.

Genevieve Bell, “Touching the Future” in Griffith Review.

Cryptogram Cybersecurity Experts to Follow on Twitter

Security Boulevard recently listed the “Top-21 Cybersecurity Experts You Must Follow on Twitter in 2021.” I came in at #7. I thought that was pretty good, especially since I never tweet. My Twitter feed just mirrors my blog. (If you are one of the 134K people who read me from Twitter, “hi.”)

Worse Than FailureCodeSOD: A Form of Reuse

Writing code that is reusable is an important part of software development. In a way, we're not simply solving the problem at hand, but we're building tools we can use to solve similar problems in the future. Now, that's also a risk: premature abstraction is its own source of WTFs.

Daniel's peer wrote some JavaScript which is used for manipulating form inputs on customer contact forms. You know the sorts of forms: give us your full name, phone number, company name, email, and someone from our team will be in touch. This developer wrote the script, and offered it to clients to enhance their forms. Well, there was one problem: this script would get embedded in customer contact forms, but not all customer contact forms use the same conventions for how they name their fields.

There's an easy solution for that, involving parameterizing the code or adding a configuration step. There's a hard solution, where you build a heuristic that works for most forms. Then there's this solution, which… well…. Let me present the logic for handling just one field type, unredacted or elided.

for(llelementlooper=0; llelementlooper<document.forms[llformlooper2].elements.length; llelementlooper++) { var llelementphone = (document.forms[llformlooper2].elements[llelementlooper].name) if ( llformphone == '' && ((llelementphone=='phone') || (llelementphone=='Phone') || (llelementphone=='phone') || (llelementphone=='mobilephone') || (llelementphone=='PHONE') || (llelementphone=='sPhone') || (llelementphone=='strPhone') || (llelementphone=='Telephone') || (llelementphone=='telephone') || (llelementphone=='tel') || (llelementphone=='si_contact_ex_field6') || (llelementphone=='phonenumber') || (llelementphone=='phone_number') || (llelementphone=='phoneTextBox') || (llelementphone=='PhoneNumber_num_25_1') || (llelementphone=='Telefone') || (llelementphone=='Contact Phone') || (llelementphone=='submitted[row_3][phone]') || (llelementphone=='edit-profile-phone') || (llelementphone=='contactTelephone') || (llelementphone=='f4') || (llelementphone=='Contact-Phone') || (llelementphone=='formItem_239') || (llelementphone=='phone_r') || (llelementphone=='PhoneNo') || (llelementphone=='LeadGen_ContactForm_98494_m0:Phone') || (llelementphone=='telefono') || (llelementphone=='ntelephone') || (llelementphone=='wtelephone') || (llelementphone=='watelephone') || (llelementphone=='form[telefoon]') || (llelementphone=='phone_work') || (llelementphone=='telephone-number') || (llelementphone=='ctl00$HeaderText$ctl00$PhoneText') || (llelementphone=='ctl00$ctl00$cphMain$cphInsideMain$widget1$ctl00$viewBiz$ctl00$phone$textbox') || (llelementphone=='ctl00$ctl00$ContentPlaceHolderBase$ContentPlaceHolderSideMenu$TextBoxPhone') || (llelementphone=='ctl00$SPWebPartManager1$g_c8bd31c3_e338_41df_bdbe_021242ca01c8$ctl01$ctl06$txtTextbox') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$MasterContentPlaceHolder$txtPhone') || (llelementphone=='curftelephone') || (llelementphone=='form[Telephone]') || (llelementphone=='tx_pilmailform_pi1[text][phone]') || (llelementphone=='ctl00$ctl00$templateMainContent$homeBanners$HomeBannerList$ctrLeads$txt_5_1') || (llelementphone=='ac_daytimeNumber') || (llelementphone=='daytime_phone') || (llelementphone=='r4') || (llelementphone=='ctl00$ContentPlaceHolderBody$Phone') || (llelementphone=='Fld10_label') || (llelementphone=='field333') || (llelementphone=='txtMobile') || (llelementphone=='form_nominator_phonenumber') || (llelementphone=='submitted[phone_no]') || (llelementphone=='submitted[phone]') || (llelementphone=='submitted[5]') || (llelementphone=='submitted[telephone_no]') || (llelementphone=='fields[Contact Phone]') || (llelementphone=='cf2_field_5') || (llelementphone=='a23786') || (llelementphone=='rpr_phone') || (llelementphone=='phone-number') || (llelementphone=='txt_homePhone') || (llelementphone=='your-number') || (llelementphone=='Contact_Phone') || (llelementphone=='ctl00$CPH_body$txtContactnumber') || (llelementphone=='profile_telephone') || (llelementphone=='item_meta[90]' && llfrmid==11823) || (llelementphone=='item_meta[181]' && llfrmid==26416) || (llelementphone=='input_4' && llfrmid==21452) || (llelementphone=='EditableTextField100' && llfrmid==13948) || (llelementphone=='EditableTextField205' && llfrmid==13948) || (llelementphone=='EditableTextField100' && llfrmid==13948) || (llelementphone=='EditableTextField166' && llfrmid==13948) || (llelementphone=='EditableTextField104' && llfrmid==13948) || (llelementphone=='cf2_field_4' && llfrmid==23878) || (llelementphone=='input_4' && llfrmid==24017) || (llelementphone=='cf_field_4' && 
llfrmid==15876) || (llelementphone=='cf5_field_5' && llfrmid==15876) || (llelementphone=='input_9' && llfrmid==17254) || (llelementphone=='input_2' && llfrmid==22954) || (llelementphone=='input_8' && llfrmid==23756) || (llelementphone=='input_3' && llfrmid==18793) || (llelementphone=='input_6' && llfrmid==24811) || (llelementphone=='input_3' && llfrmid==19880) || (llelementphone=='input_6' && llfrmid==19230) || (llelementphone=='input_3' && llfrmid==24747) || (llelementphone=='input_4' && llfrmid==25897) || (llelementphone=='text-481' && llfrmid==14451) || (llelementphone=='Form7111$formField_7576') || (llelementphone=='Form7168$formField_7673') || (llelementphone=='Form7116$formField_7592') || (llelementphone=='Form7150$formField_7645') || (llelementphone=='Form7153$formField_7655') || (llelementphone=='Form7119$formField_7600') || (llelementphone=='Form7123$formField_7608') || (llelementphone=='Form7161$formField_7665') || (llelementphone=='Form7176$formField_7690') || (llelementphone=='Form7172$formField_7681') || (llelementphone=='Form7113$formField_7584') || (llelementphone=='Form7106$formField_7568') || (llelementphone=='Form7111$formField_7576') || (llelementphone=='Form7136$formField_7628') || (llelementphone=='Form6482$formField_7621') || (llelementphone=='Form6548$formField_6988') || (llelementphone=='submitted[business_phone]') || (llelementphone=='tfa_3' && llfrmid==23388) || (llelementphone=='ContentObjectAttribute_ezsurvey_answer_4455_3633') || (llelementphone=='838ae21c-1f95-488f-a511-135a588a50fb_Phone') || (llelementphone=='plc$lt$zoneContent$pageplaceholder$pageplaceholder$lt$zoneRightContent$contentText$BizFormControl1$Bizform1$ctl00$Telephone$txt1st') || (llelementphone=='plc$lt$zoneContent$pageplaceholder$pageplaceholder$lt$zoneRightContent$contentText$BizFormControl1$Bizform1$ctl00$Telephone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$ContentAreaPlaceholderMain$ctl02$ContactForm_3$TextBoxTelephone') || (llelementphone=='plc$lt$Content2$pageplaceholder1$pageplaceholder1$lt$Content$BizForm$viewBiz$ctl00$Phone_Number') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$cphMainContent$C002$tbTelephone') || (llelementphone=='contact$tbPhoneNumber') || (llelementphone=='crMain$ctl00$txtPhone') || (llelementphone=='ctl00$PrimaryContent$tbPhone') || (llelementphone=='ff_nm_phone[]') || (llelementphone=='q5_phoneNumber5[phone]') || (llelementphone=='TechContactPhone') || (llelementphone=='referral_phone_number') || (llelementphone=='field8418998') || (llelementphone=='ctl00$Content$ctl00$txtPhone') || (llelementphone=='ctl00$PlaceHolderMain$ucContactUs$txtPhone') || (llelementphone=='m_field_id_4' && llfrmid==15091) || (llelementphone=='Field7' && llfrmid==23387) || (llelementphone=='input_4' && llfrmid==22578) || (llelementphone=='input_2' && llfrmid==11241) || (llelementphone=='input_7' && llfrmid==23633) || (llelementphone=='input_7' && llfrmid==22114) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('demo') != -1) && llfrmid==17544) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact') != -1) && llfrmid==17544) || (llelementphone=='field_4' && llfrmid==24654) || (llelementphone=='input_6' && llfrmid==24782) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact-us') != -1) && llfrmid==16794) || (llelementphone=='input_3' && (llformalyzerURL.indexOf('try-and-buy') != -1) && llfrmid==16794) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact-us') != -1) && llfrmid==23842) || 
(llelementphone=='input_4' && llfrmid==25451) || (llelementphone=='input_5' && llfrmid==24911) || (llelementphone=='input_3' && llfrmid==13417) || (llelementphone=='input_4' && llfrmid==23813) || (llelementphone=='input_4' && llfrmid==21483) || (llelementphone=='input_3' && llfrmid==25396) || (llelementphone=='input_3' && llfrmid==16175) || (llelementphone=='input_7' && llfrmid==25797) || (llelementphone=='input_4' && llfrmid==15650) || (llelementphone=='input_3' && llfrmid==22025) || (llelementphone=='input_3' && llfrmid==14534) || (llelementphone=='input_4' && llfrmid==25216) || (llelementphone=='input_5' && llfrmid==22884) || (llelementphone=='input_6' && llfrmid==25783) || (llelementphone=='text-747' && llfrmid==16324) || (llelementphone=='vfb-42' && llfrmid==24468) || (llelementphone=='vfb-33' && llfrmid==24468) || (llelementphone=='item_meta[57]' && llfrmid==25268) || (llelementphone=='item_meta[78]' && llfrmid==25268) || (llelementphone=='item_meta[85]' && llfrmid==25268) || (llelementphone=='item_meta[154]' && llfrmid==25268) || (llelementphone=='item_meta[220]' && llfrmid==25268) || (llelementphone=='item_meta[240]' && llfrmid==25268) || (llelementphone=='item_meta[286]' && llfrmid==25268) || (llelementphone=='fieldname5' && llfrmid==12535) || (llelementphone=='Question12' && llfrmid==24639) || (llelementphone=='ninja_forms_field_4' && llfrmid==19321) || (llelementphone=='EditableTextField' && llfrmid==15064) || (llelementphone=='form_fields[27]' && llfrmid==22688) || (llelementphone=='ctl00$body$phone') || (llelementphone=='ctl00$MainContent$txtPhone') || (llelementphone=='FreeTrialForm$Phone') || (llelementphone=='text-521ada035aa46') || (llelementphone=='C_BusPhone') || (llelementphone=='ctl00$ctl00$templateMainContent$pageContent$ctrLeads$txt_5_1') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl06$1204') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl06$1320') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1242') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1202') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl08$1242') || (llelementphone=='ctl00$MainColumnPlaceHolder$uxPhone') || (llelementphone=='ctl00$MainContent$DropZoneTop$columnDisplay$ctl04$controlcolumn$ctl00$WidgetHost$WidgetHost_widget$IDPhone') || (llelementphone=='ctl00$ctl05$txtPhone') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1219') || (llelementphone=='LeadGen_ContactForm_33872_m419365:Phone') || (llelementphone=='F02220803') || (llelementphone=='h2c0f') || (llelementphone=='your_phone_number') || (llelementphone=='Question7') || (llelementphone=='Question51') || (llelementphone=='Question59') || (llelementphone=='Question35') || (llelementphone=='Question67') || (llelementphone=='field9740823') || (llelementphone=='message[phone]') || (llelementphone=='dnn$ctr1266$ViewKamakuraRegister$Phone') || (llelementphone=='phone1') || (llelementphone=='inf_field_Phone1') || (llelementphone=='hscontact_phone') || (llelementphone=='data[Contact][phone]') || (llelementphone=='fields[Phone]') || (llelementphone=='contact[PhoneNumber]') || (llelementphone=='phonename3') || (llelementphone=='UserPhone') || (llelementphone=='ctl00$MainBody$txtPhoneTech') || (llelementphone=='Telephone1') || (llelementphone=='PhoneNumber') || (llelementphone=='work_phone') || (llelementphone=='jform[contact_telephone]') || (llelementphone=='form[phone]') || (llelementphone=='RequestAQuote1$txtPhone') || (llelementphone=='06_Phone') || (llelementphone=='txtPhone') || 
(llelementphone=='field_location[und][0][phone]') || (llelementphone=='your-phone') || (llelementphone=='cmsForms_phone') || (llelementphone=='Txt_phonenumber') || (llelementphone=='businessPhone') || (llelementphone=='boxHomePhone') || (llelementphone=='HomePhone') || (llelementphone=='request-phone') || (llelementphone=='user[phone]') || (llelementphone=='DATA[PHONE]') || (llelementphone=='ctl00$ctl00$ctl00$cphContent$cphContent$cphContent$Phone') || (llelementphone=='ctl00$MainBody$Form1$obj11') || (llelementphone=='LeadGen_ContactForm_90888_m1467651:Phone') || (llelementphone=='Users[work]') || (llelementphone=='Question43') || (llelementphone=='aics_phone') || (llelementphone=='form[workphone]') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$cphMainContent$C006$tbTelephone') || (llelementphone=='cntnt01fbrp__47') || (llelementphone=='submitted[phone_number]') || (llelementphone=='flipform_phone') || (llelementphone=='txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder2$txtPhnno') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$ContentPlaceHolder1$mainContentRegion$BizFormControl1$Bizform1$ctl00$Phone') || (llelementphone=='inpPhone') || (llelementphone=='j_phone') || (llelementphone=='m6e81afbrp__53') || (llelementphone=='item_meta[119]') || (llelementphone=='ctl00$ContentPlaceHolder_Content$dataPhone') || (llelementphone=='ctl00$generalContentPlaceHolder$ctrlContactUs$tbPhone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$ContentPlaceHolder1$Contact_6$txtPhone') || (llelementphone=='ctl00$MainContent$tel') || (llelementphone=='dynform_element_3') || (llelementphone=='telephone_1') || (llelementphone=='cf_phone') || (llelementphone=='Lead_PrimaryPhone') || (llelementphone=='p_lt_zoneContent_wP_wP_lt_zonePageWidgets_RevolabsMicrosoftDynamicsCRMContactForm_1_txtBusinessPhone') || (llelementphone=='si_contact_ex_field2') || (llelementphone=='dnn$ctr458$XModPro$ctl00$ctl00$ctl00$Telephone') || (llelementphone=='ctl00$ctl06$txtTelephone') || (llelementphone=='dnn$ctr458$XModPro$ctl00$ctl00$ctl00$Telephone') || (llelementphone=='ctl00$ctl00$mainCopy$CPHCenter$ctl00$QuickRegControl_2$TBPhone') || (llelementphone=='LeadGen_ContactForm_38163_m457931:Phone') || (llelementphone=='LeadGen_ContactForm_29909_m371524:Phone') || (llelementphone=='LeadGen_ContactForm_32343_m395611:Phone') || (llelementphone=='LeadGen_ContactForm_31530_m388101:Phone') || (llelementphone=='LeadGen_ContactForm_27072_m349818:Phone') || (llelementphone=='LeadGen_ContactForm_28362_m354522:Phone') || (llelementphone=='LeadGen_ContactForm_28759_m358745:Phone') || (llelementphone=='LeadGen_ContactForm_32343_m395611:Phone') || (llelementphone=='LeadGen_ContactForm_33631_m415978:Phone') || (llelementphone=='LeadGen_ContactForm_30695_m380436:Phone') || (llelementphone=='LeadGen_ContactForm_29958_m372138:Phone') || (llelementphone=='LeadGen_ContactForm_31471_m387422:Phone') || (llelementphone=='LeadGen_ContactForm_32514_m397613:Phone') || (llelementphone=='LeadGen_ContactForm_29152_m362772:Phone') || (llelementphone=='LeadGen_ContactForm_32540_m397908:Phone') || (llelementphone=='pNumber') || (llelementphone=='organizer_phone') || (llelementphone=='ctl00$PlaceHolderMain$TrialDownloadForm$Phone') || (llelementphone=='ContactSubmission.Phone.Value') || (llelementphone=='ctl00$body$txtPhone') || (llelementphone=='p$lt$ctl03$pageplaceholder$p$lt$zoneCentre$editabletext$ucEditableText$widget1$ctl00$viewBiz$ctl00$Telephone$textbox') || (llelementphone=='ctl01_ctl00_pbForm1_ctl_phone_61f3') || 
(llelementphone=='ctl01$ctl00$ContentPlaceHolder1$ctl15$Phone') || (llelementphone=='p$lt$zoneContent$pageplaceholder$p$lt$zoneRightContent$contentText$ucEditableText$BizFormControl1$Bizform1$ctl00$Telephone$textbox') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder$ContentPlaceHolder$ctl00$fPhone') || (llelementphone=='pagecolumns_0$form_B502CC1EC1644B38B722523526D45F36$field_6BCFC01A782747DF8E785B5533850EEB') || (llelementphone=='cf3_field_10') || (llelementphone=='r_phone') || (llelementphone=='c_phone') || (llelementphone=='cf-1[]') || (llelementphone=='frm_phone') || (llelementphone=='Patient_Phone_Number') || (llelementphone=='ctl00$PageContent$ctl00$txtPhone') || (llelementphone=='dnn$ctr398$FormMaster$ctl_6e49bedd138a4684a66b62dcb1a34658') || (llelementphone=='id_tel') || (llelementphone=='field_contact_tel[und][0][value]') || (llelementphone=='Phone:') || (llelementphone=='ContactPhone') || (llelementphone=='submitted[telephone]') || (llelementphone=='ctl00$ContentPlaceHolder1$ctl04$txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder_pageContent$contact_phone') || (llelementphone=='264') || (llelementphone=='form_phone_number') || (llelementphone=='field8418998') || (llelementphone=='phoneTBox') || (llelementphone=='pagecontent_1$content_0$contentbottom_0$txtPhone') || (llelementphone=='application_0$PhoneTextBox') || (llelementphone=='submitted[phone_work]') || (llelementphone=='data[Lead][phone]') || (llelementphone=='a4475-telephone') || (llelementphone=='ctl00$Form$txtPhoneNumber') || (llelementphone=='signup_form_data[Phone]') || (llelementphone=='WorkPhone') || (llelementphone=='lldPhone') || (llelementphone=='web_form_1[field_102]value') || (llelementphone=='LeadGen_ContactForm_114694_m1832700:Phone') || (llelementphone=='phoneSalesForm') || (llelementphone=='fund_phone') || (llelementphone=='Phonepi_Phone') || (llelementphone=='field343') || (llelementphone=='cntnt01fbrp__48') || (llelementphone=='contact[phone]') || (llelementphone=='ctl00_ContentPlaceHolder1_ctl01_contactTelephoneBox_text') || (llelementphone=='ctl01$ctl00$ContentPlaceHolder1$ctl29$Phone') || (llelementphone=='plc$lt$content$pageplaceholder$pageplaceholder$lt$bodyColumnZone$LogilityContactUs$txtWorkPhone') || (llelementphone=='ctl00$ctl00$ctl00$cphBody$cphMain$cphMain$FormBuilder1$FormBuilderListView$ctrl4$FieldControl_Telephone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$cp_content$ctl02$RenderForm_1$rpFieldsets$ctl00$rpFields$ctl04$126d33a3_9f7f_4583_8c94_5820d58fc030') || (llelementphone=='tx_powermail_pi1[uid1266]') || (llelementphone=='si_contact_ex_field3') || (llelementphone=='inc_contact1$txtPhone') || (llelementphone=='item2_tel_1') || (llelementphone=='LeadGen_ContactForm_15766_m0:Phone') || (llelementphone=='ctl00$ContentPlaceHolder1$txtPhone') || (llelementphone=='Default$Content$FormViewer$FieldsRepeater$ctl04$ctl00$ViewTextBox') || (llelementphone=='Default$Content$FormViewer$FieldsRepeater$ctl04$ctl00$ViewTextBox') || (llelementphone=='ctl00$SecondaryPageContent$C005$ctl00$ctl00$C002$ctl00$ctl00$textBox_write') || (llelementphone=='_u216318653597056311') || (llelementphone=='_u630018292785751084') || (llelementphone=='data[Contact][office_phone]') || (llelementphone=='ctl00$ctl00$cphMainContent$Content$txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder1$txtTel') || (llelementphone=='item_5') || (llelementphone=='ques_21432') || (llelementphone=='phoneNum') || (llelementphone=='CONTACT_PHONE') || (llelementphone=='ff_nm_cf_phonetext[]') || 
(llelementphone=='WorkPhone') ) ) { llformphone = (document.forms[llformlooper2].elements[llelementlooper].value); if (llfrmid == debugid ) {alert('llformphone:'+llformphone+' llemailfound:'+llemailfound);} }

If the name property of the form element is equal to any one of the many many many items in this list, we can then extract the value and stuff it into a variable. And, since this will almost certainly break all the time, it's got a convenient "set the debugid and I'll spam alerts as I search the form".

Repeat this for every other field. It ends up being almost 2,000 lines of code, just to select the correct fields out of the forms.


Planet DebianFrançois Marier: Deleting non-decryptable restic snapshots

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
  decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
  decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune 
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
    github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
    github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
    github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
    github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra/command.go:897
main.main
    github.com/restic/restic/cmd/restic/main.go:98
runtime.main
    runtime/proc.go:204
runtime.goexit
    runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5 
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted
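
As an aside, this triage can be scripted. A minimal sketch, assuming restic find --pack reports its matches in the same "... in snapshot <id> (<date>)" format as the restic find --tree output above (review the list before forgetting anything):

# pack that restic prune complained about
pack=9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

# collect the snapshot IDs from the human-readable "restic find" output
snapshots=$(restic find --pack "$pack" | awk '/in snapshot/ {print $4}' | sort -u)

# inspect the list, then forget those snapshots
echo $snapshots
restic forget $snapshots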

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.

Planet DebianShirish Agarwal: what to write

First up, I am alive and well. I have been receiving calls from friends for quite some time, but now that I have become deaf it is a pain, and the hearing aids aren’t all that useful. Moreover, with us finding ourselves sinking lower and lower each and every day, it feels absurd to work out what to write and not write about India. Thankfully, I ran across this piece, which tells it in far more detail than I ever could. The only interesting and somewhat positive news I had is from the south of India; otherwise these are sad days, especially for the poor. The saddest story is that this time Covid has reached alarming proportions in India and, surprise surprise, this time the villain for many is my state of Maharashtra, even though it hasn’t received its share of GST proceeds for the last two years. And that was Kerala’s perspective: a different state, a different party, a different political ideology altogether.

Kerala Finance Minister Thomas Issac views on GST, October 22, 2020 Indian Express.

I also briefly share the death of the somewhat liberal film censorship in India, unlike Italy, which abolished film censorship altogether. I don’t really want to spend too much time on how we have become No. 2 in the world in Covid cases, and perhaps deaths as well. Many people still believe in herd immunity but don’t really know what it means. So without taking too much time and effort, I bid adieu. I may post again when I’m hopefully feeling emotionally better and stronger 😦

,

Planet DebianSteinar H. Gunderson: Squirrel!

“All comments on this article will now be moderated. The bar to pass moderation will be high, it's really time to think about something else. Did you all see that we have an exciting article on spinlocks?” Poor LWN <3

SPINLOCK!

MEYama

I’ve just set up the Yama LSM module on some of my Linux systems. Yama controls ptrace, which is the debugging and tracing API for Unix systems. The aim is to prevent a compromised process from using ptrace to compromise other processes and cause more damage. In most cases a process which can ptrace another process (which usually means having capability SYS_PTRACE, i.e. being root, or having the same UID as the target process) can interfere with that process in other ways, such as modifying its configuration and data files. But even so, I think it has the potential to make things more difficult for attackers without making the system more difficult to use.

If you put “kernel.yama.ptrace_scope = 1” in sysctl.conf (or write “1” to /proc/sys/kernel/yama/ptrace_scope) then a user process can only trace its child processes. This means that “strace -p” and “gdb -p” will fail when run as non-root, but apart from that everything else will work. Generally “strace -p” (tracing the system calls of another process) is of most use to the sysadmin, who can do it as root. The command “gdb -p” and variants of it are commonly used by developers, so Yama wouldn’t be a good thing on a system that is primarily used for software development.

Another option is “kernel.yama.ptrace_scope = 3”, which means that no-one can ptrace another process and the setting can’t be disabled without a reboot. This could be a good option for production servers that have no need for software development. It wouldn’t work well for a small server where the sysadmin needs to debug everything, but when dozens or hundreds of servers have their configuration rolled out via a provisioning tool this would be a good setting to include.

See Documentation/admin-guide/LSM/Yama.rst in the kernel source for the details.

When running with capability SYS_PTRACE (i.e. a root shell) you can ptrace anything else, and if necessary disable Yama by writing “0” to /proc/sys/kernel/yama/ptrace_scope.

I am enabling mode 1 on all my systems because I think it will make things harder for attackers while not making things more difficult for me.

Also note that SE Linux restricts SYS_PTRACE and also restricts cross-domain ptrace access, so the combination with Yama makes things extra difficult for an attacker.

Yama is enabled in the Debian kernels by default, so it’s very easy to set up for Debian users: just edit /etc/sysctl.d/whatever.conf and it will be enabled on boot.
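
A minimal sketch of that setup (the file name below is arbitrary; sysctl --system applies the setting immediately rather than waiting for a reboot):

# persist Yama mode 1 (any file name under /etc/sysctl.d/ will do)
echo 'kernel.yama.ptrace_scope = 1' | sudo tee /etc/sysctl.d/99-yama.conf

# load all sysctl.d files now instead of waiting for the next boot
sudo sysctl --system

# verify the current mode
cat /proc/sys/kernel/yama/ptrace_scope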

Krebs on SecurityParkMobile Breach Exposes License Plate Data, Mobile Numbers of 21M Users

Someone is selling account information for 21 million customers of ParkMobile, a mobile parking app that’s popular in North America. The stolen data includes customer email addresses, dates of birth, phone numbers, license plate numbers, hashed passwords and mailing addresses.

KrebsOnSecurity first heard about the breach from Gemini Advisory, a New York City based threat intelligence firm that keeps a close eye on the cybercrime forums. Gemini shared a new sales thread on a Russian-language crime forum that included my ParkMobile account information in the accompanying screenshot of the stolen data.

Included in the data were my email address and phone number, as well as license plate numbers for four different vehicles we have used over the past decade.

Asked about the sales thread, Atlanta-based ParkMobile said the company published a notification on Mar. 26 about “a cybersecurity incident linked to a vulnerability in a third-party software that we use.”

“In response, we immediately launched an investigation with the assistance of a leading cybersecurity firm to address the incident,” the notice reads. “Out of an abundance of caution, we have also notified the appropriate law enforcement authorities. The investigation is ongoing, and we are limited in the details we can provide at this time.”

The statement continues: “Our investigation indicates that no sensitive data or Payment Card Information, which we encrypt, was affected. Meanwhile, we have taken additional precautionary steps since learning of the incident, including eliminating the third-party vulnerability, maintaining our security, and continuing to monitor our systems.”

Asked for clarification on what the attackers did access, ParkMobile confirmed it included basic account information – license plate numbers, and if provided, email addresses and/or phone numbers, and vehicle nickname.

“In a small percentage of cases, there may be mailing addresses,” spokesman Jeff Perkins said.

ParkMobile doesn’t store user passwords, but rather it stores the output of a fairly robust one-way password hashing algorithm called bcrypt, which is far more resource-intensive and expensive to crack than common alternatives like MD5. The database stolen from ParkMobile and put up for sale includes each user’s bcrypt hash.

“You are correct that bcrypt hashed and salted passwords were obtained,” Perkins said when asked about the screenshot in the database sales thread.

“Note, we do not keep the salt values in our system,” he said. “Additionally, the compromised data does not include parking history, location history, or any other sensitive information. We do not collect social security numbers or driver’s license numbers from our users.”

ParkMobile says it is finalizing an update to its support site confirming the conclusion of its investigation. But I wonder how many of its users were even aware of this security incident. The Mar. 26 security notice does not appear to be linked to other portions of the ParkMobile site, and it is absent from the company’s list of recent press releases.

It’s also curious that ParkMobile hasn’t asked or forced its users to change their passwords as a precautionary measure. I used the ParkMobile app to reset my password, but there was no messaging in the app that suggested this was a timely thing to do.

So if you’re a ParkMobile user, changing your account password might be a pro move. If it’s any consolation, whoever is selling this data is doing so for an insanely high starting price ($125,000) that is unlikely to be paid by any cybercriminal to a new user with no reputation on the forum.

More importantly, if you used your ParkMobile password at any other site tied to the same email address, it’s time to change those credentials as well (and stop re-using passwords).

The breach comes at a tricky time for ParkMobile. On March 9, the European parking group EasyPark announced its plans to acquire the company, which operates in more than 450 cities in North America.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 02)

This week on my podcast, part two of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

MERiverdale

I’ve been watching the show Riverdale on Netflix recently. It’s an interesting modern take on the Archie comics. Having watched Josie and the Pussycats in Outer Space when I was younger I was anticipating something aimed towards a similar audience. As solving mysteries and crimes was apparently a major theme of the show I anticipated something along similar lines to Scooby Doo, some suspense and some spooky things, but then a happy ending where criminals get arrested and no-one gets hurt or killed while the vast majority of people are nice. Instead the first episode has a teen being murdered and Ms Grundy being obsessed with 15yo boys and sleeping with Archie (who’s supposed to be 15 but played by a 20yo actor).

Everyone in the show has some dark secret. The filming has a dark theme, the sky is usually overcast and it’s generally gloomy. This is a significant contrast to Veronica Mars which has some similarities in having a young cast, a sassy female sleuth, and some similar plot elements. Veronica Mars has a bright theme and a significant comedy element in spite of dealing with some dark issues (murder, rape, child sex abuse, and more). But Riverdale is just dark. Anyone who watches this with their kids expecting something like Scooby Doo is in for a big surprise.

There are lots of interesting stylistic elements in the show. Lots of clothing and uniform designs that seem to date from the 1940’s. It seems like some alternate universe where kids have smartphones and laptops while dressing in the style of the 1940s. One thing that annoyed me was construction workers using tools like sledge-hammers instead of excavators. A society that has smart phones but no earth-moving equipment isn’t plausible.

On the upside, there is a racial mix in the show that reflects American society more accurately than the original Archie comics did, and homophobia is much less common than in most parts of our society. The show treats both race issues and gay/lesbian issues in an accurate way (portraying some bigotry), while the main characters aren’t racist or homophobic.

I think it’s generally an OK show and recommend it to people who want a dark show. It’s a good show to watch while doing something on a laptop so you can check Wikipedia for the references to 1940s stuff (like when Bikinis were invented). I’m halfway through season 3, which isn’t as good as the first two; I don’t know whether it will get better later in the season or whether I should have stopped after season 2.

I don’t usually review fiction, but the interesting aesthetics of the show made it deserve a review.

MEStorage Trends 2021

The Viability of Small Disks

Less than a year ago I wrote a blog post about storage trends [1]. My main point in that post was that disks smaller than 2TB weren’t viable then and 2TB disks wouldn’t be economically viable in the near future.

Now MSY has 2TB disks for $72 and 2TB SSDs for $245, saving $173 if you get a hard drive (compared to saving $240 10 months ago). Given the difference in performance and noise, 2TB hard drives won’t be worth using for most applications nowadays.

NVMe vs SSD

Last year NVMe prices were very comparable to SSD prices; I was hoping that trend would continue and SSDs would go away. Now for sizes of 1TB and smaller, NVMe and SSD prices are very similar, but for 2TB the NVMe prices are twice those of SSDs – presumably partly due to poor demand for 2TB NVMe. There are also no NVMe devices larger than 2TB on sale at MSY (a store which caters to home users, not special server equipment), but SSDs go up to 8TB.

It seems that NVMe is only really suitable for workstation storage and for cache etc on a server. So SATA SSDs will be around for a while.

Small Servers

There is a range of low-end servers which support a limited number of disks. Dell has 2-disk servers and 4-disk servers. If one of those had 8TB SSDs you could have 8TB of RAID-1 or 24TB of RAID-Z storage in a low-end server. That covers the vast majority of servers (small business or workgroup servers tend to have less than 8TB of storage).
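
To spell out the arithmetic: a 2-disk RAID-1 of 8TB SSDs gives 8TB of usable space (the second disk is a mirror), while a 4-disk RAID-Z of 8TB SSDs keeps roughly one disk's worth of parity, leaving about (4-1)*8TB = 24TB usable.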

Larger Servers

Anandtech has an article on Seagate’s roadmap to 120TB disks [2]. They currently sell 20TB disks using HAMR technology.

Currently the biggest disks that MSY sells are 10TB for $395, which was also the biggest disk they were selling last year. Last year MSY only sold SSDs up to 2TB in size (larger ones were available from other companies at much higher prices), now they sell 8TB SSDs for $949 (4* capacity increase in less than a year). Seagate is planning 30TB disks for 2023, if SSDs continue to increase in capacity by 4* per year we could have 128TB SSDs in 2023. If you needed a server with 100TB of storage then having 2 or 3 SSDs in a RAID array would be much easier to manage and faster than 4*30TB disks in an array.

When you have a server with many disks you can expect to have more disk failures due to vibration. One time I built a server with 18 disks and took disks from 2 smaller servers that had 4 and 5 disks. The 9 disks which had been working reliably for years started having problems within weeks of running in the bigger server. This is one of the many reasons for paying extra for SSD storage.

Seagate is apparently planning 50TB disks for 2026 and 100TB disks for 2030. If that’s the best they can do then SSD vendors should be able to sell larger products sooner at prices that are competitive. Matching hard drive prices is not required, getting to less than 4* the price should be enough for most customers.

The Anandtech article is worth reading, it mentions some interesting features that Seagate are developing such as having 2 actuators (which they call Mach.2) so the drive can access 2 different tracks at the same time. That can double the performance of a disk, but that doesn’t change things much when SSDs are more than 100* faster. Presumably the Mach.2 disks will be SAS and incredibly expensive while providing significantly less performance than affordable SATA SSDs.

Computer Cases

In my last post I speculated on the appearance of smaller cases designed to not have DVD drives or 3.5″ hard drives. Such cases still haven’t appeared apart from special purpose machines like the NUC that were available last year.

It would be nice if we could get a new industry standard for smaller power supplies. Currently power supplies are expected to be almost 5 inches wide (due to the expectation of a 5.25″ DVD drive mounted horizontally). We need some industry standards for smaller PCs that aren’t like the NUC; the NUC is very nice, but most people who build their own PC need more space than that. I still think that planning on USB DVD drives is the right way to go. I’ve got 4 PCs in my home that are regularly used, and CDs and DVDs are used so rarely that sharing a single DVD drive among all 4 wouldn’t be a problem.

Conclusion

I’m tempted to get a couple of 4TB SSDs for my home server, at $487 each; it currently has 2*500G SSDs and 3*4TB disks. I would have to remove some unused files but that’s probably not too hard to do as I have lots of old backups etc on there. Another possibility is to use 2*4TB SSDs for most stuff and 2*4TB disks for backups.

I’m recommending that all my clients only use SSDs for their storage. I only have one client with enough storage that disks are the only option (100TB of storage) but they moved all the functions of that server to AWS and use S3 for the storage. Now I don’t have any clients doing anything with storage that can’t be done in a better way on SSD for a price difference that’s easy for them to afford.

Affordable SSD also makes RAID-1 in workstations more viable. 2 disks in a PC is noisy if you have an office full of them and produces enough waste heat to be a reliability issue (most people don’t cool their offices adequately on weekends). 2 SSDs in a PC is no problem at all. As 500G SSDs are available for $73 it’s not a significant cost to install 2 of them in every PC in the office (more cost for my time than hardware). I generally won’t recommend that hard drives be replaced with SSDs in systems that are working well. But if a machine runs out of space then replacing it with SSDs in a RAID-1 is a good choice.

Moore’s law might cover SSDs, but it definitely doesn’t cover hard drives. Hard drives have fallen way behind developments of most other parts of computers over the last 30 years, hopefully they will go away soon.

Kevin RuddBBC Breakfast: HRH Prince Philip

INTERVIEW VIDEO
TV INTERVIEW
BBC BREAKFAST
12 APRIL 2021

The post BBC Breakfast: HRH Prince Philip appeared first on Kevin Rudd.

Kevin RuddThe Guardian: Australia’s vaccination rollout strategy has been an epic fail. Now Scott Morrison is trying to gaslight us

Australians should be proud of their success in suppressing and eliminating coronavirus so far. This is largely due to the efforts of state governments – Labor and Liberal – in containing local outbreaks through a combination of mandatory quarantines, temporary lockdowns and effective contact tracing. And the Australian people themselves have played the biggest part by making this strategy of containment, and eventual elimination, work.

The same cannot be said of the federal government’s vaccination strategy where they have politically trumpeted their success. The daily reality of the vaccination rollout strategy reveals a litany of policy and administrative failures.

Thirteen months into the Covid-19 crisis, the states collectively get a strong B-plus on virus containment; whereas the federal government gets a D-minus on its vaccine rollout.

With the states constitutionally responsible for most of the public health response, Scott Morrison’s main role was: to secure in advance sufficient international and domestic vaccine supply; to do so from multiple vaccine developers to mitigate against the risks of individual vaccines failing; and to organise in advance a distribution strategy that would get the vaccine to the people as rapidly as possible.

On this core responsibility, Morrison has failed. His strategy, once again, is a political strategy. It has been to blame others – the states on delivery and the Europeans on supply.

Ultimately, the delivery of an effective vaccine to the people is the only effective long-term guarantee on a return to public health normality – and therefore economic normality, including the opening of our international borders.

We are now in a race against time to immunise our population, overcome this virus, and start the task of rebuilding from the pandemic. However, five months after Morrison announced Australians were “at the front of the queue” for vaccination, our rollout is presently ranked 104th in the world – sandwiched between Lebanon and Bangladesh – based on the latest seven-day average vaccination rate. This is a national disgrace.

Australians understand this is a race. It is a race between our vaccination rollout to eliminate the virus from our shores, and the rolling risk of the virus mutating. We are reminded of this every time the virus leaks out of hotel quarantine, and whenever we read heart-wrenching stories out of India or Brazil. We understand it when we learn about deadlier and more infectious variants emerging overseas that threaten not only those countries, but the roughly 36,000 stranded Australians who are still trickling home months too late. Each extra day they spend waiting for a quarantine place is another day they risk being exposed to a new variant they could bring back to Australia.

At present, we do not know when all Australians will be vaccinated against Covid-19. We don’t even know when all of our frontline doctors, nurses and quarantine workers will be vaccinated.

Early warnings that Australia should diversify its vaccine portfolio and avoid putting too many eggs in the AstraZeneca basket have been proven right.

And despite the prime minister telling us he has “secured” more Pfizer vaccines, to be delivered sometime around Christmas, the truth is no shipment is truly secure until it is arrived and ready for use.

The truth is we now have no vaccine strategy for half the country this year. Many countries will probably finish rolling out their vaccines before millions of us even get our first shot.

The early perceived political “successes” in Australia’s handling of the virus appears to have induced on Morrison’s part a breathtaking level of political complacency on vaccination strategy that borders on professional negligence. Morrison’s inner circle seem to inhabit an alternate reality. The key decision-makers (many of whom, it seems, have already been vaccinated) insist there is no race at all.

Despite earlier doubling down on unrealistic targets, Morrison now tries to gaslight Australians by claiming he didn’t actually say what we all heard him say. That we would be at the “front of the queue”, that we had access to the best vaccines in the world, and that we would have four million vaccinations done by the end of March. All bullshit.

So what could the prime minister now do? First, Morrison should own up to his responsibilities. Doctors can give excellent medical advice, but they aren’t necessarily experienced at public sector management, international diplomacy or working out how and when vaccines will be delivered to surgeries. Morrison’s job is to ensure that his health bureaucracy has a clear, workable communications plan with the nation’s medical workforce on vaccine distribution.

At the same time, Morrison should recognise that his own hyperactive political messaging is actually eroding the public’s confidence rather than boosting it.

One lesson from the pandemic’s first wave was that many Australians felt far more reassured by straight-talkers than evasive ministers and officials. Public confidence in the vaccination program isn’t eroded by people asking reasonable questions, but by the failure of governments to give straight and factual answers. Morrison and his officials could inspire more confidence if they were less shifty, more candid or simply vacated the public communications space entirely to the chief medical officer.

Second, Australia might look to the United States, which is weeks away from producing a surplus of vaccines. After a century of alliance, partnership and camaraderie, Washington may be able to provide a top-up to at least help vaccinate our most vulnerable frontline workers with the best vaccines available.

Third, we should be learning from our friends and allies about their experiences running mass vaccination centres. One of the major challenges associated with the shift to Pfizer from AstraZeneca is that it requires colder storage facilities and, perhaps most significantly, it requires the second shot to be given about three weeks after the first (rather than about three months for AstraZeneca). The government’s plan A – to mass-vaccinate millions through GP clinics and pharmacies – always seemed far-fetched. It seems inevitable that we may now need to pivot to mass vaccination centres like those in the US.

Fourth, the government must overhaul its local production effort. The pharmaceutical industry is reportedly rife with stories of Australian officials not answering correspondence, not returning phone calls and being generally uninterested in discussing vaccine purchases until several months into the pandemic, by which time those companies had promised billions of doses to other countries.

The same attitudes appear to have driven the government’s approach to our own country’s local mRNA experts. As the Guardian reported last week, “Frustrated experts say Australia could already be producing mRNA Covid vaccines if it had acted earlier”. Any sensible government would have been moving heaven and earth to help make this happen months ago, but not Morrison it would seem.

Australians are not fools. They understand just how vulnerable we remain. And we all know that waiting until Christmas isn’t good enough. As the actor David Wenham tweeted after Morrison’s press conference on Friday, “I just rang my local Priceline pharmacy and ordered 100 million doses of Pfizer vaccine. This is great news and puts Australia at the front of the queue again.” And David, as we all know, is a better actor than Scotty from Marketing will ever be.

First published in The Guardian

Image: Mike Bowers/The Guardian

The post The Guardian: Australia’s vaccination rollout strategy has been an epic fail. Now Scott Morrison is trying to gaslight us appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Exceptionally General

Andres noticed this pattern showing up in his company's code base, and at first didn't think much of it:

try {
    /*code here*/
} catch (Exception ex) {
    ExceptionManager.HandleException(ex);
    throw ex;
}

It's not uncommon to have some sort of exception handling framework, maybe some standard logging, or the like. And even if you're using that, it may still make sense to re-throw the exception so another layer of the application can also handle it. But there was just something about it that got Andres's attention.

So Andres did what any curious programmer would do: checked the implementation of HandleException.

public static Exception HandleException(Exception ex)
{
    if (ex is ArgumentException)
        return new InvalidOperationException("(ExceptionManager) Ocurrió un error en el argumento.");
    if (ex is ArgumentOutOfRangeException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred because of an out of range value.");
    if (ex is ArgumentNullException)
        return new InvalidOperationException("(ExceptionManager) On error ocurred tried to access a null value.");
    if (ex is InvalidOperationException)
        return new InvalidOperationException("(ExceptionManager) On error ocurred performing an invalid operation.");
    if (ex is SmtpException)
        return new InvalidOperationException("(ExceptionManager)An error ocurred trying to send an email.");
    if (ex is SqlException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred accessing data.");
    if (ex is IOException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred accesing files.");
    return new InvalidOperationException("(ExceptionManager) An error ocurred while trying to perform the application.");
}

So, what this code is trying to do is bad: it wants to destroy all the exception information and convert actual meaningful errors into generic InvalidOperationExceptions. If this code did what it intended to do, it'd be destroying the backtrace, concealing the origin of the error, and making the application significantly harder to debug.

Fortunately, this code actually doesn't do anything. It constructs the new objects, and returns them, but that return value isn't consumed, so it just vanishes into the ether. Then our actual exception handler rethrows the original exception.

The old saying is "no harm, no foul", and while this doesn't do any harm, it's definitely quite foul.


,

Planet DebianJelmer Vernooij: The upstream ontologist

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The upstream ontologist is a project that extracts metadata about upstream projects in a consistent format. It does this with a combination of heuristics and reading ecosystem-specific metadata files, such as Python’s setup.py and Rust’s Cargo.toml, as well as e.g. scanning README files.

Supported Data Sources

It will extract information from a wide variety of sources, including:

Supported Fields

Fields that it currently provides include:

  • Homepage: homepage URL
  • Name: name of the upstream project
  • Contact: contact address of some sort of the upstream (e-mail, mailing list URL)
  • Repository: VCS URL
  • Repository-Browse: Web URL for viewing the VCS
  • Bug-Database: Bug database URL (for web viewing, generally)
  • Bug-Submit: URL to use to submit new bugs (either on the web or an e-mail address)
  • Screenshots: List of URLs with screenshots
  • Archive: Archive used - e.g. SourceForge
  • Security-Contact: e-mail or URL with instructions for reporting security issues
  • Documentation: Link to documentation on the web
  • Wiki: Wiki URL
  • Summary: one-line description of the project
  • Description: longer description of the project
  • License: Single line license description (e.g. "GPL 2.0") as declared in the metadata[1]
  • Copyright: List of copyright holders
  • Version: Current upstream version
  • Security-MD: URL to markdown file with security policy

All data fields have a “certainty” associated with them (“certain”, “confident”, “likely” or “possible”), which gets set depending on how the data was derived or where it was found. If multiple possible values were found for a specific field, then the value with the highest certainty is taken.

Interface

The ontologist provides a high-level Python API as well as two command-line tools that can write output in two different formats:

For example, running guess-upstream-metadata on dulwich:

 % guess-upstream-metadata
 <string>:2: (INFO/1) Duplicate implicit target name: "contributing".
 Name: dulwich
 Repository: https://www.dulwich.io/code/
 X-Security-MD: https://github.com/dulwich/dulwich/tree/HEAD/SECURITY.md
 X-Version: 0.20.21
 Bug-Database: https://github.com/dulwich/dulwich/issues
 X-Summary: Python Git Library
 X-Description: |
   This is the Dulwich project.
   It aims to provide an interface to git repos (both local and remote) that
   doesn't call out to git directly but instead uses pure Python.
 X-License: Apache License, version 2 or GNU General Public License, version 2 or later.
 Bug-Submit: https://github.com/dulwich/dulwich/issues/new

Lintian-Brush

lintian-brush can update DEP-12-style debian/upstream/metadata files, which hold information about the upstream project being packaged, as well as the Homepage field in the debian/control file, based on information provided by the upstream ontologist. By default, it only imports data with the highest certainty - you can override this by specifying the --uncertain command-line flag.
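
For example, a minimal sketch, run from inside a package's Git checkout:

# among other fixes, update debian/upstream/metadata and the Homepage field,
# also importing guesses below the default certainty threshold
lintian-brush --uncertain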

[1]Obviously this won't be able to describe the full licensing situation for many projects. Projects like scancode-toolkit are more appropriate for that.

Planet DebianVishal Gupta: Sikkim 101 for Backpackers

Host to Kanchenjunga, the world’s third-highest mountain peak and the endangered Red Panda, Sikkim is a state in northeastern India. Nestled between Nepal, Tibet (China), Bhutan and West Bengal (India), the state offers a smorgasbord of cultures and cuisines. That said, it’s hardly surprising that the old spice route meanders through western Sikkim, connecting Lhasa with the ports of Bengal. Although the latter could also be attributed to cardamom (kali elaichi), a perennial herb native to Sikkim, which the state is the second-largest producer of, globally. Lastly, having been to and lived in India, all my life, I can confidently say Sikkim is one of the cleanest & safest regions in India, making it ideal for first-time backpackers.

Brief History

  • 17th century: The Kingdom of Sikkim is founded by the Namgyal dynasty and ruled by Buddhist priest-kings known as the Chogyal.
  • 1890: Sikkim becomes a princely state of British India.
  • 1947: Sikkim continues its protectorate status with the Union of India, post-Indian-independence.
  • 1973: Anti-royalist riots take place in front of the Chogyal's palace, by Nepalis seeking greater representation.
  • 1975: Referendum leads to the deposition of the monarchy and Sikkim joins India as its 22nd state.

Languages

  • Official: English, Nepali, Sikkimese/Bhotia and Lepcha
  • Though Hindi and Nepali share the same script (Devanagari), they are not mutually intelligible. Yet, most people in Sikkim can understand and speak Hindi.

Ethnicity

  • Nepalis: Migrated in large numbers (from Nepal) and soon became the dominant community
  • Bhutias: People of Tibetan origin. Major inhabitants in Northern Sikkim.
  • Lepchas: Original inhabitants of Sikkim

Food

  • Tibetan/Nepali dishes (mostly consumed during winter)
    • Thukpa: Noodle soup, rich in spices and vegetables. Usually contains some form of meat. Common variations: Thenthuk and Gyathuk
    • Momos: Steamed or fried dumplings, usually with a meat filling.
    • Saadheko: Spicy marinated chicken salad.
    • Gundruk Soup: A soup made from Gundruk, a fermented leafy green vegetable.
    • Sinki : A fermented radish tap-root product, traditionally consumed as a base for soup and as a pickle. Eerily similar to Kimchi.
  • While pork and beef are pretty common, finding vegetarian dishes is equally easy.
  • Staple: Dal-Bhat with Subzi. Rice is a lot more common than wheat, possibly due to greater carb content and proximity to West Bengal, India’s largest producer of rice.
  • Good places to eat in Gangtok
    • Hamro Bhansa Ghar, Nimtho (Nepali)
    • Taste of Tibet
    • Dragon Wok (Chinese & Japanese)

Buddhism in Sikkim

  • Bayul Demojong (Sikkim), is the most sacred Land in the Himalayas as per the belief of the Northern Buddhists and various religious texts.
  • Sikkim was blessed by Guru Padmasambhava, the great Buddhist saint who visited Sikkim in the 8th century and consecrated the land.
  • However, Buddhism is said to have reached Sikkim only in the 17th century with the arrival of three Tibetan monks viz. Rigdzin Goedki Demthruchen, Mon Kathok Sonam Gyaltshen & Rigdzin Legden Je at Yuksom. Together, they established a Buddhist monastery.
  • In 1642 they crowned Phuntsog Namgyal as the first monarch of Sikkim and gave him the title of Chogyal, or Dharma Raja.
  • The faith became popular through its royal patronage and soon many villages had their own monastery.
  • Today Sikkim has over 200 monasteries.

Major monasteries

  • Rumtek Monastery, 20Km from Gangtok
  • Lingdum/Ranka Monastery, 17Km from Gangtok
  • Phodong Monastery, 28Km from Gangtok
  • Ralang Monastery, 10Km from Ravangla
  • Tsuklakhang Monastery, Royal Palace, Gangtok
  • Enchey Monastery, Gangtok
  • Tashiding Monastery, 35Km from Ravangla


Reaching Sikkim

  • Gangtok, being the capital, is easiest to reach amongst other regions, by public transport and shared cabs.
  • By Air:
    • Pakyong (PYG) :
      • Nearest airport from Gangtok (about 1 hour away)
      • Tabletop airport
      • Reserved cabs cost around INR 1200.
      • As of Apr 2021, the only flights to PYG are from IGI (Delhi) and CCU (Kolkata).
    • Bagdogra (IXB) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Larger airport with flights to most major Indian cities.
      • Reserved cabs cost about INR 3000. Shared cabs cost about INR 350.
  • By Train:
    • New Jalpaiguri (NJP) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Reserved cabs cost about INR 3000. Shared cabs from INR 350.
  • By Road:
    • NH10 connects Siliguri to Gangtok
    • If you can’t find buses plying to Gangtok directly, reach Siliguri and then take a cab to Gangtok.
  • Sikkim Nationalised Transport Div. also runs hourly buses between Siliguri and Gangtok and daily buses on other common routes. They’re cheaper than shared cabs.
  • Wizzride also operates shared cabs between Siliguri/Bagdogra/NJP, Gangtok and Darjeeling. They cost about the same as shared cabs but pack in half as many people in “luxury cars” (Innova, Xylo, etc.) and are hence more comfortable.

Gangtok

  • Time needed: 1D/1N
  • Places to visit:
    • Hanuman Tok
    • Ganesh Tok
    • Tashi View Point [6,800ft]
    • MG Marg
    • Sikkim Zoo
    • Gangtok Ropeway
    • Enchey Monastery
    • Tsuklakhang Palace & Monastery
  • Hostels: Tagalong Backpackers (would strongly recommend), Zostel Gangtok
  • Places to chill: Travel Cafe, Café Live & Loud and Gangtok Groove
  • Places to shop: Lal Market and MG Marg

Getting Around

  • Taxis operate on a reserved or shared basis. In case of the latter, you can pool with other commuters your taxis will pick up and drop en-route.
  • Naturally shared taxis only operate on popular routes. The easiest way to get around Gangtok is to catch a shared cab from MG Marg.
  • Reserved taxis for Gangtok sightseeing cost around INR 1000-1500, depending upon the spots you’d like to see
  • Key taxi/bus stands :
    • Deorali stand: For Darjeeling, Siliguri, Kalimpong
    • Vajra stand: For North & East Sikkim (Tsomgo Lake & Nathula)
    • Rumtek taxi: For Ravangla, Pelling, Namchi, Geyzing, Jorethang and Singtam.

Exploring Gangtok on an MTB


North Sikkim

  • The easiest & most economical way to explore North Sikkim is the 3D/2N package offered by shared-cab drivers.
  • This includes food, permits, cab rides and accommodation (1N in Lachen and 1N in Lachung)
  • The accommodation on both nights are at homestays with bare necessities, so keep your hopes low.
  • In the spirit of sustainable tourism, you’ll be asked to discard single-use plastic bottles, so please carry a bottle that you can refill along the way.
  • Zero Point and Gurdongmer Lake are snow-capped throughout the year

3D/2N Shared-cab Package Itinerary

  • Day 1
    • Gangtok (10am) - Chungthang - Lachung (stay)
  • Day 2
    • Pre-lunch : Lachung (6am) - Yumthang Valley [12,139ft] - Zero Point - Lachung [15,300ft]
    • Post-lunch : Lachung - Chungthang - Lachen (stay)
  • Day 3
    • Pre-lunch : Lachen (5am) - Kala Patthar - Gurdongmer Lake [16,910ft] - Lachen
    • Post-lunch : Lachen - Chungthang - Gangtok (7pm)
  • This itinerary is idealistic and depends on the level of snowfall.
  • Some drivers might switch up Day 2 and 3 itineraries by visiting Lachen and then Lachung, depending upon the weather.
  • Areas beyond Lachen & Lachung are heavily militarized since the Indo-China border is only a few miles away.


East Sikkim

Zuluk and Silk Route

  • Time needed: 2D/1N
  • Zuluk [9,400ft] is a small hamlet with an excellent view of the eastern Himalayan range including the Kanchenjunga.
  • Was once a transit point to the historic Silk Route from Tibet (Lhasa) to India (West Bengal).
  • The drive from Gangtok to Zuluk takes at least four hours. Hence, it makes sense to spend the night at a homestay and space out your trip to Zuluk

Tsomgo Lake and Nathula

  • Time Needed : 1D
  • A Protected Area Permit is required to visit these places, due to their proximity to the Chinese border
  • Tsomgo/Chhangu Lake [12,313ft]
    • Glacial lake, 40 km from Gangtok.
    • Remains frozen during the winter season.
    • You can also ride on the back of a Yak for INR 300
  • Baba Mandir
    • An old temple dedicated to Baba Harbhajan Singh, a Sepoy in the 23rd Regiment, who died in 1962 near the Nathu La during Indo – China war.
  • Nathula Pass [14,450ft]
    • Located on the Indo-Tibetan border crossing of the Old Silk Route, it is one of the three open trading posts between India and China.
    • Plays a key role in the Sino-Indian Trade and also serves as an official Border Personnel Meeting(BPM) Point.
    • May get cordoned off by the Indian Army in event of heavy snowfall or for other security reasons.


West Sikkim

  • Time needed: 3D/2N
  • Hostels at Pelling : Mochilerro Ostillo

Itinerary

Day 1: Gangtok - Ravangla - Pelling

  • Leave Gangtok early, for Ravangla through the Temi Tea Estate route.
  • Spend some time at the tea garden and then visit Buddha Park at Ravangla
  • Head to Pelling from Ravangla

Day 2: Pelling sightseeing

  • Hire a cab and visit Skywalk, Pemayangtse Monastery, Rabdentse Ruins, Kecheopalri Lake, Kanchenjunga Falls.

Day 3: Pelling - Gangtok/Siliguri

  • Wake up early to catch a glimpse of Kanchenjunga at the Pelling Helipad around sunrise
  • Head back to Gangtok on a shared-cab
  • You could take a bus/taxi back to Siliguri if Pelling is your last stop.

Darjeeling

  • In my opinion, Darjeeling is lovely for a two-day detour on your way back to Bagdogra/Siliguri and not any longer (unless you’re a Bengali couple on a honeymoon)
  • Once a part of Sikkim, Darjeeling was ceded to the East India Company after a series of wars, with Sikkim briefly receiving a grant from EIC for “gifting” Darjeeling to the latter
  • Post-independence, Darjeeling was merged with the state of West Bengal.

Itinerary

Day 1 :

  • Take a cab from Gangtok to Darjeeling (shared-cabs cost INR 300 per seat)
  • Reach Darjeeling by noon and check in to your Hostel. I stayed at Hideout.
  • Spend the evening visiting either a monastery (or the Batasia Loop), Nehru Road and Mall Road.
  • Grab dinner at Glenary whilst listening to live music.

Day 2:

  • Wake up early to catch the sunrise and a glimpse of Kanchenjunga at Tiger Hill. Since Tiger Hill is 10km from Darjeeling and requires a permit, book your taxi in advance.
  • Alternatively, if you don’t want to get up at 4am or shell out INR1500 on the cab to Tiger Hill, walk to the Kanchenjunga View Point down Mall Road
  • Next, queue up outside Keventers for breakfast with a view in a century-old cafe
  • Get a cab at Gandhi Road and visit a tea garden (Happy Valley is the closest) and the Ropeway. I was lucky to meet 6 other backpackers at my hostel and we ended up pooling the cab at INR 200 per person, with INR 1400 being on the expensive side, but you could bargain.
  • Get lunch, buy some tea at Golden Tips, pack your bags and hop on a shared-cab back to Siliguri. It took us about 4hrs to reach Siliguri, with an hour to spare before my train.
  • If you’ve still got time on your hands, then check out the Peace Pagoda and the Darjeeling Himalayan Railway (Toy Train). At INR 1500, I found the latter to be too expensive and skipped it.


Tips and hacks

  • Download offline maps, especially when you’re exploring Northern Sikkim.
  • Food and booze are the cheapest in Gangtok. Stash up before heading to other regions.
  • Keep your Aadhar/Passport handy since you need permits to travel to North & East Sikkim.
  • In rural areas and some cafes, you may get to try Rhododendron Wine, made from Rhododendron arboreum a.k.a Gurans. Its production is a little hush-hush since the flower is considered holy and is also the National Flower of Nepal.
  • If you don’t want to invest in a new jacket, boots or a pair of gloves, you can always rent them at nominal rates from your hotel or little stores around tourist sites.
  • Check the weather of a region before heading there. Low visibility and precipitation can quite literally dampen your experience.
  • Keep your itinerary flexible to accommodate rest and impromptu plans.
  • Shops and restaurants close by 8pm in Sikkim and Darjeeling, so plan accordingly.

Carry…

  • a couple of extra pairs of socks (woollen, if possible)
  • a pair of slippers to wear indoors
  • a reusable water bottle
  • an umbrella
  • a power bank
  • a couple of tablets of Diamox, which helps with altitude sickness
  • extra clothes and wet bags since you may not get a chance to wash/dry your clothes
  • a few passport size photographs

Shared-cab hacks

  • Intercity rides can be exhausting. If you can afford it, pay for an additional seat.
  • Call shotgun on the drives beyond Lachen and Lachung. The views are breathtaking.
  • Return cabs tend to be cheaper (West Bengal cabs heading back from Sikkim, and vice versa)

Cost

  • My median daily expenditure (back when I went to Sikkim in early March 2021) was INR 1350.
  • This includes stay (bunk bed), food, wine and transit (shared cabs)
  • In my defence, I splurged on food, wine and extra seats in shared cabs, but if you’re on a budget, you could easily get by on INR 1 - 1.2k per day.
  • For a 9-day trip, I ended up shelling out nearly INR 15k, including 2AC trains to & from Kolkata
  • Note: Summer (March to May) and Autumn (October to December) are peak seasons, and therefore more expensive to travel around.

Souvenirs and things you should buy

Buddhist souvenirs :

  • Colourful Prayer Flags (great for tying on bikes or behind car windshields)
  • Miniature Prayer/Mani Wheels
  • Lucky Charms, Pendants and Key Chains
  • Cham Dance masks and robes
  • Singing Bowls
  • Common symbols: Om mani padme hum, Ashtamangala, Zodiac signs

Handicrafts & Handlooms

  • Tibetan Yak Wool shawls, scarves and carpets
  • Sikkimese Ceramic cups
  • Thangka Paintings

Edibles

  • Darjeeling Tea (usually brewed and not boiled)
  • Wine (Arucha Peach & Rhododendron)
  • Dalle Khursani (Chilli) Paste and Pickle

Header Icon made by Freepik from www.flaticon.com is licensed by CC 3.0 BY

Planet DebianJonathan Dowland: 2020 in short fiction

Cover for *Episodes*

Following on from 2020 in Fiction: In 2020 I read a couple of collections of short fiction from some of my favourite authors.

I started the year with Christopher Priest's Episodes. The stories within are collected from throughout his long career, and vary in style and tone. Priest wrote new little prologues and epilogues for each of the stories, explaining the context in which they were written. I really enjoyed this additional view into their construction.

Cover for *Adam Robots*

By contrast, Adam Roberts' Adam Robots presents the stories on their own terms. Each of the stories is written in a different mode: one as golden-age SF, another as a kind of Cyberpunk, for example, although they all blend or confound sub-genres to some degree. I'm not clever enough to have decoded all their secrets on a first read, and I would have appreciated some "Cliff's Notes" on any deeper meaning or intent.

Cover for *Exhalation*

Ted Chiang's Exhalation was up to the fantastic standard of his earlier collection and had some extremely thoughtful explorations of philosophical ideas. All the stories are strong, but the one that stuck in my mind the longest was Omphalos.

With my daughter I finished three of Terry Pratchett's short story collections aimed at children: Dragon at Crumbling Castle; The Witch's Vacuum Cleaner and The Time-Travelling Caveman. If you are a Pratchett fan and you've overlooked these because they're aimed at children, take another look. The quality varies, but there are some true gems in these. Several stories take place in common settings, either the town of Blackbury, in Gritshire (and the adjacent Even Moor), or the Welsh border-town of Llandanffwnfafegettupagogo. The sad thing was knowing that once I'd finished them (and the fourth, Father Christmas's Fake Beard) that was it: there will be no more.

Cover for Interzone, issue 277

8/31 of the "books" I read in 2020 were issues of Interzone. Counting them as "books" for my annual reading goal has encouraged me to read full issues, whereas before I would likely have only read a couple of stories from each issue. Reading full issues has rekindled the enjoyment I got out of it when I first discovered the magazine at the turn of the century. I am starting to recognise authors who have written stories in other issues, as well as common themes from the current era weaving their way into the work (Trump, Brexit, etc.). No doubt the pandemic will leave its mark on 2021's stories.

Planet DebianJunichi Uekawa: Wrote a timezone checker page.

Wrote a timezone checker page: timezone. It shows the current time as a blue line. I haven't made anything configurable yet, but will think about it later.

,

Planet DebianCharles Plessy: Debian Bullseye: more open

Debian Bullseye will provide the command /usr/bin/open for your greatest comfort at the command line. On a system with a graphical desktop environment, the command should have a similar result as when opening a document from a mouse-and-click file browser.

Technically, /usr/bin/open is a symbolic link managed by update-alternatives to point towards xdg-open if available and otherwise run-mailcap.

Kevin RuddBBC World: HRH Prince Philip

INTERVIEW VIDEO
TV INTERVIEW
BBC WORLD NEWS
9 APRIL 2021

Topic: His Royal Highness Prince Philip, the Duke of Edinburgh

The post BBC World: HRH Prince Philip appeared first on Kevin Rudd.

Kevin RuddBBC Newsnight: HRH Prince Philip

INTERVIEW VIDEO
TV INTERVIEW
BBC ‘NEWSNIGHT’
9 APRIL 2021

Topic: His Royal Highness Prince Philip, the Duke of Edinburgh

The post BBC Newsnight: HRH Prince Philip appeared first on Kevin Rudd.

,

Kevin RuddStatement: HRH The Duke of Edinburgh

Thérèse and I are deeply saddened by the news of the death of His Royal Highness Prince Philip.

We would like to extend our deepest condolences to his lifelong partner Her Majesty The Queen, and other members of the Royal Family.

Prince Philip lived to a venerable age. Both Thérèse and I had the opportunity to meet and converse with both His Royal Highness and Her Majesty on a number of occasions. It was plain from those conversations that Prince Philip had a deep and abiding affection for Australia.

It matters not whether Australians are republicans or monarchists, Prince Philip’s passing is a very sad day for the Royal Family who, like all families, will be grieving deeply the loss of a loving husband, father, grandfather, and great-grandfather.

Our thoughts should all be with Her Majesty The Queen at this time.

Image: ABC / Her Majesty Queen Elizabeth II and the Duke of Edinburgh on the Royal train at Bathurst, NSW, while on tour, February 1954.

The post Statement: HRH The Duke of Edinburgh appeared first on Kevin Rudd.

Cryptogram Friday Squid Blogging: Squid-Shaped Bike Rack

There’s a new squid-shaped bike rack in Ballard, WA.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Blobs of Squid Eggs Found Near Norway

Divers find three-foot “blobs” — egg sacs of the squid Illex coindetii — off the coast of Norway.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Jurassic Squid and Prey

A 180-million-year-old Vampire squid ancestor was fossilized along with its prey.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin RuddSubmission: Parliamentary Petitions

In February 2021, I made a written submission to the Australian House of Representatives Inquiry into aspects of its petitioning system.

The Inquiry was launched following technological failures by the Department of Parliamentary Services which resulted in many thousands of Australians being unable to sign the Petition for a Royal Commission to ensure the strength and diversity of Australian news media.

The Inquiry is also examining a malicious cyberattack on that petition by a right-wing activist, inspired by a segment he saw on Murdoch’s Sky News.

Click here to read my submission.

The post Submission: Parliamentary Petitions appeared first on Kevin Rudd.

Worse Than FailureError'd: Punfree Friday

Today's Error'd submissions are not so much WTF as simply "TF?" Please try to explain the thought process in the comments, if you can.

Plaid-hat hacker Mark writes "Just came across this for a Microsoft Security portal. Still trying to figure it out." Me, I just want to know what happens when you click "Audio".

 

Reader Wesley faintly damns the sender "Hey, at least they are being honest!" But is this real, or is it a phishing scam? And if it's real phishing, can it really be honest?

 

Surveyed David misses last week's trivial "None of the above". So do I, David.

 

Diligently searching, keyboard sleuth Paul T suspects his None key might be somewhere near his Any key, but he can't find that one either.

 

Finally, an EU resident who wishes to remain anonymous has warned us "Vodafone doesn't allow IT jokes to kids... And they might be right". Where did we go wrong, Vodafone nannies? Was it the C++?

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Worse Than FailureCodeSOD: A True Leader's Enhancement

Chuck had some perfectly acceptable C# code running in production. There was nothing terrible about it. It may not be the absolute "best" way to build this logic in terms of being easy to change and maintain in the future, but nothing about it is WTF-y.

if (report.spName == "thisReport" || report.spName == "thatReport")
{
    LoadUI1();
}
else if (report.spName == "thirdReport" || report.spName == "thirdReportButMoreSpecific")
{
    LoadUI2();
}
else
{
    LoadUI3();
}

At worst, we could argue that using string-ly typed logic for deciding the UI state is suboptimal, but this code is hardly "burn it down" bad.

Fortunately, Chuck's team leader didn't like this code. So that team leader "fixed" it.

if ("thisReport, thatReport".Contains(report.spName))
{
    LoadUI1();
}
else if ("thirdReport, thirdReportButMoreSpecific".Contains(spName))
{
    LoadUI2();
}
else
{
    LoadUI3();
}

So we keep the string-ly typed logic, but instead of straight equality comparisons, we change it into a Contains check. A Contains check on a string which contains all the possible report names, as a comma-separated list. It is less readable, it performs significantly worse, and if spName is an invalid value, we might get some fun, unexpected results.

Perhaps the team lead was going for an ["array", "of", "allowed", "names"] and missed?
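For what it's worth, that array-of-allowed-names idea is a one-line change in most languages. Here is a minimal sketch of an exact-match lookup, rendered in C++ purely for illustration; the report names mirror the original, but DispatchUI and the LoadUI stubs are my own placeholders:

#include <iostream>
#include <string>
#include <unordered_set>

// Stubs standing in for the real UI loaders.
void LoadUI1() { std::cout << "UI1\n"; }
void LoadUI2() { std::cout << "UI2\n"; }
void LoadUI3() { std::cout << "UI3\n"; }

// Exact-match lookup against the allowed names, so a partial name or an
// empty string can never accidentally match the way a substring check can.
void DispatchUI(const std::string& spName) {
    static const std::unordered_set<std::string> ui1Reports{"thisReport", "thatReport"};
    static const std::unordered_set<std::string> ui2Reports{"thirdReport", "thirdReportButMoreSpecific"};

    if (ui1Reports.count(spName)) LoadUI1();
    else if (ui2Reports.count(spName)) LoadUI2();
    else LoadUI3();
}

int main() {
    DispatchUI("thatReport");   // UI1
    DispatchUI("Report");       // UI3 here, but UI1 under the Contains version
}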

The end result though is that this change definitely made the code worse. The team lead, though, doesn't get their code reviewed by their peers. They're the leader, they have no peers, clearly.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Worse Than FailureCodeSOD: We All Expire

Code, like anything else, ages with time. Each minor change we make to a piece of already-in-use software speeds up that process. And while a piece of software can be running for decades unchanged, its utility will still decline over time, as its user interface becomes more distant from common practices, as the requirements drift from their intent, and people forget what the original purpose of certain features even was.

Code ages, but some code is born with an expiration date.

For example, at Jose's company, each year is assigned a letter label. The reasons are obscure, and rooted in somebody's project planning process, but the year 2000 was "A". The year 2001 was "B", and so on. 2025 would be "Z", and then 2026 would roll back over to "A".

At least, that's what the requirement was. What was implemented was a bit different.

if DateTime.Today.year = 2010 then
    year = "K"
else if DateTime.Today.year = 2011 then
    year = "L"
else if DateTime.Today.year = 2012 then
    year = "M"
else if DateTime.Today.year = 2013 then
    year = "N"
else if DateTime.Today.year = 2014 then
    year = "O"
else if DateTime.Today.year = 2015 then
    year = "P"
else if DateTime.Today.year = 2016 then
    year = "Q"
else if DateTime.Today.year = 2017 then
    year = "R"
else if DateTime.Today.year = 2018 then
    year = "S"
else if DateTime.Today.year = 2019 then
    year = "T"
else if DateTime.Today.year = 2020 then
    year = "U"
else if DateTime.Today.year = 2021 then
    year = "V"
else if DateTime.Today.year = 2022 then
    year = "W"
else if DateTime.Today.year = 2023 then
    year = "X"
else if DateTime.Today.year = 2024 then
    year = "Y"
else
    year = "Z"
end if

For want of a mod, 2026 was lost. But hey, this code was clearly written in 2010, which means it will work just fine for a decade and a half. We should all be so lucky.
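For completeness, the wraparound the requirement asks for is a single line of modular arithmetic. A minimal sketch, in C++ rather than whatever dialect the original is written in, purely as illustration:

#include <iostream>

// Map a calendar year onto the A..Z label, wrapping every 26 years:
// 2000 -> 'A', 2010 -> 'K', 2025 -> 'Z', 2026 -> 'A' again.
char YearLabel(int year) {
    return static_cast<char>('A' + (year - 2000) % 26);
}

int main() {
    std::cout << YearLabel(2010) << ' ' << YearLabel(2025) << ' ' << YearLabel(2026) << '\n';  // K Z A
}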

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram Google’s Project Zero Finds a Nation-State Zero-Day Operation

Google’s Project Zero discovered, and caused to be patched, eleven zero-day exploits against Chrome, Safari, Microsoft Windows, and iOS. These seem to have been used by “Western government operatives actively conducting a counterterrorism operation”:

The exploits, which went back to early 2020 and used never-before-seen techniques, were “watering hole” attacks that used infected websites to deliver malware to visitors. They caught the attention of cybersecurity experts thanks to their scale, sophistication, and speed.

[…]

It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.

,

Krebs on SecurityAre You One of the 533M People Who Got Facebooked?

Ne’er-do-wells leaked personal data — including phone numbers — for some 533 million Facebook users this week. Facebook says the data was collected before 2020 when it changed things to prevent such information from being scraped from profiles. To my mind, this just reinforces the need to remove mobile phone numbers from all of your online accounts wherever feasible. Meanwhile, if you’re a Facebook product user and want to learn if your data was leaked, there are easy ways to find out.

The HaveIBeenPwned project, which collects and analyzes hundreds of database dumps containing information about billions of leaked accounts, has incorporated the data into its service. Facebook users can enter the mobile number (in international format) associated with their account and see if those digits were exposed in the new data dump (HIBP doesn’t show you any data, just gives you a yes/no on whether your data shows up).

The phone number associated with my late Facebook account (which I deleted in Jan. 2020) was not in HaveIBeenPwned, but then again Facebook claims to have more than 2.7 billion active monthly users.

It appears much of this database has been kicking around the cybercrime underground in one form or another since last summer at least. According to a Jan. 14, 2021 Twitter post from Under the Breach’s Alon Gal, the 533 million Facebook accounts database was first put up for sale back in June 2020, offering Facebook profile data from 100 countries, including name, mobile number, gender, occupation, city, country, and marital status.

Under The Breach also said back in January that someone had created a Telegram bot allowing users to query the database for a low fee, and enabling people to find the phone numbers linked to a large number of Facebook accounts.

A cybercrime forum ad from June 2020 selling a database of 533 Million Facebook users. Image: @UnderTheBreach

Many people may not consider their mobile phone number to be private information, but there is a world of misery that bad guys, stalkers and creeps can visit on your life just by knowing your mobile number. Sure, they could call you and harass you that way, but more likely they will see how many of your other accounts — at major email providers and social networking sites like Facebook, Twitter and Instagram — rely on that number for password resets.

From there, the target is primed for a SIM-swapping attack, where thieves trick or bribe employees at mobile phone stores into transferring ownership of the target’s phone number to a mobile device controlled by the attackers. From there, the bad guys can reset the password of any account to which that mobile number is tied, and of course intercept any one-time tokens sent to that number for the purposes of multi-factor authentication.

Or the attackers take advantage of some other privacy and security wrinkle in the way SMS text messages are handled. Last month, a security researcher showed how easy it was to abuse services aimed at helping celebrities manage their social media profiles to intercept SMS messages for any mobile user. That weakness has supposedly been patched for all the major wireless carriers now, but it really makes you question the ongoing sanity of relying on the Internet equivalent of postcards (SMS) to securely handle quite sensitive information.

My advice has long been to remove phone numbers from your online accounts wherever you can, and avoid selecting SMS or phone calls for second factor or one-time codes. Phone numbers were never designed to be identity documents, but that’s effectively what they’ve become. It’s time we stopped letting everyone treat them that way.

Any online accounts that you value should be secured with a unique and strong password, as well as the most robust form of multi-factor authentication available. Usually, this is a mobile app like Authy or Google Authenticator that generates a one-time code. Some sites like Twitter and Facebook now support even more robust options — such as physical security keys.

Removing your phone number may be even more important for any email accounts you may have. Sign up with any service online, and it will almost certainly require you to supply an email address. In nearly all cases, the person who is in control of that address can reset the password of any associated services or accounts– merely by requesting a password reset email.

Unfortunately, many email providers still let users reset their account passwords by having a link sent via text to the phone number on file for the account. So remove the phone number as a backup for your email account, and ensure a more robust second factor is selected for all available account recovery options.

Here’s the thing: Most online services require users to supply a mobile phone number when setting up the account, but do not require the number to remain associated with the account after it is established. I advise readers to remove their phone numbers from accounts wherever possible, and to take advantage of a mobile app to generate any one-time codes for multifactor authentication.

Why did KrebsOnSecurity delete its Facebook account early last year? Sure, it might have had something to do with the incessant stream of breaches, leaks and privacy betrayals by Facebook over the years. But what really bothered me was the number of people who felt comfortable sharing extraordinarily sensitive information with me on things like Facebook Messenger, all the while expecting that I could vouch for the privacy and security of that message just by virtue of my presence on the platform.

In case readers want to get in touch for any reason, my email here is krebsonsecurity at gmail dot com, or krebsonsecurity at protonmail.com. I also respond at Krebswickr on the encrypted messaging platform Wickr.

Worse Than FailureCodeSOD: He Sed What?

Today's code is only part of the WTF. The code is bad, it's incorrect, but the mistake is simple and easy to make.

Lowell was recently digging into a broken feature in a legacy C application. The specific error was a failure when invoking a sed command from inside the application.

// use the following to remove embedded newlines: sed ':a;N;$!ba;s/\n,/,/g'
snprintf(command, sizeof(command), "sed -i ':a;N;$!ba;s/\n,/,/g' %s/%s.txt", path, file);
system(command);

While regular expressions have a reputation for being cryptic, this one is at least easy to read, or at least easier to read than the pile of sed flags that precede it. s/\n,/,/g finds every newline character followed by a comma and replaces it with just a comma. At least, that was the intent, but there's one problem with that: we're not calling sed from inside the shell.

We're calling it from C, and C is going to interpret the \n as a newline itself. The actual command which gets sent to the shell is:

sed -i ':a;N;$!ba;s/
,/,/g' /var/tmp/backup.txt

This completely broke one of the features of this legacy application. Specifically, as you might guess from the shell command above, the backup functionality. The application had the ability to backup its data in a way that would let users revert to prior application states or migrate to other hosts. The commit which introduced the sed call broke that feature.

In 2018. For nearly three years, all of the customers running this application have been running it without backups.
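The repair, for what it's worth, is one extra character: double the backslash so the C compiler emits a literal backslash followed by n, and sed gets to interpret it as its own newline escape. A minimal sketch, with placeholder values for path and file:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *path = "/var/tmp";   // placeholder values for illustration
    const char *file = "backup";
    char command[256];

    // "\\n" in the C source becomes the two characters '\' and 'n' in the
    // string, which sed then reads as its newline escape.
    snprintf(command, sizeof(command),
             "sed -i ':a;N;$!ba;s/\\n,/,/g' %s/%s.txt", path, file);
    return system(command);
}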

Lowell sums it up:

The real WTF may be the first part of my reply: "Looks like backup was broken by a commit in December 2018. The 2014 version should work."

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityRansom Gangs Emailing Victim Customers for Leverage

Some of the top ransomware gangs are deploying a new pressure tactic to push more victim organizations into paying an extortion demand: Emailing the victim’s customers and partners directly, warning that their data will be leaked to the dark web unless they can convince the victim firm to pay up.

This letter is from the Clop ransomware gang, putting pressure on a recent victim named on Clop’s dark web shaming site.

“Good day! If you received this letter, you are a customer, buyer, partner or employee of [victim],” the missive reads. “The company has been hacked, data has been stolen and will soon be released as the company refuses to protect its peoples’ data.”

“We inform you that information about you will be published on the darknet [link to dark web victim shaming page] if the company does not contact us,” the message concludes. “Call or write to this store and ask to protect your privacy!!!!”

The message above was sent to a customer of RaceTrac Petroleum, an Atlanta company that operates more than 650 retail gasoline convenience stores in 12 southeastern states. The person who shared that screenshot above isn’t a distributor or partner of RaceTrac, but they said they are a RaceTrac rewards member, so the company definitely has their email address and other information.

Several gigabytes of the company’s files — including employee tax and financial records — have been posted to the victim shaming site for the Clop ransomware gang.

In response to questions from KrebsOnSecurity, RaceTrac said it was recently impacted by a security incident affecting one of its third-party service providers, Accellion Inc.

For the past few months, attackers have been exploiting a zero-day vulnerability in Accellion File Transfer Appliance (FTA) software, a flaw that has been seized upon by Clop to break into dozens of other major companies like oil giant Shell and security firm Qualys.

“By exploiting a previously undetected software vulnerability, unauthorized parties were able to access a subset of RaceTrac data stored in the Accellion File Transfer Service, including email addresses and first names of some of our RaceTrac Rewards Loyalty users,” the company wrote. “This incident was limited to the aforementioned Accellion services and did not impact RaceTrac’s corporate network. The systems used for processing guest credit, debit and RaceTrac Rewards transactions were not impacted.”

The same extortion pressure email has been going out to people associated with the University of California, which was one of several large U.S. universities that got hit with Clop ransomware recently. Most of those university ransomware incidents appeared to be tied to attacks on the same Accellion vulnerability, and the company has acknowledged roughly a third of its customers on that appliance got compromised as a result.

Clop is one of several ransom gangs that will demand two ransoms: One for a digital key needed to unlock computers and data from file encryption, and a second to avoid having stolen data published or sold online. That means even victims who opt not to pay to get their files and servers back still have to decide whether to pay the second ransom to protect the privacy of their customers.

As I noted in Why Paying to Delete Stolen Data is Bonkers, leaving aside the notion that victims might have any real expectation the attackers will actually destroy the stolen data, new research suggests a fair number of victims who do pay up may see some or all of the stolen data published anyway.

The email in the screenshot above differs slightly from those covered last week by Bleeping Computer, which was the first to spot the new victim notification wrinkle. Those emails say that the recipient is being contacted as they are a customer of the store, and their personal data, including phone numbers, email addresses, and credit card information, will soon be published if the store does not pay a ransom, writes Lawrence Abrams.

“Perhaps you bought something there and left your personal data. Such as phone, email, address, credit card information and social security number,” the Clop gang states in the email.

Fabian Wosar, chief technology officer at computer security firm Emsisoft, said the direct appeals to victim customers is a natural extension of other advertising efforts by the ransomware gangs, which recently included using hacked Facebook accounts to post victim shaming advertisements.

Wosar said Clop isn’t the only ransomware gang emailing victim customers.

“Clop likes to do it and I think REvil started as well,” Wosar said.

Earlier this month, Bleeping Computer reported that the REvil ransomware operation was planning on launching crippling distributed denial of service (DDoS) attacks against victims, or making VOIP calls to victims’ customers to apply further pressure.

“Sadly, regardless of whether a ransom is paid, consumers whose data has been stolen are still at risk as there is no way of knowing if ransomware gangs delete the data as they promise,” Abrams wrote.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 01)

This week on my podcast, part one of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

Worse Than FailureCodeSOD: Switching Your Template

Many years ago, Kari got a job at one of those small companies that lives in the shadow of a university. It was founded by graduates of that university, mostly recruited from that university, and the CEO was a fixture at alumni events.

Kari was a rare hire not from that university, but she knew the school had a reputation for having an excellent software engineering program. She was prepared to be a little behind her fellow employees, skills-wise, but looked forward to catching up.

Kari was unprepared for the kind of code quality these developers produced.

First, let's take a look at how they, as a company standard, leveraged C++ templates. C++ templates are similar to (though more complicated than) the generics you find in other languages. Defining a method like void myfunction<T>(T param) creates a function which can be applied to any type, so myfunction(5) and myfunction("a string") and myfunction(someClassVariable) are all valid. The beauty, of course, is that you can write a template method once, but use it in many ways.
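As a quick illustration of that point (this snippet is mine, not from Kari's codebase), one definition, written out with the full template syntax, serves every call site:

#include <iostream>
#include <string>

// One definition; the compiler instantiates it for each type it is called with.
template <typename T>
void myfunction(T param) {
    std::cout << param << '\n';
}

int main() {
    myfunction(5);                          // instantiated as myfunction<int>
    myfunction("a string");                 // instantiated for const char*
    myfunction(std::string("or a string")); // instantiated for std::string
}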

Kari provided some generic examples of how her employer leveraged this feature, to give us a sense of what the codebase was like:

enum SomeType {
    SOMETYPE_TYPE1
    // ... more types here
};

template<SomeType t> void Function1();

template<> void Function1<SOMETYPE_TYPE1>()
{
    // Implementation of Function1 for TYPE1 as a template specialization
}

template<> void Function1<SOMETYPE_TYPE2>()
{
    // Implementation of Function1 for TYPE2 as a template specialization
}

// ... more specializations here

void CallFunction1(SomeType type)
{
    switch(type) {
    case SOMETYPE_TYPE1:
        Function1<SOMETYPE_TYPE1>();
        break;
    case SOMETYPE_TYPE2:
        Function1<SOMETYPE_TYPE2>();
        break;
    // ... I think you get the picture
    default:
        assert(false);
        break;
    }
}

This technique allows them to define multiple versions of a method called Function1, and then decide which version needs to be invoked by using a type flag and a switch statement. This simultaneously misses the point of templates and overloading. And honestly, while I'm not sure exactly what business problem they were trying to solve, this is a textbook case for using polymorphism to dispatch calls to concrete implementations via inheritance.
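To make that concrete, here is a minimal sketch of the textbook alternative; the names are invented for illustration and are not from Kari's codebase. The virtual call replaces both the type flag and the switch:

#include <iostream>

// Each variant is a class that overrides a virtual function; the call site
// needs no knowledge of which concrete type it is holding.
struct Base {
    virtual ~Base() = default;
    virtual void Function1() const = 0;
};

struct Type1 : Base {
    void Function1() const override { std::cout << "Type1 implementation\n"; }
};

struct Type2 : Base {
    void Function1() const override { std::cout << "Type2 implementation\n"; }
};

void CallFunction1(const Base& obj) {
    obj.Function1();   // virtual dispatch picks the right implementation
}

int main() {
    Type1 a;
    Type2 b;
    CallFunction1(a);
    CallFunction1(b);
}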

Which raises the question, if this is how they do templates, how do they do inheritance? Oh, you know how they do inheritance.

enum ClassType {
    CLASSTYPE_CHILD1
    // ... more enum values here
};

class Parent {
public:
    Parent(ClassType type) : type_(type) { }

    ClassType get_type() const { return type_; }

    bool IsXYZSupported() const {
        switch(type_) {
        case CHILD1:
            return true;
        // ... more cases here
        default:
            assert(false);
            return false;
        }
    }

private:
    ClassType type_;
};

class Child1 : public Parent {
public:
    Child1() : Parent(CLASSTYPE_CHILD1) { }
};

// Somewhere else in the application, buried deep within hundreds of lines of obscurity...
bool IsABCSupported(Parent *obj) {
    switch(obj->get_type()) {
    case CLASSTYPE_CHILD1:
        return true;
    // ... more cases here
    default:
        assert(false);
        return false;
    }
}

Yes, once again, we have a type flag and a switch statement. Inheritance would do this for us. They've reinvented the wheel, but this time, it's a triangle. An isosceles triangle, at that.

All that's bad, but the thing which elevates this code to transcendentally bad are the locations of the definitions of IsXYZSupported and IsABCSupported. IsXYZSupported is unnecessary, something which shouldn't exist, but at least it's in the definition of the class. Well, it's in the definition of the parent class, which means the parent has to know each of its children, which opens up a whole can of worms regarding fragility. But there are also stray methods like IsABCSupported, defined someplace else, to do something else, and this means that doing any tampering to the class hierarchy means tracking down possibly hundreds of random methods scattered in the code base.

And, if you're wondering how long these switch statements could get? Kari says: "The record I saw was a switch with approximately 100 cases."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityUbiquiti All But Confirms Breach Response Iniquity

For four days this past week, Internet-of-Things giant Ubiquiti did not respond to requests for comment on a whistleblower’s allegations the company had massively downplayed a “catastrophic” two-month breach ending in January to save its stock price, and that Ubiquiti’s insinuation that a third-party was to blame was a fabrication. I was happy to add their eventual public response to the top of Tuesday’s story on the whistleblower’s claims, but their statement deserves a post of its own because it actually confirms and reinforces those claims.

Ubiquiti’s IoT gear includes things like WiFi routers, security cameras, and network video recorders. Their products have long been popular with security nerds and DIY types because they make it easy for users to build their own internal IoT networks without spending many thousands of dollars.

But some of that shine started to come off recently for Ubiquiti’s more security-conscious customers after the company began pushing everyone to use a unified authentication and access solution that makes it difficult to administer these devices without first authenticating to Ubiquiti’s cloud infrastructure.

All of a sudden, local-only networks were being connected to Ubiquiti’s cloud, giving rise to countless discussion threads on Ubiquiti’s user forums from customers upset over the potential for introducing new security risks.

And on Jan. 11, Ubiquiti gave weight to that angst: It told customers to reset their passwords and enable multifactor authentication, saying a breach involving a third-party cloud provider might have exposed user account data. Ubiquiti told customers they were “not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed.”

Ubiquiti’s notice on Jan. 12, 2021.

On Tuesday, KrebsOnSecurity reported that a source who participated in the response to the breach said Ubiquiti should have immediately invalidated all credentials because all of the company’s key administrator passwords had been compromised as well. The whistleblower also said Ubiquiti never kept any logs of who was accessing its databases.

The whistleblower, “Adam,” spoke on condition of anonymity for fear of reprisals from Ubiquiti. Adam said the place where those key administrator credentials were compromised — Ubiquiti’s presence on Amazon’s Web Services (AWS) cloud services — was in fact the “third party” blamed for the hack.

From Tuesday’s piece:

“In reality, Adam said, the attackers had gained administrative access to Ubiquiti’s servers at Amazon’s cloud service, which secures the underlying server hardware and software but requires the cloud tenant (client) to secure access to any data stored there.

“They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,” Adam said.

Adam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.

Such access could have allowed the intruders to remotely authenticate to countless Ubiquiti cloud-based devices around the world. According to its website, Ubiquiti has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.

Ubiquiti finally responded on Mar. 31, in a post signed “Team UI” on the company’s community forum online.

“Nothing has changed with respect to our analysis of customer data and the security of our products since our notification on January 11. In response to this incident, we leveraged external incident response experts to conduct a thorough investigation to ensure the attacker was locked out of our systems.”

“These experts identified no evidence that customer information was accessed, or even targeted. The attacker, who unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials, never claimed to have accessed any customer information. This, along with other evidence, is why we believe that customer data was not the target of, or otherwise accessed in connection with, the incident.”

Ubiquiti’s response this week on its user forum.

Ubiquiti also hinted it had an idea of who was behind the attack, saying it has “well-developed evidence that the perpetrator is an individual with intricate knowledge of our cloud infrastructure. As we are cooperating with law enforcement in an ongoing investigation, we cannot comment further.”

Ubiquiti’s statement largely confirmed the reporting here by not disputing any of the facts raised in the piece. And while it may seem that Ubiquiti is quibbling over whether data was in fact stolen, Adam said Ubiquiti can say there is no evidence that customer information was accessed because Ubiquiti failed to keep logs of who was accessing its databases.

“Ubiquiti had negligent logging (no access logging on databases) so it was unable to prove or disprove what they accessed, but the attacker targeted the credentials to the databases, and created Linux instances with networking connectivity to said databases,” Adam wrote in a whistleblower letter to European privacy regulators last month. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”

It appears investors noticed the incongruity as well. Ubiquiti’s share price hardly blinked at the January breach disclosure. On the contrary, from Jan. 13 to Tuesday’s story its stock had soared from $243 to $370. By the end of trading day Mar. 30, UI had slipped to $349. By close of trading on Thursday (markets were closed Friday) the stock had fallen to $289.

Sam VargheseTime for ABC to bite the bullet and bring Tony Jones back to Q+A

Finally, someone from the mainstream Australian media has called it: Q+A, once one of the more popular shows on the ABC, is really not worth watching any more.

Of course, being Australian, the manner in which this sentiment was expressed was oblique, more so given that it came from a critic who writes for the Nine newspapers, Craig Mathieson.

Hamish Macdonald: his immature approach to Q+A has led to the program going downhill. Courtesy YouTube

A second critical review has appeared on April 5, this time in The Australian.

Newspapers from this company are generally classed as being from the left — they once were, when they were owned by Fairfax Media, but centrist or right of centre would be more accurate these days — and given that the ABC is also considered to be part of the left, criticism was generally absent.

Mathieson did not come right out and call the program atrocious – which is what it is right now. The way the headline on Mathieson’s article put it was that Q+A was once an agenda setter, but was no longer essential viewing. He was right about the former, but to call it essential viewing at any stage of its existence is probably an exaggeration.

He cited viewing figures to bolster his views: “Audience figures for Q+A have plummeted this year. Last week [25 March], it failed to crack the top 20 free-to-air programs on the Thursday night it aired, indicating a capital city audience of just 237,000. In March 2020, the number was above 500,000, and likewise in March 2016,” he wrote.

“This was meant to be the year that Q+A ascended to new prominence. Since its debut in 2008 it had aired about 9.30pm on Mondays, the feisty debate chaser to Four Corners and Media Watch.

“In 2021, it moved to 8.30pm on Thursday, an hour earlier presumably to give it access to a larger audience and its own anchoring role on the ABC’s schedule. But even with Back Roads, one of the national broadcaster’s quiet achievers, as an 8pm lead-in, the viewing figures are starting to resemble a death spiral.”

Veteran ABC journalist Tony Jones was the Q+A host until just two seasons ago. Then Hamish Macdonald, from the tabloid TV channel 10, was given the job. And things have generally gone downhill from that point onwards.

Courtesy The Australian

Jones brought a mature outlook to the show and was generally able to keep the discussion interesting. He always had things in check, and the panellists were kept in line when they tried to ramble on. Quite often, the show was prevented from going down a difficult path by a simple “I’ll take that as a comment” from Jones.

Macdonald often loses control of things. He seems to be trying too hard to differentiate himself from Jones, bringing too many angles to a single episode and generally trying to engineer gotcha situations. It turns out to be quite juvenile. One word describes him: callow. It is one that can be applied to many of the ABC’s recent recruits.

Had the previous host been anyone but Jones, the difference would not have been so stark. But then even when others like Virginia Trioli or Annabel Crabb stood in for Jones, the show was watchable as nobody tried out gimmicks. Again, Trioli and Crabb are very good at their jobs. The same cannot be said for Macdonald.

Now that Jones has had to shelve his plan of accompanying his partner, Sarah Ferguson, to China, the ABC might like to think of bringing him back to Q+A. The plan was for Ferguson to be the ABC’s regular correspondent in China, but that was dropped after the previous correspondent, Bill Birtles, fled the country last September, along with Michael Smith, a correspondent for the Australian Financial Review. Jones had planned to write a book while in China.

The ABC needs to bite the bullet and rescue what was once one of its flagship shows. As Mathieson did, it is worthwhile pointing out that two other popular shows, 7.30 and Four Corners, have held their own during the same period that Q+A has gone downhill, even improving on previous audience numbers.

If change does come it would be at the end of this season. Another season of Macdonald will mean that Q+A may have to be pensioned off like Lateline which was killed largely because the main host, Emma Alberici, had made it into a terrible program. Under Jones, and others like Maxine McKew, Trioli and even the comparatively younger Stephen Cannane, Lateline was always compulsory watching for any Australian who followed news somewhat seriously.

,

Kevin RuddSaturday Paper: A Foreign Policy for the Climate

By Kevin Rudd and Thom Woodroofe

Britain’s Conservative government last month declared the fight against climate change its top diplomatic priority after a comprehensive review of its foreign, defence and security policy. In the United States, Joe Biden has mainstreamed climate change across his own national security apparatus. And the European Union has begun taking steps towards putting climate change at the heart of its trading relationships through the implementation of a carbon border adjustment tax.

These examples show tackling climate change is no longer purely the purview of environmental policy. It has crossed the geopolitical Rubicon and countries are now mainstreaming climate action as part of their foreign policy. It is time for Australia to do the same.

The fight against climate change must become a new pillar of our foreign policy, on a par with our commitment to the US alliance, the Indo–Pacific region and the multilateral order. And the progressive side of politics has an opportunity to lead the way.

Acting on climate change not only makes economic sense for Australia, it makes diplomatic sense as well. Our refusal to act meaningfully on climate change will increasingly be a thorn in the side of our relations with all of the world’s advanced economies.

The entirety of the G7 is now committed to net zero emissions by 2050. Australia will confront an uncomfortable reality as a special guest of the group’s next gathering in June when we find ourselves isolated among the developed countries in the room.

For the first time, China also now has a time line to decarbonise its economy, and more than 70 per cent of Australia’s trade is now with jurisdictions committed to making the same transition.

Closer to home, our refusal to act on climate change will continue to hamstring any effort to genuinely step up our engagement in the Pacific Islands, which are on the front line of this crisis.

Australia is a creative middle power. Both sides of politics have admittedly demonstrated our country’s ability to achieve landmark diplomatic outcomes. Whether it be brokering peace in Cambodia, the formation of APEC or the G20, securing a seat on the United Nations Security Council, or – most recently – Mathias Cormann’s appointment as the new head of the OECD.

When we put our diplomatic minds and might to something, we often succeed. This is no different when it comes to galvanising the world’s efforts to tackle climate change. Despite the common refrain, Australia’s environmental leadership during the past 30 years has often made a difference, including on larger emitters such as China.

Under the Hawke and Keating governments, for example, Australia secured an international ban on mining in Antarctica. We also became one of the first countries to propose a quantifiable emissions reduction target years before this became the norm through the UN Framework Convention and the Kyoto Protocol that followed.

The Howard government may have cynically advocated Kyoto’s inclusion of emissions from agriculture to allow them to be seen to be doing more while actually doing less, and then refused to ratify the agreement. But ironically this position on agriculture has proved pivotal for ensuring the land sector – which represents 20 per cent of global emissions – has not been excluded from global efforts.

Copenhagen is often remembered for what it didn’t deliver. But Australia, as a “friend of the chair”, was essential for what was able to be salvaged and ensuring that from its ashes the Paris Agreement was able to rise. The concept born there – of countries’ climate targets being set individually from the bottom up, rather than from the top down based on our relative contribution or economic capacity – was an Australian idea.

So, too, in part was the concept of a global 2-degree temperature limit being a guardrail for our global efforts, which Australia tabled with the Maldives. The fact that six years later, in Paris, Greg Hunt played a role in then ensuring the calls of island nations to bring this guardrail down to 1.5 degrees were not ignored also deserves credit.

And while Australia was then shut out of progressive groupings, including the High Ambition Coalition, there were others we originally helped form, such as the Cartagena Dialogue for Progressive Action, only to be forced, embarrassingly, to step away during the Abbott era.

Yet today, with Biden’s new climate envoy, John Kerry, openly identifying Australia – alongside Saudi Arabia and Brazil under Jair Bolsonaro – as responsible for the collapse of the most-recent round of UN climate talks in Madrid, it is clear just how much of an international pariah we have become.

Had Labor prevailed at the 2019 election, the world would see us in a very different light today. Instead of refusing to honour the letter and spirit of the Paris Agreement by not increasing the ambition of our existing 2030 target, we would have been the first G20 country to do so ahead of this year’s deadline. We would have re-entered the Green Climate Fund – which until 2018 was led by an Australian, Howard Bamsey – rather than now being the only major Western donor that refuses to take part. And we would have been welcomed as a hero at a critical UN summit in 2019, rather than choosing to instead parade around America’s coal country with Donald Trump.

While the Labor Party may have failed to sell its climate message at the last election, now is a time for courage. This year is not the same as 2019, especially now there is no longer a climate denier in the White House. As the new US president likes to say, when he thinks of climate change, he thinks of jobs. Painting a similar vision of a just transition in Australia, especially for our coal industry, will be key.

But it must come with detail, too. The harsh reality is that the global transition to net zero has tolled the death knell for this industry, and unless we are prepared to embrace the serious conversation about how we diversify our domestic economy and export markets, we will be left naked in the wind.

While Labor might be committed to net zero by 2050, the party cannot afford to simply give the government a free pass on the urgency of short-term action.  Being a party of opposition means being an alternative government.

If the Labor Party was to form government this year, the world’s expectation would be that it brings forward an enhanced 2030 target. Not merely to set its sights on what to do in 2035, which isn’t even up for discussion under the Paris framework for another five years. More than anything else, this will require Labor to better explain the economic benefits of taking stronger action now, rather than being forced to make deeper cuts later.

A report similar to the 2007 Garnaut Climate Change Review – commissioned by the then Labor opposition but focused on the economic benefits of action in the short term – could be the circuit-breaker needed internally within the party on this very question.

Thankfully, when Biden announces his own new 2030 target at an Earth Day Summit this month, it will set a new global benchmark for Australia’s own target. In 2014, Abbott deliberately set Australia’s current target on the basis of what he said the Americans were doing, albeit five years earlier. So by the conservatives’ own rhetoric, if the Americans can do more, so should we.

More importantly, we also clearly have the capacity to do more if we are on track to “meet and beat” our current target, as the government says. Not least because in the seven years since that target was developed, we have also begun to bring online the largest renewable energy project in our history in “Snowy 2.0”.

A more ambitious climate policy will require Labor to continue to take the fight up to the government. But just as the government was dragged kicking and screaming away from its insistence on using dodgy accounting tricks to bolster its efforts, it will likely be dragged kicking and screaming to adopting a net zero by 2050 goal. It must similarly be forced to increase its short-term ambition, too. And as we have seen with countries including Japan and Canada in recent months, the possibility of an about-face on this question is not impossible with Biden now in the White House.

The good news is that if given the chance, Australia’s diplomats are primed to once again make a difference in the global fight against climate change. The merger of both AusAID and parts of the Department of Climate Change into the Department of Foreign Affairs and Trade means they are among the best in the world when it comes to understanding the real world and foreign policy dimensions of climate change. The fact our foreign service also doubles as our trade agency will also be crucial for the new era we have entered.

It is time for Australia to adopt a foreign policy for the climate. We have made a difference before and can do so again. All that is missing is the right political leadership.

Published in The Saturday Paper

The post Saturday Paper: A Foreign Policy for the Climate appeared first on Kevin Rudd.

,

Worse Than FailureError'd: Everybody Has A Testing Environment

“Some people,” said the sage, “are lucky enough to also have a completely separate environment for production.” Today's nuggets of web joy are pudding-proof.

Hypothetically hypochondriac STUDENTS[$RANDOM] gasped “I tried to look up information about Covid tests at the institution. Instead I found…this.”

 

An anonymous gastronome delivered this tasty morsel with a pun too cheesy to permit in this staid column. “It must have got lost in the mail.”

 

Hapless hirer Fred G. wonders “Why aren't we getting any resumes?” ruminating “it worked well enough on HR's machine!”

 

“Wrong.” snapped Scott B. testily.

 

Armchair analyst David accidentally unmasks this editor's archetype, exclaiming “I didn't even know this was one of the types!”

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityNew KrebsOnSecurity Mobile-Friendly Site

Dear Readers, this has been long overdue, but at last I give you a more responsive, mobile-friendly version of KrebsOnSecurity. We tried to keep the visual changes to a minimum and focus on a simple theme that presents information in a straightforward, easy-to-read format. Please bear with us over the next few days as we hunt down the gremlins in the gears.

We were shooting for responsive (fast) and uncluttered. Hopefully, we achieved that and this new design will render well in whatever device you use to view it. If something looks amiss, please don’t hesitate to drop a note in the comments below.

NB: KrebsOnSecurity has not changed any of its advertising practices: The handful of ads we run are still image-only creatives that are vetted by me and served in-house. If you’re blocking ads on this site, please consider adding an exception here. Thank you!

MECensoring Images

A client asked me to develop a system for “censoring” images from an automatic camera. The situation is that we have a camera taking regular photos from a fixed location which includes part of someone else’s property. So my client made a JPEG with some black rectangles in the sections that need to be covered. The first thing I needed to do was convert the JPEG to a PNG with transparency for the sections that aren’t to be covered.

To convert it I loaded the JPEG in the GIMP and went to the Layer->Transparency->Add Alpha Channel menu to enable the Alpha channel. Then I selected the “Bucket Fill tool” and used “Mode Erase” and “Fill by Composite” and then clicked on the background (the part of the JPEG that was white) to make it transparent. Then I exported it to PNG.

If anyone knows of an easy way to convert the file then please let me know. It would be nice if there was a command-line program I could run to convert a specified color (default white) to transparent. I say this because I can imagine my client going through a dozen iterations of an overlay file that doesn’t quite fit.
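One possibility (untested on this exact workflow, so treat it as a suggestion rather than a recipe): ImageMagick can knock out a colour in one step with its -transparent operator, and -fuzz catches the nearly-white pixels JPEG compression leaves behind. A small C++ sketch that just shells out to it, with placeholder file names and no error handling:

#include <cstdio>
#include <cstdlib>

int main() {
    // Turn white (and near-white, via -fuzz) pixels of the overlay transparent,
    // writing a PNG with an alpha channel. File names are placeholders.
    char command[512];
    std::snprintf(command, sizeof(command),
                  "convert %s -fuzz 5%% -transparent white %s",
                  "overlay.jpg", "overlay.png");
    return std::system(command);
}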

To censor the image I ran the “composite” command from imagemagick. The command I used was “composite -gravity center overlay.png in.jpg out.jpg“. If anyone knows a better way of doing this then please let me know.

The platform I’m using is an ARM926EJ-S rev 5 (v5l) which takes 8 minutes of CPU time to convert a single JPEG at full DSLR resolution (4 megapixels). It also required enabling swap on an SD card to avoid running out of RAM and running “systemctl disable tmp.mount” to stop using tmpfs for /tmp as the system only has 256M of RAM.

Worse Than FailureAnnouncing the launch of TFTs

Totally Fungible Tokens

NFTs, or non-fungible tokens, are an exciting new application of Blockchain technology that allows us to burn down a rainforest every time we want to trade a string representing an artist's signature on a creative work.

Many folks are eagerly turning JPGs, text files, and even Tweets into NFTs, but since not all of us have a convenient rainforest to destroy, The Daily WTF is happy to offer an alternative: the Totally Fungible Token.

What Is a Totally Fungible Token?

A TFT is a unique identifier which we can generate for any file or group of files. It combines the actual data in the file(s) with a Universally Unique Identifier, and then condenses that data using a SHA-256 hashing algorithm. This guarantees that you have a unique token which represents that you have created a unique token for that data.
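Mechanically, that is just hash(file bytes + UUID). The page itself does this in your browser; what follows is a back-of-the-envelope C++ rendering of the same recipe, assuming OpenSSL for the SHA-256 step and a random hex string standing in for a proper UUID library:

#include <openssl/sha.h>

#include <fstream>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <random>
#include <sstream>
#include <string>
#include <vector>

// Random 128-bit hex string; a stand-in for a real UUID generator.
std::string pseudo_uuid() {
    std::random_device rd;
    std::ostringstream out;
    for (int i = 0; i < 4; ++i)
        out << std::hex << std::setw(8) << std::setfill('0') << rd();
    return out.str();
}

// Read the file, append the UUID, and condense the result with SHA-256.
std::string make_tft(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());
    const std::string uuid = pseudo_uuid();
    data.insert(data.end(), uuid.begin(), uuid.end());

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(data.data(), data.size(), digest);

    std::ostringstream token;
    for (unsigned char byte : digest)
        token << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(byte);
    return token.str();
}

int main(int argc, char** argv) {   // build with -lcrypto
    if (argc < 2) {
        std::cerr << "usage: tft <file>\n";
        return 1;
    }
    std::cout << "Token: " << make_tft(argv[1]) << '\n';
}

Because a fresh UUID is mixed in each time, hashing the same file twice yields two different, equally worthless tokens, which is rather the point.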

How is this better than an NFT?

There are a few key advantages that TFTs offer. First, they're computationally very cheap to make, allowing even a relatively underpowered computer to participate actively in the token ecosystem.

In addition, this breaks all dependencies on the blockchain, meaning that you don't need to use or spend cryptocurrency to create, purchase, or trade these tokens.

Most important: much like NFTs, a TFT is absolutely worthless, but we're not promoting these as some sort of arcane investment instrument, so there won't be any sort of bubble. The value of your TFT will remain essentially zero, for the entire life of your TFT. There is no volatility.

In the interests of efficiency, this also performs terribly on large files. How big is too big? That depends on your browser! Enjoy finding out what's too big to encode!

Generate a TFT

Use the button below to browse for a file on your computer, and this will generate a unique token showing that you generated a unique token. Feel free to share, sell, or trade these tokens with your friends! No information about your files is in the token, so it's guaranteed to be completely meaningless! Give it away, sell it, just write it down on a napkin, your TFT is yours to use as you please!

Token: