Planet Russell


Cryptogram: Survey of Supply Chain Attacks

The Atlantic Council has released a report that looks at the history of computer supply chain attacks.

Key trends from their summary:

  1. Deep Impact from State Actors: There were at least 27 different state attacks against the software supply chain, including from Russia, China, North Korea, and Iran, as well as India, Egypt, the United States, and Vietnam. States have targeted software supply chains to great effect, as the majority of cases surveyed here did, or could have, resulted in remote code execution. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

  2. Abusing Trust in Code Signing: These attacks undermine public key cryptography and certificates used to ensure the integrity of code. Overcoming these protections is a critical step to enabling everything from simple alterations of open-source code to complex nation-state espionage campaigns. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

  3. Hijacking Software Updates: 27% of these attacks targeted software updates to insert malicious code against sometimes millions of targets. These attacks are generally carried out by extremely capable actors and poison updates from legitimate vendors. Examples: Flame, CCleaner 1 & 2, NotPetya, and Adobe pwdum7v71.

  4. Poisoning Open-Source Code: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to common examples. Attacks targeted some of the most widely used open source tools on the internet. Examples: Cdorked/Darkleech, RubyGems Backdoor, Colourama, and JavaScript 2018 Backdoor.

  5. Targeting App Stores: 22% of these attacks targeted app stores like the Google Play Store, Apple's App Store, and other third-party app hubs to spread malware to mobile devices. Some attacks even targeted developer tools, meaning every app later built using that tool was potentially compromised. Examples: ExpensiveWall, BankBot, Gooligan, Sandworm's Android attack, and XcodeGhost.

Recommendations are included in the report. The entire dataset is open and freely available here.

Worse Than Failure: CodeSOD: Underscoring the Comma

Andrea writes to confess some sins, though I'm not sure who the real sinner is. To understand the sins, we have to talk a little bit about C/C++ macros.

Andrea was working on some software to control a dot-matrix display from an embedded device. Send an array of bytes to it, and the correct bits on the display light up. Now, if you're building something like this, you want an easy way to "remember" the proper sequences. So you might want to do something like:

uint8_t glyph0[] = {'0', 0x0E, 0x11, 0x0E, 0};
uint8_t glyph1[] = {'1', 0x09, 0x1F, 0x01, 0};

And so on. And heck, you might want to go so far as to have a lookup array, so you might have a const uint8_t *const glyphs[] = {glyph0, glyph1…}. Now, you could just hardcode those definitions, but wouldn't it be cool to use macros to automate that a bit, as your definitions might change?

Andrea went with a style known as X macros, which let you specify one pattern of data which can be re-used by redefining X. So, for example, I could do something like:

#define MY_ITEMS \
  X(a, 5) \
  X(b, 6) \
  X(c, 7)
  
#define X(name, value) int name = value;
MY_ITEMS
#undef X

This would generate:

int a = 5;
int b = 6;
int c = 7;

But I could re-use this, later:

#define X(name, data) name, 
int items[] = { MY_ITEMS nullptr};
#undef X

This would generate, in theory, something like: int items[] = {a,b,c,nullptr};

We are recycling the MY_ITEMS macro, and we're changing its behavior by altering the X macro that it invokes. This can, in practice, result in much more readable and maintainable code, especially code where you need to have parallel lists of items. It's also one of those things that the first time you see it, it's… surprising.
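Assembled into a single compilable unit, the technique looks like this. (This is a minimal sketch, not Andrea's code; it uses -1 as the end marker instead of nullptr, since nullptr won't convert to int.)

```cpp
// One list of items, reused below by redefining X.
#define MY_ITEMS \
  X(a, 5)        \
  X(b, 6)        \
  X(c, 7)

// First expansion: declare each item as a variable.
#define X(name, value) int name = value;
MY_ITEMS
#undef X

// Second expansion: collect the same names into an array.
#define X(name, value) name,
int items[] = { MY_ITEMS -1 };  // expands to { a, b, c, -1 }
#undef X
```

Adding a new `X(d, 8)` line to MY_ITEMS automatically updates both the declarations and the array, which is the whole appeal of the pattern.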

Now, this is all great, and it means that Andrea could potentially have a nice little macro system for defining arrays of bytes and a lookup array pointing to those arrays. There's just one problem.

Specifically, if you tried to write a macro like this:

#define GLYPH_DEFS \
  X(glyph0, {'0', 0x0E, 0x11, 0x0E, 0})

It wouldn't work. It doesn't matter what you actually define X to do; the preprocessor isn't aware of C/C++ syntax. So it doesn't say "oh, that second comma is inside of an array initializer, I'll ignore it", it says, "Oh, they're trying to pass more than two parameters to the macro X."

So, you need some way to define an array initializer that doesn't use commas. If macros got you into this situation, macros can get you right back out. Here is Andrea's solution:

#define _ ,  // Sorry.
#define GLYPH_DEFS \
	X(glyph0, { '0' _ 0x0E _ 0x11 _ 0x0E _ 0 } ) \
	X(glyph1, { '1' _ 0x09 _ 0x1F _ 0x01 _ 0 }) \
	X(glyph2, { '2' _ 0x13 _ 0x15 _ 0x09 _ 0 }) \
	X(glyph3, { '3' _ 0x15 _ 0x15 _ 0x0A _ 0 }) \
	X(glyph4, { '4' _ 0x18 _ 0x04 _ 0x1F _ 0 }) \
	X(glyph5, { '5' _ 0x1D _ 0x15 _ 0x12 _ 0 }) \
	X(glyph6, { '6' _ 0x0E _ 0x15 _ 0x03 _ 0 }) \
	X(glyph7, { '7' _ 0x10 _ 0x13 _ 0x0C _ 0 }) \
	X(glyph8, { '8' _ 0x0A _ 0x15 _ 0x0A _ 0 }) \
	X(glyph9, { '9' _ 0x08 _ 0x14 _ 0x0F _ 0 }) \
	X(glyphA, { 'A' _ 0x0F _ 0x14 _ 0x0F _ 0 }) \
	X(glyphB, { 'B' _ 0x1F _ 0x15 _ 0x0A _ 0 }) \
	X(glyphC, { 'C' _ 0x0E _ 0x11 _ 0x11 _ 0 }) \
	X(glyphD, { 'D' _ 0x1F _ 0x11 _ 0x0E _ 0 }) \
	X(glyphE, { 'E' _ 0x1F _ 0x15 _ 0x15 _ 0 }) \
	X(glyphF, { 'F' _ 0x1F _ 0x14 _ 0x14 _ 0 }) \

#define X(name, data) const uint8_t name [] = data ;
GLYPH_DEFS
#undef X

#define X(name, data) name _
const uint8_t *const glyphs[] = { GLYPH_DEFS nullptr };
#undef X
#undef _

So, when processing the X macro, we pass it a pile of _s, which aren't commas, so it doesn't complain. Then we expand the _ macro and voilà: we have syntactically valid array initializers. If Andrea ever changes the list of glyphs, adding or removing any, the macro will automatically sync the declaration of the individual arrays and their pointers over in the glyphs array.
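For what it's worth, C99 and C++11 offer a standard way around the same problem that doesn't require redefining the comma: variadic macros. This is a sketch of how the same glyph table could be written with __VA_ARGS__ (not Andrea's code, and abbreviated to two glyphs):

```cpp
#include <cstdint>

// Each X call takes the glyph name plus a variable number of byte
// values; __VA_ARGS__ absorbs the embedded commas, so no braces are
// needed in the macro argument at all.
#define GLYPH_DEFS \
    X(glyph0, '0', 0x0E, 0x11, 0x0E, 0) \
    X(glyph1, '1', 0x09, 0x1F, 0x01, 0)

// First expansion: define each byte array.
#define X(name, ...) const uint8_t name[] = { __VA_ARGS__ };
GLYPH_DEFS
#undef X

// Second expansion: build the lookup table of pointers.
#define X(name, ...) name,
const uint8_t *const glyphs[] = { GLYPH_DEFS nullptr };
#undef X
```

The trade-off is that the `{ … }` visual grouping in the macro invocation is lost, which is arguably less self-documenting than Andrea's version, comma sins aside.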

Andrea adds:

The scope of this definition is limited to this data structure, in which the X macros are used, and it is #undef'd just after that. However, with all the stories of #define abuse on this site, I feel I still need to atone.
The testing sketch works perfectly.

Honestly, all sins are forgiven. There isn't a true WTF here, beyond "the C preprocessor is TRWTF". It's a weird, clever hack, and it's interesting to see this technique in use.

That said, as you might note: this was a testing sketch, just to prove a concept. Instead of getting clever with macros, your disposable testing code should probably just get to proving your concept as quickly as possible. You can worry about code maintainability later. So, if there are any sins by Andrea, it's the sin of overengineering a disposable test program.


Planet Debian: Russ Allbery: Review: The City in the Middle of the Night

Review: The City in the Middle of the Night, by Charlie Jane Anders

Publisher: Tor
Copyright: February 2019
Printing: February 2020
ISBN: 1-4668-7113-X
Format: Kindle
Pages: 366

January is a tidally-locked planet divided between permanent night and permanent day, an unfortunate destination for a colony starship. Now, humans cling to a precarious existence along the terminator, huddling in two wildly different cities and a handful of smaller settlements, connected by a road through the treacherous cold.

The novel opens with Sophie, a shy university student from the dark side of the city of Xiosphant. She has an overwhelming crush on Bianca, her high-class, self-confident roommate and one of the few people in her life to have ever treated her with compassion and attention. That crush, and her almost non-existent self-esteem, lead her to take the blame for Bianca's petty theft, resulting in what should have been a death sentence. Sophie survives only because she makes first contact with a native intelligent species of January, one that the humans have been hunting for food and sport.

Sadly, I think this is enough Anders for me. I've now bounced off two of her novels, both for structural reasons that I think go deeper than execution and indicate a fundamental mismatch between what Anders wants to do as an author and what I'm looking for as a reader.

I'll talk more about what this book is doing in a moment, but I have to start with Bianca and Sophie. It's difficult for me to express how much I loathed this relationship and how little I wanted to read about it. It took me about five pages to peg Bianca as a malignant narcissist and Sophie's all-consuming crush as dangerous codependency. It took the entire book for Sophie to figure out how awful Bianca is to her, during which Bianca goes through the entire abusive partner playbook of gaslighting, trivializing, contingent affection, jealous rage, and controlling behavior. And meanwhile Sophie goes back to her again, and again, and again, and again. If I hadn't been reading this book on a Kindle, I think it would have physically hit a wall after their conversation in the junkyard.

This is truly a matter of personal taste and preference. This is not an unrealistic relationship; this dynamic happens in life all too often. I'm sure there is someone for whom reading about Sophie's spectacularly poor choices is affirming or cathartic. I've not personally experienced this sort of relationship, which doubtless matters.

But having empathy for someone who is making awful and self-destructive life decisions and trusting someone they should not be trusting and who is awful to them in every way is difficult work. Sophie is the victim of Bianca's abuse, but she does so many stupid and ill-conceived things in support of this twisted relationship that I found it very difficult to not get angry at her. Meanwhile, Anders writes Sophie as so clearly fragile and uncertain and devoid of a support network that getting angry at her is like kicking a puppy. The result for me was spending nearly an entire book in a deeply unpleasant state of emotional dissonance. I may be willing to go through that for a close friend, but in a work of fiction it's draining and awful and entirely not fun.

The other viewpoint character had the opposite problem for me. Mouth starts the book as a traveling smuggler, the sole survivor of a group of religious travelers called the Citizens. She's practical, tough, and guarded. Beneath that, I think the intent was to show her as struggling to come to terms with the loss of her family and faith community. Her first goal in the book is to recover a recording of Citizen sacred scripture to preserve it and to reconnect with her past.

This sounds interesting on the surface, but none of it gelled. Mouth never felt to me like someone from a faith community. She doesn't act on Citizen beliefs to any meaningful extent, she rarely talks about them, and when she does, her attitude is nostalgia without spirituality. When Mouth isn't pursuing goals that turn out to be meaningless, she aimlessly meanders through the story. Sophie at least has agency and makes some important and meaningful decisions. Mouth is just there, even when Anders does shattering things to her understanding of her past.

Between Sophie and Bianca putting my shoulders up around my ears within the first few pages of the first chapter and failing to muster any enthusiasm for Mouth, I said the eight deadly words ("I don't care what happens to these people") about a hundred pages in and the book never recovered.

There are parts of the world-building I did enjoy. The alien species that Sophie bonds with is not stunningly original, but it's a good (and detailed) take on one of the alternate cognitive and social models that science fiction has dreamed up. I was comparing the strangeness and dislocation unfavorably to China Miéville's Embassytown while I was reading it, but in retrospect Anders's treatment is more decolonialized. Xiosphant's turn to Circadianism as their manifestation of order is a nicely understated touch, a believable political overreaction to the lack of a day/night cycle. That touch is significantly enhanced by Sophie's time working in a salon whose business model is to help Xiosphant residents temporarily forget about time. And what glimmers we got of politics on the colony ship and their echoing influence on social and political structures were intriguing.

Even with the world-building, though, I want the author to be interested in and willing to expand the same bits of world-building that I'm engaged with. Anders didn't seem to be. The reader gets two contrasting cities along a road, one authoritarian and one libertine, which makes concrete a metaphor for single-axis political classification. But then Anders does almost nothing with that setup; it's just the backdrop of petty warlord politics, and none of the political activism of Bianca's student group seems to have relevance or theoretical depth. It's a similar shallowness as the religion of Mouth's Citizens: We get a few fragments of culture and religion, but without narrative exploration and without engagement from any of the characters. The way the crew of the Mothership was assembled seems to have led to a factional and racial caste system based on city of origin and technical expertise, but I couldn't tell you more than that because few of the characters seem to care. And so on.

In short, the world-building that I wanted to add up to a coherent universe that was meaningful to the characters and to the plot seemed to be little more than window-dressing. Anders tosses in neat ideas, but they don't add up to anything. They're just background scenery for Bianca and Sophie's drama.

The one thing that The City in the Middle of the Night does well is Sophie's nervous but excited embrace of the unknown. It was delightful to see the places where a typical protagonist would have to overcome a horror reaction or talk themselves through tradeoffs and where Sophie's reaction was instead "yes, of course, let's try." It provided an emotional strength to an extended first-contact exploration scene that made it liberating and heart-warming without losing the alienness. During that part of the book (in which, not coincidentally, Bianca does not appear), I was able to let my guard down and like Sophie for the first time, and I suspect that was intentional on Anders's part.

But, overall, I think the conflict between Anders's story-telling approach and my preferences as a reader are mostly irreconcilable. She likes to write about people who make bad decisions and compound their own problems. In one of the chapters of her non-fiction book about writing that's being serialized on Tor.com she says "when we watch someone do something unforgivable, we're primed to root for them as they search desperately for an impossible forgiveness." This is absolutely not true for me; when I watch a character do something unforgivable, I want to see repudiation from the protagonists and ideally some clear consequences. When that doesn't happen, I want to stop reading about them and find something more enjoyable to do with my time. I certainly don't want to watch a viewpoint character insist that the person who is doing unforgivable things is the center of her life.

If your preferences on character and story arc are closer to Anders's than mine, you may like this book. Certainly lots of people did; it was nominated for multiple awards and won the Locus Award for Best Science Fiction Novel. But despite the things it did well, I had a truly miserable time reading it and am not anxious to repeat the experience.

Rating: 4 out of 10


Krebs on Security: Business ID Theft Soars Amid COVID Closures

Identity thieves who specialize in running up unauthorized lines of credit in the names of small businesses are having a field day with all of the closures and economic uncertainty wrought by the COVID-19 pandemic, KrebsOnSecurity has learned. This story is about the victims of a particularly aggressive business ID theft ring that’s spent years targeting small businesses across the country and is now pivoting toward using that access for pandemic assistance loans and unemployment benefits.

Most consumers are likely aware of the threat from identity theft, which occurs when crooks apply for new lines of credit in your name. But the same crime can be far more costly and damaging when thieves target small businesses. Unfortunately, far too many entrepreneurs are simply unaware of the threat or don’t know how to be watchful for it.

What’s more, with so many small enterprises going out of business or sitting dormant during the COVID-19 pandemic, organized fraud rings have an unusually rich pool of targets to choose from.

Short Hills, N.J.-based Dun & Bradstreet [NYSE:DNB] is a data analytics company that acts as a kind of de facto credit bureau for companies: When a business owner wants to open a new line of credit, creditors typically check with Dun & Bradstreet to gauge the business’s history and trustworthiness.

In 2019, Dun & Bradstreet saw more than a 100 percent increase in business identity theft. For 2020, the company estimates an overall 258 percent spike in the crime. Dun & Bradstreet said that so far this year it has received over 4,700 tips and leads where business identity theft or malfeasance are suspected.

“The ferocity of cyber criminals to take advantage of COVID-19 uncertainties by preying on small businesses is disturbing,” said Andrew LaMarca, who leads the global high-risk and fraud team at Dun & Bradstreet.

For the past several months, Milwaukee, Wisc.-based cyber intelligence firm Hold Security has been monitoring communications among a business ID theft gang apparently operating in Georgia and Florida but targeting businesses throughout the United States. That surveillance has helped to paint a detailed picture of how business ID thieves operate, as well as the tricks they use to gain credit in a company’s name.

Hold Security founder Alex Holden said the group appears to target both active and dormant or inactive small businesses. The gang typically will start by looking up the business ownership records at the Secretary of State website that corresponds to the company’s state of incorporation. From there, they identify the officers and owners of the company and acquire their Social Security and Tax ID numbers from the dark web and other sources online.

To prove ownership over the hijacked firms, they hire low-wage image editors online to help fabricate and/or modify a number of official documents tied to the business — including tax records and utility bills.

The scammers frequently then file phony documents with the Secretary of State’s office in the name(s) of the business owners, but include a mailing address that they control. They also create email addresses and domain names that mimic the names of the owners and the company to make future credit applications appear more legitimate, and submit the listings to business search websites, such as yellowpages.com.

For both dormant and existing businesses, the fraudsters attempt to create or modify the target company’s accounts at Dun & Bradstreet. In some cases, the scammers create dashboard accounts in the businesses’ names at Dun & Bradstreet’s credit builder portal; in others, the bad guys have actually hacked existing business accounts at DNB, requesting a new DUNS number for the business (a DUNS number is a unique, nine-digit identifier for businesses).

Finally, after the bogus profiles are approved by Dun & Bradstreet, the gang waits a few weeks or months and then starts applying for new lines of credit in the target business’s name at stores like Home Depot, Office Depot and Staples. Then they go on a buying spree with the cards issued by those stores.

Usually, the first indication a victim has that they’ve been targeted is when the debt collection companies start calling.

“They are using mostly small companies that are still active businesses but currently not operating because of COVID-19,” Holden said. “With this gang, we see four or five people working together. The team leader manages the work between people. One person seems to be in charge of getting stolen cards from the dark web to pay for the reactivation of businesses through the secretary of state sites. Another team member works on revising the business documents and registering them on various sites. The others are busy looking for specific businesses they want to revive.”

Holden said the gang appears to find success in getting new lines of credit with about 20 percent of the businesses they target.

“One’s personal credit is nothing compared to the ability of corporations to borrow money,” he said. “That’s bad because while the credit system may be flawed for individuals, it’s an even worse situation on average when we’re talking about businesses.”

Holden said over the past few months his firm has seen communications between the gang’s members indicating they have temporarily shifted more of their energy and resources to defrauding states and the federal government by filing unemployment insurance claims and applying for pandemic assistance loans with the Small Business Administration.

“It makes sense, because they’ve already got control over all these dormant businesses,” he said. “So they’re now busy trying to get unemployment payments and SBA loans in the names of these companies and their employees.”

PHANTOM OFFICES

Hold Security shared data intercepted from the gang that listed the personal and financial details of dozens of companies targeted for ID theft, including Dun & Bradstreet logins the crooks had created for the hijacked businesses. Dun & Bradstreet declined to comment on the matter, other than to say it was working with federal and state authorities to alert affected businesses and state regulators.

Among those targeted was Environmental Safety Consultants Inc. (ESC), a 37-year-old environmental engineering firm based in Bradenton, Fla. ESC owner Scott Russell estimates his company was initially targeted nearly two years ago, and that he first became aware something wasn’t right when he recently began getting calls from Home Depot’s corporate offices inquiring about the company’s delinquent account.

But Russell said he didn’t quite grasp the enormity of the situation until last year, when he was contacted by the manager of a virtual office space across town who told him about a suspiciously large number of deliveries at an office space that was rented out in his name.

Russell had never rented that particular office. Rather, the thieves had done it for him, using his name and the name of his business. The office manager said the deliveries came virtually non-stop, even though there was apparently no business operating within the rented premises. And in each case, shortly after the shipments arrived someone would show up and cart them away.

“She said we don’t think it’s you,” he recalled. “Turns out, they had paid for a lease in my name with someone else’s credit card. She shared with me a copy of the lease, which included a fraudulent ID and even a vehicle insurance card for a Land Cruiser we got rid of like 15 years ago. The application listed our home address with me and some woman who was not my wife’s name.”

The crates and boxes being delivered to his erstwhile office space were mostly computers and other high-priced items ordered from 10 different Office Depot credit cards that also were not in his name.

“The total value of the electronic equipment that was bought and delivered there was something like $75,000,” Russell said, noting that it took countless hours and phone calls with Office Depot to make it clear they would no longer accept shipments addressed to him or his company. “It was quite spine-tingling to see someone penned a lease in the name of my business and personal identity.”

Even though the virtual office manager had the presence of mind to take photocopies of the driver’s licenses presented by the people arriving to pick up the fraudulent shipments, the local police seemed largely uninterested in pursuing the case, Russell said.

“I went to the local county sheriff’s office and showed them all the documentation I had and the guy just yawned and said he’d get right on it,” he recalled. “The place where the office space was rented was in another county, and the detective I spoke to there about it was interested, but he could never get anyone from my county to follow up.”

RECYCLING VICTIMS

Russell said he believes the fraudsters initially took out new lines of credit in his company’s name and then used those to defraud others in a similar way. One of those victims is another victim on the gang’s target list obtained by Hold Security — Mary McMahan, owner of Fan Experiences, an event management company in Winter Park, Fla.

McMahan also had stolen goods from Office Depot and other stores purchased in her company’s name and delivered to the same office space rented in Russell’s name. McMahan said she and her businesses have suffered hundreds of thousands of dollars in fraud, and spent nearly as much in legal fees fending off collections firms and restoring her company’s credit.

McMahan said she first began noticing trouble almost four years ago, when someone started taking out new credit cards in her company’s name. At the same time, her business was used to open a new lease on a virtual office space in Florida that also began receiving packages tied to other companies victimized by business ID theft.

“About four years back, they hit my credit hard for a year, getting all these new lines of credit at Home Depot, Office Depot, Office Max, you name it,” she said. “Then they came back again two years ago and hit it hard for another year. They even went to the [Florida Department of Motor Vehicles] to get a driver’s license in my name.”

McMahan said the thieves somehow hacked her DNB account, and then began adding new officers and locations for her business listing.

“They changed the email and mailing address, and even went on Yelp and Google and did the same,” she said.

McMahan said she’s since locked down her personal and business credit to the point where even she would have a tough time getting a new line of credit or mortgage if she tried.

“There’s no way they can even utilize me anymore because there’s so many marks on my credit stating that it’s been stolen,” she said. “These guys are relentless, and they recycle victims to defraud others until they figure out they can’t recycle them anymore.”

SAY…THAT’S A NICE CREDIT PROFILE YOU GOT THERE…

McMahan says she, too, has filed multiple reports about the crimes with local police, but has so far seen little evidence that anyone is interested in following up on the matter. For now, she is paying Dun & Bradstreet more than $100 a month to monitor her business credit profile.

Dun & Bradstreet does offer a free version of credit monitoring called Credit Signal that lets business owners check their business credit scores and any inquiries made in the previous 14 days up to four times a year. However, those looking for more frequent checks or additional information about specific credit inquiries beyond 14 days are steered toward DNB’s subscription-based services.

Eva Velasquez, president of the Identity Theft Resource Center, a California-based nonprofit that assists ID theft victims, said she finds that troubling.

“When we look at these institutions that are necessary for us to operate and function in society and they start to charge us a fee for a service to fix a problem they helped create through their infrastructure, that’s just unconscionable,” Velasquez said. “We need to take a hard look at the infrastructures that businesses are beholden to and make sure the risk minimization protections they’re entitled to are not fee-based — particularly if it’s a problem created by the very infrastructure of the system.”

Velasquez said it’s unfortunate that small business owners don’t have the same protections afforded to consumers. For example, only recently did the three major consumer reporting bureaus allow all U.S. residents to place a freeze on their credit files for free.

“We’ve done a good job in educating the public that anyone can be victim of identity theft, and in compelling our infrastructure to provide robust consumer protection and risk minimization processes that are more uniform,” she said. “It’s still not good by any means, but it’s definitely better for consumers than it is for businesses. We currently put all the responsibility on the small business owner, and very little on the infrastructure and processes that should be designed to protect them but aren’t doing a great job, frankly.”

Rather, the onus continues to be on the business owner to periodically check with DNB and state agencies to monitor for any signs of unauthorized changes. Worse still, too many private and public organizations still don’t do a good enough job protecting employee identification and tax ID numbers that are so often abused in business identity theft, Velasquez said.

“You can put alerts and other protections in place but the problem is you have to go on a department by department and case by case basis,” she said. “The place to begin is your secretary of state’s office or wherever you file your documents to operate your business.”

For its part, Dun & Bradstreet recently published a blog post outlining recommendations for businesses to ward off identity thieves. DNB says anyone who suspects fraudulent activity on their account should contact its support team.

Planet Debian: Matthew Garrett: Filesystem deduplication is a sidechannel

First off - nothing I'm going to talk about in this post is novel or overly surprising, I just haven't found a clear writeup of it before. I'm not criticising any design decisions or claiming this is an important issue, just raising something that people might otherwise be unaware of.

With that out of the way: Automatic deduplication of data is a feature of modern filesystems like zfs and btrfs. It takes two forms: inline, where the filesystem detects that data being written to disk is identical to data that already exists on disk and simply references the existing copy rather than writing it again, and offline, where tooling retroactively identifies duplicated data and removes the duplicate copies (zfs supports inline deduplication; btrfs currently supports only offline). In a world where disks end up with multiple copies of cloud or container images, deduplication can free up significant amounts of disk space.

What's the security implication? The problem is that deduplication doesn't recognise ownership - if two users have copies of the same file, only one copy of the file will be stored[1]. So, if user a stores a file, the amount of free space will decrease. If user b stores another copy of the same file, the amount of free space will remain the same. If user b is able to check how much free space is available, user b can determine whether the file already exists.
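The inference works like this (a toy simulation, purely illustrative; real filesystems deduplicate hashed blocks rather than whole files, but the sidechannel is the same):

```cpp
#include <set>
#include <string>

// Toy deduplicating store: identical contents are stored once, so a
// duplicate write consumes no additional space.
class DedupStore {
    std::set<std::string> blocks_;  // one entry per unique content
    std::size_t capacity_;
public:
    explicit DedupStore(std::size_t capacity) : capacity_(capacity) {}

    void write(const std::string &content) { blocks_.insert(content); }

    std::size_t free_space() const { return capacity_ - blocks_.size(); }

    // User b's sidechannel: write the content and see whether free
    // space changed. If it didn't, an identical copy already existed
    // on the system -- regardless of who owns it.
    bool probably_exists(const std::string &content) {
        std::size_t before = free_space();
        write(content);
        return free_space() == before;
    }
};
```

Note that the probe never reads user a's data; it only observes a global resource (free space) that deduplication quietly couples to everyone's file contents.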

This doesn't seem like a huge deal in most cases, but it is a violation of expected behaviour (if user b doesn't have permission to read user a's files, user b shouldn't be able to determine whether user a has a specific file). But we can come up with some convoluted cases where it becomes more relevant, such as law enforcement gaining unprivileged access to a system and then being able to demonstrate that a specific file already exists on that system. Perhaps more interestingly, it's been demonstrated that free space isn't the only sidechannel exposed by deduplication - deduplication has an impact on access timing, and can be used to infer the existence of data across virtual machine boundaries.

As I said, this is almost certainly not something that matters in most real world scenarios. But with so much discussion of CPU sidechannels over the past couple of years, it's interesting to think about what other features also end up leaking information in ways that may not be obvious.

(Edit to add: deduplication isn't enabled on zfs by default and is explicitly triggered on btrfs, so unless it's something you've enabled then this isn't something that affects you)

[1] Deduplication is usually done at the block level rather than the file level, but given zfs's support for variable sized blocks, identical files should be deduplicated even if they're smaller than the maximum record size



Planet DebianWouter Verhelst: On Statements, Facts, Hypotheses, Science, Religion, and Opinions

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and whathaveyou.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional volleyball player"; but the statement "Wouter Verhelst is a professional volleyball player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional volleyball player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future, which at this point gives it a 1 in 24,435,180 chance of being true. However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely however, it will instead become a false statement.
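The odds quoted above can be checked directly. Assuming the draw picks 5 distinct main numbers from 45 plus a powerball from 20 (the format the figure implies), the number of equally likely outcomes is:

```python
from math import comb

# 5 distinct main numbers from a pool of 45, one powerball from 20
outcomes = comb(45, 5) * 20
print(outcomes)  # 24435180 -- a 1 in 24,435,180 chance for any fixed guess
```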

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar as they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting) that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science cannot either prove or disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. They do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that no observance of a particular event happened when a scientist tried to observe something, but that this was only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there's three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a large cause, while conspiracy theorists only care about the unprovable hypotheses); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days :-/

Anyway...

mic drop

CryptogramImages in Eye Reflections

In Japan, a cyberstalker located his victim by enhancing the reflections in her eye, and using that information to establish a location.

Reminds me of the image enhancement scene in Blade Runner. That was science fiction, but now image resolution is so good that we have to worry about it.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 11)

Here’s part eleven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

LongNowDiscovery in Mexican Cave May Drastically Change the Known Timeline of Humans’ Arrival to the Americas

Human history in the Americas may be twice as long as previously believed — at least 26,500 years — according to authors of a new study at Mexico's Chiquihuite cave and other sites throughout Central Mexico.

According to the study’s lead author Ciprian Ardelean:

“This site alone can’t be considered a definitive conclusion. But with other sites in North America like Gault (Texas), Bluefish Caves (Yukon), maybe Cactus Hill (Virginia)—it’s strong enough to favor a valid hypothesis that there were humans here probably before and almost surely during the Last Glacial Maximum.”

Planet DebianMartin Michlmayr: ledger2beancount 2.4 released

I released version 2.4 of ledger2beancount, a ledger to beancount converter.

There are two notable changes in this release:

  1. I fixed two regressions introduced in the last release. Sorry about the breakage!
  2. I improved support for hledger. I believe all syntax differences in hledger are supported now.

Here are the changes in 2.4:

  • Fix regressions introduced in version 2.3
    • Handle price directives with comments
    • Don't assume implicit conversion when price is on second posting
  • Improve support for hledger
    • Fix parsing of hledger tags
    • Support commas as decimal markers
    • Support digit group marks through commodity and D directives
    • Support end aliases directive
    • Support regex aliases
    • Recognise total balance assertions
    • Recognise sub-account balance assertions
  • Add support for define directive
  • Convert all uppercase metadata tags to all lowercase
  • Improve handling of ledger lots without cost
  • Allow transactions without postings
  • Fix parsing issue in commodity declarations
  • Support commodities that contain quotation marks
  • Add --version option to show version
  • Document problem of mixing apply and include
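As a toy illustration of the decimal-marker change above: hledger allows a comma as the decimal marker (with a dot as a digit group mark), while beancount expects dot-decimal amounts, so a converter has to normalize them. This sketch is my own, not ledger2beancount's actual code:

```python
def normalize_amount(amount, decimal_mark=","):
    """Convert an amount written with a comma as the decimal marker
    (and '.' as an optional digit group mark) to dot notation."""
    if decimal_mark == ",":
        # drop digit group marks, then turn the decimal comma into a dot
        amount = amount.replace(".", "").replace(",", ".")
    return amount

print(normalize_amount("1.234,56"))  # 1234.56
print(normalize_amount("10,5"))      # 10.5
```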

Thanks to Kirill Goncharov for pointing out one of the regressions, to Taylor R Campbell for a patch, to Stefano Zacchiroli for some input, and finally to Simon Michael for input on hledger!

You can get ledger2beancount from GitHub

Planet DebianSteve Kemp: Growing food is fun.

"I grew up on a farm" is something I sometimes tell people. It isn't true, but it is a useful shorthand. What is true is that my parents both come from a farming background, my father's family up in Scotland, my mother's down in Yorkshire.

Every summer my sisters and myself would have a traditional holiday at the seaside, which is what people do in the UK (Blackpool, Scarborough, Great Yarmouth, etc). Before, or after, that we'd spend the rest of the summer living on my grandmother's farm.

I loved spending time on the farm when I was a kid, and some of my earliest memories date from that time. For example I remember hand-feeding carrots to working dogs (alsatians) that were taller than I was. I remember trying to ride on the backs of those dogs, and how that didn't end well. In fact the one and only time I can recall my grandmother shouting at me, or raising her voice at all, was when my sisters and I spent an afternoon playing in the coal-shed. We were filthy and covered in coal-dust from head to toe. Awesome!

Anyway the only reason I bring this up is because I have a little bit of a farming background, largely irrelevant in my daily life, but also a source of pleasant memories. Despite it being an animal farm (pigs, sheep, cows) there was also a lot of home-grown food, which my uncle Albert would deliver/sell to people nearby out of the back of a van. The same van would be used to ferry us to see the fireworks every November. Those evenings were very memorable too - they would almost always involve flasks of home-made vegetable soup.

Nowadays I live in Finland, and earlier in the year we received access to an allotment - a small piece of land (10m x 10m) for €50/year - upon which we can grow our own plants, etc.

My wife decided to plant flowers and make it look pretty. She did good.

I decided to plant "food". I might not have done this stuff from scratch before, but I was pretty familiar with the process from my youth, and also having the internet to hand to make the obvious searches such as "How do you know when you can harvest your garlic?"

Before I started I figured it couldn't be too hard, after all if you leave onions/potatoes in the refrigerator for long enough they start to grow! It isn't like you have to do too much to help them. In short it has been pretty easy and I'm definitely going to be doing more of it next year.

I've surprised myself by enjoying the process as much as I have. Every few days I go and rip up the weeds, and water the things we've planted. So far I've planted, and harvested, Radish, Garlic, Onions, and in a few more weeks I'll be digging up potatoes.

I have no particular point to this post, except to say that if you have a few hours spare a week, and a slab of land to hand upon which you can dig and plant I'd recommend it. Sure there were annoyances, and not a single one of the carrot-seeds I planted showed any sign of life, but the other stuff? The stuff that grew? Very tasty, om nom nom ..

(It has to be said that when we received the plot there was a jungle growing upon it. Once we tidied it all up we found raspberries, roses, and other things. The garlic I reaped was already growing so I felt like a cheat to harvest it. That said I did plant a couple of bulbs on my balcony so I could say "I grew this from scratch". Took a while, but I did indeed harvest my own garlic.)

Worse Than FailureUltrabase

After a few transfers across departments at IniTech, Lydia found herself as a senior developer on an internal web team. They built intranet applications which covered everything from home-grown HR tools to home-grown supply chain tools, to home-grown CMSes, to home-grown "we really should have purchased something but the approval process is so onerous and the budgeting is so constrained that it looks cheaper to carry an IT team despite actually being much more expensive".

A new feature request came in, and it seemed extremely easy. There was a stored procedure that was normally invoked by a scheduled job. The admin users in one of the applications wanted to be able to invoke it on demand. Now, Lydia might be "senior", but she was new to the team, so she popped over to Desmond's cube to see what he thought.

"Oh, sure, we can do that, but it'll take about a week."

"A week?" Lydia asked. "A week? To add a button that invokes a stored procedure. It doesn't even take any parameters or return any results you'd need to display."

"Well, roughly 40 hours of effort, yeah. I can't promise it'd be a calendar week."

"I guess, with testing, and approvals, I could see it taking that long," Lydia said.

"Oh, no, that's just development time," Desmond said. "You're new to the team, so it's time you learned about Ultrabase."

Wyatt was the team lead. Lydia had met him briefly during her onboarding with the team, but had mostly been interacting with the other developers on the team. Wyatt, as it turned out, was a Certified Super Genius™, and was so smart that he recognized that most of their applications were, functionally, quite the same. CRUD apps, mostly. So Wyatt had "automated" the process, with his Ultrabase solution.

First, there was a configuration database. Every table, every stored procedure, every view or query, needed to be entered into the configuration database. Now, Wyatt, Certified Super Genius™, knew that he couldn't define a simple schema which would cover all the possible cases, so he didn't. He defined a fiendishly complicated schema with opaque and inconsistent validity rules. Once you had entered the data for all of your database objects, hopefully correctly, you could then execute the Data program.

The Data program would read through the configuration database, and through the glories of string concatenation generate a C# solution containing the definitions of your data model objects. The Data program itself was very fault tolerant, so fault tolerant that if anything went wrong, it still just output C# code, just not syntactically correct C# code. If the C# code couldn't compile, you needed to go back to the configuration database and figure out what was wrong.
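The failure mode described here — a generator that emits *something* no matter what — is easy to reproduce. A hypothetical sketch (names invented, not IniTech's actual tool): because the output is built by string concatenation with no validation, a missing config field silently produces code that only fails later, at compile time:

```python
def generate_model(table):
    """Naive string-concatenation codegen in the Ultrabase style:
    it always emits output, even when the config row is incomplete."""
    props = "\n".join(
        f"    public {col.get('type')} {col.get('name')} {{ get; set; }}"
        for col in table["columns"]
    )
    return f"public class {table.get('name')} {{\n{props}\n}}"

good = generate_model({"name": "Order",
                       "columns": [{"name": "Id", "type": "int"}]})
# A column row missing its 'type' still "succeeds", emitting
# 'public None Id ...' -- invalid C# that surfaces only at build time.
bad = generate_model({"name": "Order", "columns": [{"name": "Id"}]})
```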

Eventually, once you had a theoretically working data model library, you pushed the solution to the build server. That would build and sign the library with a corporate key, and publish it to their official internal software repository. This could take days or weeks to snake its way through all the various approval steps.

Once you had the official release of the datamodel, you could fire up the Data Access Layer tool, which would then pull down the signed version in the repository, and using reflection and the config database, the Data Access Layer program would generate a DAL. Assuming everything worked, you would push that to the build server, and then wait for that to wind its way through the plumbing of approvals.

Then the Business Logic Layer. Then the "Core" layer. The "UI Adapter Layer". The "Front End" layer.

Each layer required the previous layer to be in the corporate repository before you could generate it. Each layer also needed to check the config database. It was trivial to make an error that wouldn't be discovered until you tried to generate the front end layer, and if that happened, you needed to go all the way back to the beginning.

"Wyatt is working on a 'config validation tool' which he says will avoid some of these errors," Desmond said. "So we've got that to look forward to. Anyway, that's our process. Glad to have you on the team!"

Lydia was significantly less glad to be on the team, now that Desmond had given her a clearer picture of how it actually worked.


Planet DebianRuss Allbery: Summer haul

I'm buying rather too many books at the moment and not reading enough of them (in part because I got back into Minecraft and in part because I got a bit stuck on a few difficult books). I think I've managed to get myself unstuck again, though, and have started catching up on reviews.

2020. It's kind of a lot. And I'm not even that heavily affected.

Katherine Addison — The Angel of the Crows (sff)
Marie Brennan — A Natural History of Dragons (sff)
Kacen Callender — Queen of the Conquered (sff)
Jo Clayton — Diadem from the Stars (sff)
Jo Clayton — Lamarchos (sff)
Jo Clayton — Irsud (sff)
Clifford D. Conner — The Tragedy of American Science (nonfiction)
Kate Elliott — Unconquerable Sun (sff)
Rory Fanning & Craig Hodges — Long Shot (nonfiction)
Michael Harrington — Socialism: Past & Future (nonfiction)
Nalo Hopkinson — Brown Girl in the Ring (sff)
Kameron Hurley — The Stars Are Legion (sff)
N.K. Jemisin — Emergency Skin (sff)
T. Kingfisher — A Wizard's Guide to Defensive Baking (sff)
T. Kingfisher — Nine Goblins (sff)
Michael Lewis — The Fifth Risk (nonfiction)
Paul McAuley — War of the Maps (sff)
Gretchen McCulloch — Because Internet (nonfiction)
Hayao Miyazaki — Nausicaä of the Valley of the Wind (graphic novel)
Annalee Newitz — The Future of Another Timeline (sff)
Nick Pettigrew — Anti-Social (nonfiction)
Rivers Solomon, et al. — The Deep (sff)
Jo Walton — Or What You Will (sff)
Erik Olin Wright — Stardust to Stardust (nonfiction)

Of these, I've already read and reviewed The Fifth Risk (an excellent book).

Planet DebianRuss Allbery: Review: Rise of the Warrior Cop

Review: Rise of the Warrior Cop, by Radley Balko

Publisher: PublicAffairs
Copyright: 2013
ISBN: 1-61039-212-4
Format: Kindle
Pages: 336

As the United States tries, in fits and starts, to have a meaningful discussion about long-standing police racism, brutality, overreach, corruption, and murder, I've realized that my theoretical understanding of the history of and alternative frameworks for law enforcement is woefully lacking. Starting with a book by a conservative white guy is not the most ideal of approaches, but it's what I already had on hand, and it won't be the last book I read and review on this topic. (Most of my research so far has been in podcast form. I don't review those here, but I can recommend Ezra Klein's interviews with Ta-Nehisi Coates, Paul Butler, and, most strongly, sujatha baliga.)

Rise of the Warrior Cop is from 2013 and has had several moments of fame, no doubt helped by Balko's connections to the conservative and libertarian right. One of the frustrating facts of US politics is that critiques of the justice system from the right (and from white men) get more media attention than critiques from the left. That said, it's a generally well-respected book on the factual history of the topic, and police brutality and civil rights are among the points on which I have stopped-clock agreements with US libertarians.

This book is very, very libertarian.

In my callow youth, I was an ardent libertarian, so I've read a lot of US libertarian literature. It's a genre with its own conventions that become obvious when you read enough of it, and Rise of the Warrior Cop goes through them like a checklist. Use the Roman Republic (never the Roman Empire) as the starting point for any political discussion, check. Analyze the topic in the context of pre-revolutionary America, check. Spend considerable effort on discerning the opinions of the US founders on the topic since their opinions are always relevant to the modern world, check. Locate some point in the past (preferably before 1960) where the political issue was as good as it has ever been, check. Frame all changes since then as an erosion of rights through government overreach, check. Present your solution as a return to a previous era of respect for civil rights, check. Once you start recognizing the genre conventions, their prevalence in libertarian writing is almost comical.

The framing chapters therefore leave a bit to be desired, but the meat of the book is a useful resource. Starting with the 1970s and its use as a campaigning tool by Nixon, Balko traces a useful history of the war on drugs. And starting with the 1980s, the number of cites to primary sources and the evidence of Balko's own research increases considerably. If you want to know how US police turned into military cosplayers with body armor, heavy weapons, and armored vehicles, this book provides a lot of context and history.

One of the reasons why I view libertarians as allies of convenience on this specific issue is that drug legalization and disgust with the war on drugs have been libertarian issues for decades. Ideologically honest libertarians (and Balko appears to be one) are inherently skeptical of the police, so when the police overreach in an area of libertarian interest, they notice. Balko makes a solid argument, backed up with statistics, specific programs, legislation, and court cases, that the drug war and its accompanying lies about heavily-armed drug dealers and their supposed threat to police officers was the fuel for the growth of SWAT teams, no-knock search warrants, erosion of legal protections for criminal defendants, and de facto license for the police to ignore the scope and sometimes even the existence of warrants.

This book is useful support for the argument that fears for the safety of officers underlying the militarization of police forces are imaginary. One telling point that Balko makes repeatedly and backs with statistical and anecdotal evidence is that the police generally do not use raid tactics on dangerous criminals. On the contrary, aggressive raids are more likely to be used on the least dangerous criminals because they're faster, they're fun for the police (they provide an adrenaline high and let them play with toys), and they're essentially risk-free. If the police believe someone is truly dangerous, they're more likely to use careful surveillance and to conduct a quiet arrest at an unexpected moment. The middle-of-the-night armed break-ins with battering rams, tear gas, and flash-bangs are, tellingly, used against the less dangerous suspects.

This is part of Balko's overall argument that police equipment and tactics have become untethered from any realistic threat and have become cultural. He traces an acceleration of that trend to 9/11 and the resulting obsession with terrorism, which further opened the spigot of military hardware and "special forces" training. This became a point of competition between police departments, with small town forces that had never seen a terrorist and had almost no chance of a terrorist incident demanding their own armored vehicles. I've encountered this bizarre terrorism justification personally; one of the reasons my local police department gave in a public hearing for not having a policy against shooting at moving vehicles was "but what if terrorism?" I don't believe there has ever been a local terrorist attack.

SWAT in such places didn't involve the special training or dedicated personnel of large city forces; instead, it was a part-time duty for normal police officers, and frequently they were encouraged to practice SWAT tactics by using them at random for some otherwise normal arrest or search. Balko argues that those raids were more exciting than normal police work, leading to a flood of volunteers for that duty and a tendency to use them as much as possible. That in turn normalizes disconnecting police tactics from the underlying crime or situational risk.

So far, so good. But despite the information I was able to extract from it, I have mixed feelings about Rise of the Warrior Cop as a whole. At the least, it has substantial limitations.

First, I don't trust the historical survey of policing in this book. Libertarian writing makes for bad history. The constraints of the genre require overusing only a few points of reference, treating every opinion of the US founders as holy writ, and tying forward progress to a return to a previous era, all of which interfere with good analysis. Balko also didn't do the research for the historical survey, as is clear from the footnotes. The citations are all to other people's histories, not to primary sources. He's summarizing other people's histories, and you'll almost certainly get better history by finding well-respected historians who cover the same ground. (That said, if you're not familiar with Peel's policing principles, this is a good introduction.)

Second, and this too is unfortunately predictable in a libertarian treatment, race rarely appears in this book. If Balko published the same book today, I'm sure he would say more about race, but even in 2013 its absence is strange. I was struck while reading by how many examples of excessive police force were raids on west coast pot farms; yes, I'm sure that was traumatic, but it's not the demographic I would name as the most vulnerable to or affected by police brutality. West coast pot growers are, however, mostly white.

I have no idea why Balko made that choice. Perhaps he thought his target audience would be more persuaded by his argument if he focused on white victims. Perhaps he thought it was an easier and less complicated story to tell. Perhaps, like a lot of libertarians, he doesn't believe racism has a significant impact on society because it would be a market failure. Perhaps those were the people who more readily came to mind. But to talk about police militarization, denial of civil rights, and police brutality in the United States without putting race at the center of both the history and the societal effects leaves a gaping hole in the analysis.

Given that lack of engagement, I also am dubious of Balko's policy prescriptions. His reform suggestions aren't unreasonable, but they stay firmly in the centrist and incrementalist camp and would benefit white people more than black people. Transparency, accountability, and cultural changes are all fine and good, but the cultural change Balko is focused on is less aggressive arrest tactics, more use of mediation, and better physical fitness. I would not object to those things (well, maybe the last, which seemed odd), but we need to have a discussion about police white supremacist organizations, the prevalence of spousal abuse, and the police tendency to see themselves not as public servants but as embattled warriors who are misunderstood by the naive sheep they are defending.

And, of course, you won't find in Rise of the Warrior Cop any thoughtful wrestling with whether there are alternative approaches to community safety, whether punitive rather than restorative justice is effective, or whether crime is a symptom of deeper societal problems we could address but refuse to. The most radical suggestion Balko has is to legalize drugs, which is both the predictable libertarian position and, as we have seen from recent events in the United States, far from the only problem of overcriminalization.

I understand why this book is so frequently mentioned on-line, and its author's political views may make it more palatable to some people than a more race-centered or radical perspective. But I don't think this is the best or most useful book on police violence that one could read today. I hope to find a better one in upcoming reviews.

Rating: 6 out of 10

,

Planet DebianEnrico Zini: Consent links

Teaching consent is ongoing, but it starts when children are very young. It involves both teaching children to pay attention to and respect others' consent (or lack thereof) and teaching children that they should expect their own bodies and their own space to be respected---even by their parents and other relatives. And if children of two or four can be expected to read the nonverbal cues and expressions of children not yet old enough to talk in order to assess whether there is consent, what excuse do full grown adults have?
Small children have no sense of shame or disgust or fear of their bodies. A body is what it is. It does what it does.
About commonly accepted violations of children's boundaries
Personal boundaries are guidelines, rules or limits that a person creates to identify reasonable, safe and permissible ways for other people to behave towards them and how they will respond when someone passes those limits. They are built out of a mix of conclusions, beliefs, opinions, attitudes, past experiences and social learning. This concept or life skill has been widely referenced in self-help books and used in the counseling profession since the mid-1980s.

Rondam RamblingsAbortion restrictions result in more abortions

Not that this was ever in any serious doubt, but now there is actual data published in The Lancet showing that abortion restrictions increase the number of abortions: In 2015–19, there were 121.0 million unintended pregnancies annually (80% uncertainty interval [UI] 112.8–131.5), corresponding to a global rate of 64 unintended pregnancies (UI 60–70) per 1000 women aged 15–49 years. 61% (58–63)

Rondam RamblingsMark your calendars: I am debating Kent Hovind on July 9

I've recently taken up a new hobby of debating young-earth creationists on YouTube.  (It's a dirty job, but somebody's gotta do it.)  I've done two of them so far [1][2], both on a creationist channel called Standing For Truth.  My third debate will be against Kent Hovind, one of the more prominent and, uh, outspoken members of the YEC community.  In case you haven't heard of him, here's a sample

Planet DebianHolger Levsen: 20200726-lts-survey

Final call to participate in the LTS survey

After 6 years of existence, we, the Debian LTS contributors, wanted to run a survey to learn more about how Debian LTS is used and perceived. Please take a few minutes to participate in the survey at https://surveys.debian.net/...!

Also, please hurry up: the survey closes at the end of July 27th in Samoa, which is roughly 48 hours from now.

Planet DebianRuss Allbery: Review: Paladin's Grace

Review: Paladin's Grace, by T. Kingfisher

Publisher: Red Wombat Studio
Copyright: 2020
ASIN: B0848Q8JVW
Format: Kindle
Pages: 399

Stephen was a paladin. Then his god died.

He was a berserker, an unstoppable warrior in the service of his god. Now, well, he's still a berserker, but going berserk when you don't have a god to control the results is not a good idea. He and his brothers were taken in by the Temple of the Rat, where they serve as guards, watch out for each other, and try to get through each day with an emptiness in their souls where a god should be.

Stephen has just finished escorting a healer through some of the poorer parts of town when a woman runs up to him and asks him to hide her. Their awkward simulated tryst is sufficient to fool the two Motherhood priests who were after her for picking flowers from the graveyard. Stephen then walks her home, and that would have been the end of it, except that neither can get the other out of their mind.

Despite first appearances, and despite being set in the same world and sharing a supporting character, this is not the promised sequel to Swordheart (which is apparently still coming). It's an entirely different paladin story. T. Kingfisher (Ursula Vernon's nom de plume when writing for adults) has a lot of things to say about paladins! And, apparently, paladin-involved romances.

On the romance front, Kingfisher clearly has a type. The general shape of the story will be familiar from Swordheart and The Wonder Engine: An independent and occasionally self-confident woman with various quirks, a hunky paladin who is often maddeningly dense, and a lot of worrying on both sides about whether the other person is truly interested in them and if their personal liabilities make a relationship a horrible idea. This is not my preferred romance formula (it provokes the occasional muttered "for the love of god just talk to each other"), but I liked this iteration of it better than the previous two, mostly because of Grace.

Grace is a perfumer, a trade she went into by being picked out of a lineup of orphans by a master perfumer for her sense of smell. One of Kingfisher's strengths as a writer is showing someone get lost in their routine day-to-day competence. When mixed with an inherently fascinating profession, this creates a great reading experience. Grace is also an abuse survivor, which made the communication difficulties with Stephen more interesting and subtle. Grace has created space and a life for herself, and her unwillingness to take risks on changes is a deep part of her sense of self and personal safety. As her past is slowly revealed, Kingfisher puts the reader in a position to share Stephen's anger and protectiveness, but then consistently puts Grace's own choices, coping mechanisms, and irritated refusal to be protected back into the center of the story. She has to accept some help as she gets entangled in the investigation of a highly political staged assassination attempt, but both that help and the relationship come on her own terms. It's very well-done.

The plot was enjoyable enough, although it involved a bit too much of constantly rising stakes and turns for the worst for my taste, and the ending had a touch of deus ex machina. Like Kingfisher's other books, though, the delight is in the unexpected details. Stephen knitting socks. Grace's frustrated obsession with why he smells like gingerbread. The beautifully practical and respectful relationship between the Temple of the Rat and Stephen's band of former paladins. (After only two books in which they play a major role, the Temple of the Rat is already one of my favorite fantasy religions.) Everything about Bishop Beartongue. Grace's friend Marguerite. And a truly satisfying ending.

The best part of this book, though, is the way Grace is shown as a complete character in a way that even most books with well-rounded characterization don't manage. Some things she does make the reader's heart ache because of the hints they provide about her past, but they're also wise and effective safety mechanisms for her. Kingfisher gives her space to be competent and prickly and absent-minded. She has a complete life: friends, work, goals, habits, and little rituals. Grace meets someone and falls in love, but one can readily imagine her not falling in love and going on with her life and that result wouldn't be tragic. In short, she feels like a grown adult who has made her own peace with where she came from and what she is doing. The book provides her an opportunity for more happiness and more closure without undermining her independence. I rarely see this in a novel, and even more rarely done this well.

If you haven't read any of Kingfisher's books and are in the mood for faux-medieval city romance involving a perfumer and a bit of political skulduggery, this is a great place to start. If you liked Swordheart, you'll probably like Paladin's Grace; like me, you may even like it a bit more. Recommended, particularly if you want something light and heart-warming.

Rating: 8 out of 10

,

Planet DebianNiels Thykier: Support for Debian packaging files in IDEA (IntelliJ/PyCharm)

I have been using the community editions of IntelliJ and PyCharm for a while now for Python or Perl projects. But it started to annoy me that for Debian packaging bits they would “revert” into a fancy version of notepad. Being fed up with it, I sat down and spent the last week studying how to write a plugin to “fix” this.

After a few prototypes, I have now released IDEA-debpkg v0.0.3 (Link to JetBrain’s official plugin listing with screenshots). It provides a set of basic features for debian/control like syntax highlighting, various degrees of content validation, folding of long fields, code completion and “CTRL + hover” documentation. For debian/changelog, it is mostly just syntax highlighting with a bit of fancy linking for now. I have not done anything for debian/rules, as I noted there is a Makefile plugin, which will have to do for now.

The code is available from GitHub and licensed under Apache-2.0. Contributors, issues/feature requests and pull requests are very welcome. Among the things I could use help with are:

  • Icons – both for the plugin and for the file types. Currently it is just colored text, which is as far as my artistic skills got with the space provided.
  • Color and text formatting for syntax highlighting.
  • Reports of papercut or features that would be very useful to prioritize.
  • Review of the “CTRL + hover” documentation. I am hoping for something that is helpful to new contributors, but I am very unlikely to have gotten it right (among other reasons because I wrote most of it to “get it done” rather than to “get it right”)

I hope you will take it for a spin if you have been looking for a bit of Debian packaging support in your PyCharm or other IDEA IDE. 🙂 Please do file bugs/issues if you run into problems, rough edges, unhelpful documentation, etc.

Planet DebianAndrew Cater: How to use the signed checksum files to verify Debian media images

Following on from the blog post the other day in some sense: someone has asked on the debian-user list: "I do not understand from the given page (https://www.debian.org/CD/verify)  how to use .sign files and gpg in order to check verify the authenticity of debian cds. I understand the part with using sha256sum or sha512sum or md5sum to check whether the files were downloaded correctly."

Distributed with the CD and other media images on Debian CD mirrors, there are files like MD5SUMS, MD5SUMS.sign, SHA256SUMS, SHA256SUMS.sign and so on.

SHA512SUMS is a plain text list of the SHA512 checksums for each of the files in the directory. SHA512SUMS.sign is the GPG-signed version of that file. This allows for non-repudiation: if the signature is valid, then the plain text file was signed by the owner of that key, and nobody has tampered with the checksums file since it was signed.

After downloading the SHA1SUMS, SHA256SUMS and SHA512SUMS files and the corresponding .sign files from, say, the primary Debian CD mirror, you can verify them as follows.

Assuming that you already have GPG installed: sha256sum and sha512sum are installed by the coreutils package, which Debian installs by default.

gpg --verify SHA512SUMS.sign SHA512SUMS will verify the .sign signature file against the signed file.

gpg --verify SHA512SUMS.sign SHA512SUMS
gpg: Signature made Sun 10 May 2020 00:16:52 UTC
gpg:                using RSA key DF9B9C49EAA9298432589D76DA87E80D6294BE9B


The signing key is the one listed on the Debian CD verification page mentioned above.

You can import that key from the Debian key servers if you wish.

gpg --keyserver keyring.debian.org --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B

You can also import the key from the SKS keyservers, which are often more available:

gpg --keyserver pool.sks-keyservers.net --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B 

and you then get:

gpg --verify SHA512SUMS.sign SHA512SUMS
gpg: Signature made Sun 10 May 2020 00:16:52 UTC
gpg:                using RSA key DF9B9C49EAA9298432589D76DA87E80D6294BE9B
gpg: Good signature from "Debian CD signing key " [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B


The Debian CD signing key isn't certified by any key I trust, hence the warning - but the output does show a good signature matching the primary key fingerprint given above.

Repeating the exercise from the other day and producing a Debian amd64 netinst file using jigdo, I can now check the checksum of the local .iso file against the checksum file distributed by Debian. If they match, the image I've generated is bit-for-bit identical to the official one. For my locally generated file:

sha512sum debian-10.4.0-amd64-netinst.iso
ec69e4bfceca56222e6e81766bf235596171afe19d47c20120783c1644f72dc605d341714751341051518b0b322d6c84e9de997815e0c74f525c66f9d9eb4295  debian-10.4.0-amd64-netinst.iso


and for the file checksum as distributed by Debian:

grep debian-10.4.0-amd64-netinst.iso SHA512SUMS
ec69e4bfceca56222e6e81766bf235596171afe19d47c20120783c1644f72dc605d341714751341051518b0b322d6c84e9de997815e0c74f525c66f9d9eb4295  debian-10.4.0-amd64-netinst.iso


and they match! 

As ever, I hope this blog post will help somebody.

[Edit: Someone has kindly pointed out that grep *iso SHA512SUMS | sha512sum -c will check this more efficiently.]
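The `sha512sum -c` form is easy to rehearse on a throwaway file before trying it on a real image. A minimal sketch (the file and list names here are invented purely for illustration):

```shell
# Rehearse the checksum step on a throwaway file: build a checksum
# list, then verify the file against it, exactly as one would verify
# an .iso against Debian's SHA512SUMS.
printf 'hello\n' > demo.txt
sha512sum demo.txt > SUMS.demo

# On a match this prints "demo.txt: OK" and exits 0; on a corrupted
# file it prints "demo.txt: FAILED" and exits non-zero.
grep demo.txt SUMS.demo | sha512sum -c

rm -f demo.txt SUMS.demo
```

The same exit-status behaviour makes the check easy to script: a non-zero status from `sha512sum -c` means the download should be discarded and repeated.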

Planet DebianCraig Small: 25 Years of Free Software

When did I start writing Free Software, now called Open Source? That’s a tricky question. Does the clock start with the first file edited, the first time it compiles, or perhaps even with some proto-program used to work out a concept for the real program that comes later?

So using the date you start writing, especially in an era before decent version control systems, is problematic. That is why I use the date of the first release of the first package as the start date. For me, that was Monday 24th July 1995.

axdigi and before

My first released Free Software program was axdigi, a layer-2 packet repeater for ham radio. This was uploaded to some FTP server, probably UCSD, in late July 1995. The README is dated 24th July 1995.

There were programs before this. I had written a closed-source (probably undistributable) driver for the Gracilis PackeTwin serial card and also some sort of primitive wireshark/tcpdump thing for capturing packet radio. Funny thing is that the capture program is the predecessor of both axdigi and a system that was used by a major Australian ISP for their internet billing system.

Choosing Free Software

So you have written something you think others might like: what software license will you use to distribute it? In 1995 it wasn’t that clear. This was the era of strange boutique licenses, including ones where it was OK to run the program as a ham radio operator but not as a CB radio operator (or at least they tried to make it work that way).

A friend of mine and the author of the Linux HAM HOWTO amongst other documents, Terry Dawson, suggested I use the GPL or another Free Software license. He explained what this Free Software thing was and said that if you want your program to be the most useful then something like the GPL will do it. So I released axdigi under the GPL license and most of my programs since then have used the same license. Something like the MIT or BSD licenses would have been fine too; I was just not going to use something closed or hand-crafted.

That was a while ago, I’ve written or maintained many programs since then. I also became a Debian maintainer (23 years so far) and adopted both procps and psmisc which I still maintain as both the Debian developer and upstream to this day.

What Next?

So it has been 25 years, a quarter of a century; what will happen next? Probably more of the same, though I’m not sure I will still be maintaining Free Software by the end of the next 25 years (I’ll be over 70 then). Perhaps the singularity will arrive and writing software will be something people only do at Rennie Festivals.

Come to the Festival! There is someone making horseshoes! Over there is a steam engine. See this other guy writing computer programs on a thing called a keyboard!

,

Krebs on SecurityThinking of a Cybersecurity Career? Read This

Thousands of people graduate from colleges and universities each year with cybersecurity or computer science degrees only to find employers are less than thrilled about their hands-on, foundational skills. Here’s a look at a recent survey that identified some of the bigger skills gaps, and some thoughts about how those seeking a career in these fields can better stand out from the crowd.

Virtually every week KrebsOnSecurity receives at least one email from someone seeking advice on how to break into cybersecurity as a career. In most cases, the aspirants ask which certifications they should seek, or what specialization in computer security might hold the brightest future.

Rarely am I asked which practical skills they should seek to make themselves more appealing candidates for a future job. And while I always preface any response with the caveat that I don’t hold any computer-related certifications or degrees myself, I do speak with C-level executives in cybersecurity and recruiters on a regular basis and frequently ask them for their impressions of today’s cybersecurity job candidates.

A common theme in these C-level executive responses is that a great many candidates simply lack hands-on experience with the more practical concerns of operating, maintaining and defending the information systems which drive their businesses.

Granted, most people who have just graduated with a degree lack practical experience. But happily, a somewhat unique aspect of cybersecurity is that one can gain a fair degree of mastery of hands-on skills and foundational knowledge through self-directed study and old fashioned trial-and-error.

One key piece of advice I nearly always include in my response to readers involves learning the core components of how computers and other devices communicate with one another. I say this because a mastery of networking is a fundamental skill that so many other areas of learning build upon. Trying to get a job in security without a deep understanding of how data packets work is a bit like trying to become a chemical engineer without first mastering the periodic table of elements.

But please don’t take my word for it. The SANS Institute, a Bethesda, Md. based security research and training firm, recently conducted a survey of more than 500 cybersecurity practitioners at 284 different companies in an effort to suss out which skills they find most useful in job candidates, and which are most frequently lacking.

The survey asked respondents to rank various skills from “critical” to “not needed.” Fully 85 percent ranked networking as a critical or “very important” skill, followed by a mastery of the Linux operating system (77 percent), Windows (73 percent), common exploitation techniques (73 percent), computer architectures and virtualization (67 percent) and data and cryptography (58 percent). Perhaps surprisingly, only 39 percent ranked programming as a critical or very important skill (I’ll come back to this in a moment).

How did the cybersecurity practitioners surveyed grade their pool of potential job candidates on these critical and very important skills? The results may be eye-opening:

“Employers report that student cybersecurity preparation is largely inadequate and are frustrated that they have to spend months searching before they find qualified entry-level employees if any can be found,” said Alan Paller, director of research at the SANS Institute. “We hypothesized that the beginning of a pathway toward resolving those challenges and helping close the cybersecurity skills gap would be to isolate the capabilities that employers expected but did not find in cybersecurity graduates.”

The truth is, some of the smartest, most insightful and talented computer security professionals I know today don’t have any computer-related certifications under their belts. In fact, many of them never even went to college or completed a university-level degree program.

Rather, they got into security because they were passionately and intensely curious about the subject, and that curiosity led them to learn as much as they could — mainly by reading, doing, and making mistakes (lots of them).

I mention this not to dissuade readers from pursuing degrees or certifications in the field (which may be a basic requirement for many corporate HR departments) but to emphasize that these should not be viewed as some kind of golden ticket to a rewarding, stable and relatively high-paying career.

More to the point, without a mastery of one or more of the above-mentioned skills, you simply will not be a terribly appealing or outstanding job candidate when the time comes.

BUT..HOW?

So what should you focus on, and what’s the best way to get started? First, understand that while there are a near infinite number of ways to acquire knowledge and virtually no limit to the depths you can explore, getting your hands dirty is the fastest way to learning.

No, I’m not talking about breaking into someone’s network, or hacking some poor website. Please don’t do that without permission. If you must target third-party services and sites, stick to those that offer recognition and/or incentives for doing so through bug bounty programs, and then make sure you respect the boundaries of those programs.

Besides, almost anything you want to learn by doing can be replicated locally. Hoping to master common vulnerability and exploitation techniques? There are innumerable free resources available: purpose-built exploitation frameworks like Metasploit, deliberately vulnerable applications like WebGoat, and custom Linux distributions like Kali Linux that are well supported by tutorials and videos online. Then there are a number of free reconnaissance and vulnerability discovery tools like Nmap, Nessus, OpenVAS and Nikto. This is by no means a complete list.

Set up your own hacking labs. You can do this with a spare computer or server, or with older hardware that is plentiful and cheap on places like eBay or Craigslist. Free virtualization tools like VirtualBox can make it simple to get friendly with different operating systems without the need for additional hardware.

Or look into paying someone else to set up a virtual server that you can poke at. Amazon’s EC2 services are a good low-cost option here. If it’s web application testing you wish to learn, you can install any number of web services on computers within your own local network, such as older versions of WordPress, Joomla or shopping cart systems like Magento.

Want to learn networking? Start by getting a decent book on TCP/IP and really learning the network stack and how each layer interacts with the other.

And while you’re absorbing this information, learn to use some tools that can help put your newfound knowledge into practical application. For example, familiarize yourself with Wireshark and Tcpdump, handy tools relied upon by network administrators to troubleshoot network and security problems and to understand how network applications work (or don’t). Begin by inspecting your own network traffic, web browsing and everyday computer usage. Try to understand what applications on your computer are doing by looking at what data they are sending and receiving, how, and where.

ON PROGRAMMING

While being able to program in languages like Go, Java, Perl, Python, C or Ruby may or may not be at the top of the list of skills demanded by employers, having one or more languages in your skillset is not only going to make you a more attractive hire, it will also make it easier to grow your knowledge and venture into deeper levels of mastery.

It is also likely that depending on which specialization of security you end up pursuing, at some point you will find your ability to expand that knowledge is somewhat limited without understanding how to code.

For those intimidated by the idea of learning a programming language, start by getting familiar with basic command line tools on Linux. Just learning to write basic scripts that automate specific manual tasks can be a wonderful stepping stone. What’s more, a mastery of creating shell scripts will pay handsome dividends for the duration of your career in almost any technical role involving computers (regardless of whether you learn a specific coding language).
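As a concrete (and purely illustrative) example of that kind of stepping-stone script, here is a tiny shell loop that makes date-stamped backup copies of config files before you edit them; all file names are invented for the sketch:

```shell
#!/bin/sh
# Illustrative first automation script: make a date-stamped backup
# copy of every .conf file in the current directory, the sort of
# repetitive manual task worth scripting early on.
for f in *.conf; do
    [ -e "$f" ] || continue   # glob matched nothing; skip literal "*.conf"
    cp -- "$f" "$f.$(date +%Y%m%d).bak"
    echo "backed up $f"
done
```

Small scripts like this teach quoting, globbing, exit statuses and command substitution almost by accident, which is exactly the kind of incidental learning described above.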

GET HELP

Make no mistake: Much like learning a musical instrument or a new language, gaining cybersecurity skills takes most people a good deal of time and effort. But don’t get discouraged if a given topic of study seems overwhelming at first; just take your time and keep going.

That’s why it helps to have support groups. Seriously. In the cybersecurity industry, the human side of networking takes the form of conferences and local meetups. I cannot stress enough how important it is for both your sanity and career to get involved with like-minded people on a semi-regular basis.

Many of these gatherings are free, including Security BSides events, DEFCON groups, and OWASP chapters. And because the tech industry continues to be disproportionately populated by men, there are also a number of cybersecurity meetups and membership groups geared toward women, such as the Women’s Society of Cyberjutsu and others listed here.

Unless you live in the middle of nowhere, chances are there’s a number of security conferences and security meetups in your general area. But even if you do reside in the boonies, the good news is many of these meetups are going virtual to avoid the ongoing pestilence that is the COVID-19 epidemic.

In summary, don’t count on a degree or certification to prepare you for the kinds of skills employers are going to understandably expect you to possess. That may not be fair or as it should be, but it’s likely on you to develop and nurture the skills that will serve your future employer(s) and employability in this field.

I’m certain that readers here have their own ideas about how newbies, students and those contemplating a career shift into cybersecurity can best focus their time and efforts. Please feel free to sound off in the comments. I may even update this post to include some of the better recommendations.

CryptogramFriday Squid Blogging: Introducing the Seattle Kraken

The Kraken is the name of Seattle's new NHL franchise.

I have always really liked collective nouns as sports team names (like the Utah Jazz or the Minnesota Wild), mostly because it's hard to describe individual players.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianDirk Eddelbuettel: anytime 0.3.8: Minor Maintenance

A new minor release of the anytime package arrived on CRAN overnight. This is the nineteenth release, and it comes just over six months after the previous release, further indicating that we appear to have reached a nice level of stability.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release mostly plays games with CRAN. Given the lack of specification for setups on their end, reproducing test failures remains, to put it mildly, “somewhat challenging”. So we eventually gave up, weaponed up once more, and now explicitly test for the one distribution where tests failed (when they clearly passed everywhere else). With that we now have three new logical predicates for various Linux distribution flavours, and if that dreaded one is seen in one test file, the test is skipped. And with that we now score twelve out of twelve OKs. This being a game of cat and mouse, I am sure someone somewhere will soon invent a new test…

The full list of changes follows.

Changes in anytime version 0.3.8 (2020-07-23)

  • A small utility function was added to detect the Linux distribution used in order to fine-tune tests once more.

  • Travis now uses Ubuntu 'bionic' and R 4.0.*.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

LongNowThe Comet Neowise as seen from the ISS

For everyone who cannot see the Comet Neowise with their own eyes this week — or just wants to see it from a higher perch — this video by artist Seán Doran combines 550 NASA images from the International Space Station into a real-time view of the comet from 250 miles above Earth’s surface, moving at 17,500 mph.

Planet DebianMike Gabriel: Ayatana Indicators / IDO - Menu Rendering Fixed with vanilla GTK-3+

At DebConf 17 in Montreal, I gave a talk about Ayatana Indicators [1] and the project's goal to continue the — by then already dropped out of maintenance — Ubuntu Indicators in a separate upstream project, detached from Ubuntu and its Ubuntu'isms.

Stalling

The whole Ayatana Indicators project hit a bit of a show stopper: the IDO (Indicator Display Object) rendering did not work in vanilla GTK-3 without a certain patch [2] that only Ubuntu carries in their GTK-3 package. Addressing GTK developers upstream some years back (after GTK 3.22 had already gone into long-term maintenance mode) and asking for late patch acceptance did not work out (as already assumed). Ayatana Indicators stalled at a level of 90% working fine, but those nice and shiny special widgets, like the calendar widget, the audio volume slider widgets, switch widgets, etc., could not be rendered appropriately in GTK-based desktop environments (e.g. via the MATE Indicator Applet) on distros other than Ubuntu.

I never really had the guts to sit down without a defined ending and find a patch / solution to this nasty problem. Ayatana Indicators stalled as a whole. I kept it alive and defended its code base against various GLib and what-not deprecations and kept it in Debian, but the software was actually partially broken / dysfunctional.

Taking the Dog for a Walk and then It Became all Light+Love

Several days back, I received a mail from Robert Tari [3]. I was outside on a hike with our dog and thought, ah well, let's check emails... I couldn't believe what I read 15 seconds later. I could, in fact, hardly breathe...

I have known Robert from earlier email exchanges. Robert maintains various "little" upstream projects, such as Caja Rename, Odio, and Unity Mail, that I have looked into earlier regarding Debian packaging. Robert is also a Manjaro contributor and he has been working on bringing Ayatana Indicators to Manjaro MATE. In the early days, without knowing Robert, I even forked one of his projects (indicator-notification) and turned it into an Ayatana Indicator.

Robert and I also exchanged some emails about Ayatana Indicators already a couple of weeks ago. I got the sense of him maybe being up to something already then. Oh, yeah!!!

It turned out that Robert and I share the same "love" for the Ubuntu Indicators concept [4]. From his email, it became clear that Robert had spent the last 1-2 weeks drowned in the Ayatana IDO and libayatana-indicator code and worked himself through the bowels of it in order to understand the code concept of Indicators to its very depth.

When emerging back from his journey, he presented me (or rather: the world) a patch [5] against libayatana-indicator that makes it possible to render IDO objects even if a vanilla GTK-3 is installed on the system. This patch is a game changer for Indicator lovers.

When Robert sent me his mail pointing me to this patch, I think, over the past five years, I have never felt more excited (except for the exact moment of getting married to my wife two-to-three years ago) than during that moment when my brain tried to process his email. "Like a kid on Christmas Eve...", Robert wrote in one of his later mails to me. Indeed, like a "kid on Christmas Eve", Robert.

Try It Out

As a proof of all this to the Debian people, I have just done the first release of ayatana-indicator-datetime and uploaded it to Debian's NEW queue. Robert is doing the same for Manjaro. The Ayatana Indicator Sound will follow after my vacation.

For fancy widget rendering in Ayatana Indicator's system indicators, make sure you have libayatana-indicator 0.7.0 or newer installed on your system.

Credits

One of the biggest thanks ever I send herewith to Robert Tari! Robert is now co-maintainer of Ayatana Indicators. Welcome! Now, there is finally a team of active contributors. This is so delightful!!!

References

P.S.

Expect more Ayatana Indicators to appear in your favourite distro soon...

Cryptogram: Update on NIST's Post-Quantum Cryptography Program

NIST has posted an update on their post-quantum cryptography program:

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. NIST has now begun the third round of public review. This "selection round" will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.

[...]

For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.

"We're calling these seven the finalists," Moody said. "For the most part, they're general-purpose algorithms that we think could find wide application and be ready to go after the third round."

The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard. Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.

"The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures," he said. "But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we'll find a way to look at newer approaches too."

Details are here. This is all excellent work, and exemplifies NIST at its best. The quantum-resistant algorithms will be standardized far in advance of any practical quantum computer, which is how we all want this sort of thing to go.

Planet Debian: Raphaël Hertzog: The Debian Handbook has been updated for Debian 10

Better late than never as we say… thanks to the work of Daniel Leidert and Jorge Maldonado Ventura, we managed to complete the update of my book for Debian 10 Buster.

You can get the electronic version on debian-handbook.info or the paperback version on lulu.com. Or you can just read it online.

Translators are busy updating their translations, with German and Norwegian Bokmål leading the way…


Kevin Rudd: CNN: Cold War 1.5

INTERVIEW VIDEO
TV INTERVIEW
CONNECT THE WORLD, CNN
24 JULY 2020

Topics: US-China relations, Australia’s coronavirus second wave

BECKY ANDERSON: Kevin Rudd is the president of the Asia Society Policy Institute and he’s joining us now from the Sunshine Coast in Australia. It’s great to have you. This type of rhetoric you say is not new. But it does feel like we are approaching a precipitous point.

KEVIN RUDD: Well Becky, I think there’s been a lot of debate in recent months as to whether we’re on the edge of a new Cold War between China and the United States. Rather than being Cold War 2.0, I basically see it as Cold War 1.5. That is, it’s sliding in that direction, and sliding rapidly in that direction. But we’re by no means there yet. And one of the reasons we’re not there yet is because of the continued depth and breadth of the economic relationship between China and the United States, which was never the case, in terms of the historical relationship, between the United States and the Soviet Union during the first Cold War. That may change, but that I think is where we are right now.

ANDERSON: We haven’t seen an awful lot of retaliation nor very much of a narrative really from Beijing in response to some of this US anti-China narrative. What do you expect next from Beijing?

RUDD: Well, in terms of the consulate general, I think as night follows day, you’re likely to see either a radical reduction in overall American diplomatic staff numbers in China and consular staff numbers, or the direct reciprocal action, which would close for example, the US Consulate General in perhaps Chengdu or Wuhan or in Shenyang, somewhere like that. But this as you said before in your introduction, Becky, forms just one part of a much broader deterioration in the relationship. I’ve been observing the US-China relationship for the better part of 35 years. Really, since Nixon and Kissinger first went to Beijing in 1971/1972. This is the low point, the lowest point of the US-China relationship in now half a century. And it’s only heading in one direction. Is there an exit ramp? Open question. But the dynamics both in Beijing and in Washington are pulling this relationship right apart, and that leaves third countries in an increasingly difficult position.

ANDERSON: Yes, and I wanted to talk to you about that because Australia is continually torn between the sort of economic relationship with China that it has, and its strategic partnership with the US. We have seen the US to all intents and purposes, leaning on the UK over Huawei. How should other countries engage with China going forward?

RUDD: Well, one thing I think is to understand that Xi Jinping’s China is quite different from the China of Hu Jintao, Jiang Zemin or even Deng Xiaoping. And since Xi Jinping took over in 2012/2013, it’s a much more assertive China, right across the board. And even in this COVID reality of 2020, we see not just the Hong Kong national security legislation, we see new actions by China in the South China Sea, against Taiwan, against Japan, in the East China Sea, on the Sino-Indian border, and the frictions with Canada, Australia, the United Kingdom – you’ve just mentioned – and elsewhere as well. So, this is a new, assertive China – quite different from the one we’ve seen in the past. So, your question is entirely valid – how do, as it were, the democracies of Asia and the democracies of Europe and elsewhere respond to this new phenomenon on the global stage? I think it’s along these lines. Number one, be confident in the position which democracies have, that we believe in universal values, and human rights and democracy. And we’re not about to change. Number two, many of us, whether we’re in Asia or Europe, or longstanding allies, the United States, that’s not about change. But number three, to make it plain to our Chinese friends that on a reciprocal basis, we wish to have a mutually productive trade, investment, and capital markets relationship. And four, the big challenges of global governance – whether it’s pandemics, or climate change, or stability of global financial markets, and the current crisis we have around the world – where it is incumbent on all of us to work together. I think those four principles form a basis for us dealing with Xi Jinping’s China.

ANDERSON: Kevin, do you see this as a Cold War?

RUDD: As I said before, we’re trending that way. As I said, the big difference between the Soviet Union and the United States is that China and the United States are deeply economically enmeshed and have become that way over the last 20 years or so. And that never was the case in the old Cold War. Secondly, in the old Cold War, we basically had a strategic relationship of mutually assured destruction, which came to the flashpoint of the Cuban Missile Crisis in the early 1960s. That’s not the case either. But I’ve got to say in all honesty, it’s trending in a fundamentally negative direction, and when we start to see actions like shutting down each other’s consulate generals, that does remind me of where we got to in the last Cold War as well. There should be an exit ramp, but it’s going to require a new strategic framework for the US-China relationship, based on what I describe as: managed strategic competition between these two powers, where each side’s red lines are well recognized, understood and observed – and competition occurs, as it were, in all other domains. At present, we don’t seem to have parameters or red lines at all.

ANDERSON: And we might have had this discussion four or five months ago. The new layer of course, is the coronavirus pandemic and the way that the US has responded which you say has provided an opportunity for the Chinese to steal a march on the US with regard to its position and its power around the world. Is Beijing, do you think – if you believe that there is a power vacuum at present after this coronavirus response – is Beijing taking advantage of that vacuum?

RUDD: Well, when the coronavirus broke out, China was, by definition, in a defensive position, because the virus came from Wuhan, and therefore, as the virus then spread across the world, China found itself in a deeply problematic position – not just the damage to its economy at home – but frankly its reputation abroad as well. However, President Trump’s America has demonstrated to the world that a) his administration can’t handle the virus within the United States itself, and b) there has been a phenomenal lack of American global leadership in dealing with the public health and global economic dimensions of – let’s call it the COVID-19 crisis – across the world. So, the argument that I’m attracted to is that both these great powers have been fundamentally damaged by the coronavirus crisis that has afflicted the world. So the challenge for the future is whether in fact we a) see a change in administration in Washington with Biden, and secondly, whether a Democratic administration will choose to reassert American global leadership through the institutions of global governance, where frankly, the current administration has left so many vacuums across the UN system and beyond it. And that remains the open question – which I think the international community is focusing on – as we move towards that event in November, when the good people of the United States cast their ballot.

ANDERSON: Yeah, no, fascinating. I’ll just stick to the coronavirus for a final question for you and thank you for this sort of wide-ranging discussion. Australia, of course, applauded for its ability to act fast and flatten its coronavirus curve back in April. That has all been derailed. We’ve seen a second wave. It’s worse than the first. Earlier this week, the country reported its worst day since the pandemic began despite tough new restrictions. What do you believe it will take to flatten the curve again? And are you concerned that the situation in Australia is slipping out of control?

RUDD: What the situation in the state of Victoria and the city of Melbourne in particular demonstrates is what we see in so many countries around the world, which is the ease with which a second wave effect can be made manifest. It’s not just of course in Australia. We see evidence of this in Hong Kong. We see it in other countries, where in fact, the initial management of the crisis was pretty effective. What the lesson of Melbourne, and the lesson of Victoria is for all of us, is that when it comes to maintaining the disciplines of social distancing, of proper quarantine arrangements, as well as contact tracing and the rest, that there is no, as it were, release of our discipline applied to these challenges. And in the case of Victoria, it was in Melbourne – it was simply a poor application of quarantine arrangements in a single hotel, for Australians returning from elsewhere in the world, that led to this community-level transmission. And that can happen in the northern part of the United Kingdom. It can happen in regional France; it can happen anywhere in Germany. What’s the message? Vigilance across the board, until we can eliminate this thing. We’ve still got a lot to learn from Jacinda Ardern’s success in New Zealand in virtually eliminating this virus altogether.

ANDERSON: With that, we’re going to leave it there. Kevin Rudd, former Prime Minister of Australia, it’s always a pleasure. Thank you very much indeed for joining us.

RUDD: Good to be with you.

ANDERSON: Extremely important subject, US-China relations at present.

The post CNN: Cold War 1.5 appeared first on Kevin Rudd.

Planet Debian: Evgeni Golov: Building documentation for Ansible Collections using antsibull

In my recent post about building and publishing documentation for Ansible Collections, I've mentioned that the Ansible Community is currently in the process of making their build tools available as a separate project called antsibull instead of keeping them in the hacking directory of ansible.git.

I've also said that I couldn't get the documentation to build with antsibull-docs as it wouldn't support collections yet. Thankfully, Felix Fontein, one of the maintainers of antsibull, pointed out that I was wrong and later versions of antsibull actually have partial collections support. So I went ahead and tried it again.

And what should I say? Two bug reports by me and four patches by Felix Fontein later, I can use antsibull-docs to generate the Foreman Ansible Modules documentation!

Let's see what's needed instead of the ugly hack in detail.

We obviously don't need to clone ansible.git anymore and install its requirements manually. Instead we can just install antsibull (0.17.0 contains all the above patches). We also need Ansible (or ansible-base) 2.10 or newer, which currently only exists as a pre-release. 2.10 is the first version that has an ansible-doc that can list the contents of a collection, which antsibull-docs requires to work properly.

The current implementation of collections documentation in antsibull-docs requires the collection to be installed as in "Ansible can find it". We had the same requirement before to find the documentation fragments and can just re-use the installation we do for various other build tasks in build/collection and point at it using the ANSIBLE_COLLECTIONS_PATHS environment variable or the collections_paths setting in ansible.cfg1. After that, it's only a matter of passing --use-current to make it pick up installed collections instead of trying to fetch and parse them itself.
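For completeness, here is a minimal sketch of what the ansible.cfg route could look like (the build/collection path is the installation directory from our build tasks; adjust it to your layout):

```ini
# ansible.cfg -- let Ansible (and hence antsibull-docs --use-current)
# find the collection installed into build/collection
[defaults]
collections_paths = ./build/collection
```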

Given that the main goal of antsibull-docs collection is to build documentation for multiple collections at once, it defaults to placing the generated files into <dest-dir>/collections/<namespace>/<collection>. However, we only build documentation for one collection and thus pass --squash-hierarchy to avoid this longish path and make it generate documentation directly in <dest-dir>. Thanks to Felix for implementing this feature for us!

And that's it! We can generate our documentation with a single line now!

antsibull-docs collection --use-current --squash-hierarchy --dest-dir ./build/plugin_docs theforeman.foreman

The PR to switch to antsibull is open for review and I hope to get it merged soon!

Oh and you know what's cool? The documentation is now also available as a preview on ansible.com!


  1. Yes, the paths version of that setting is deprecated in 2.10, but as we support older Ansible versions, we still use it. 

Planet Debian: Martin Michlmayr: beancount2ledger 1.1 released

Martin Blais recently announced that he'd like to re-organize the beancount code and split out some functionality into separate projects, including the beancount to ledger/hledger conversion code previously provided by bean-report.

I agreed to take on the maintenance of this code and I've now released beancount2ledger, a beancount to ledger/hledger converter.

You can install beancount2ledger with pip:

pip3 install beancount2ledger

Please report issues to the GitHub tracker.

There are a number of outstanding issues I'll fix soon, but please report any other issues you encounter.

Note that I'm not very familiar with hledger. I intend to sync up with hledger author Simon Michael soon, but please file an issue if you notice any problems with the hledger conversion.

Version 1.1 contains a number of fixes compared to the latest code in bean-report:

1.1 (2020-07-24)

  • Preserve metadata information (issue #3)
  • Preserve cost information (lot dates and lot labels/notes) (issue #5)
  • Avoid adding two prices in hledger (issue #2)
  • Avoid trailing whitespace in account open declarations (issue #6)
  • Fix indentation issue in postings (issue #8)
  • Fix indentation issue in price entries
  • Drop time information from price (P) entries
  • Add documentation
  • Relicense under GPL-2.0-or-later (issue #1)

1.0 (2020-07-22)

  • Split ledger and hledger conversion from bean-report into a standalone tool
  • Add man page for beancount2ledger(1)

Worse Than Failure: Error'd: Free Coff...Wait!

"Hey! I like free coffee! Let me just go ahead and...um...hold on a second..." writes Adam R.

 

"I know I have a lot of online meetings these days but I don't remember signing up for this one," Ged M. wrote.

 

Peter G. writes, "The $60 off this $1M nylon bag?! What a deal! I should buy three of them!"

 

"So, because it's free, it's null, so I guess that's how Starbucks' app logic works?" James wrote.

 

Graham K. wrote, "How very 'zen' of National Savings to give me this particular error when I went to change my address."

 

"I'm not sure I trust "scenem3.com" with their marketing services, if they send out unsolicited template messages. (Muster is German for template, Max Muster is our equivalent of John Doe.)" Lukas G. wrote.

 


Planet Debian: Reproducible Builds (diffoscope): diffoscope 153 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 153. This version includes the following changes:

[ Chris Lamb ]

* Drop some legacy argument styles; --exclude-directory-metadata and
  --no-exclude-directory-metadata have been replaced with
  --exclude-directory-metadata={yes,no}.

* Code improvements:

  - Make it easier to navigate the main.py entry point.
  - Use a relative import for get_temporary_directory in diffoscope.diff.
  - Rename bail_if_non_existing to exit_if_paths_do_not_exist.
  - Rewrite exit_if_paths_do_not_exist to not check files multiple times.

* Documentation improvements:

  - CONTRIBUTING.md:

    - Add a quick note about adding/suggesting new options.
    - Update and expand the release process documentation.
    - Add a reminder to regenerate debian/tests/control.

  - README.rst:

    - Correct URL to build job on Jenkins.
    - Clarify and correct contributing info to point to salsa.debian.org.

You can find out more by visiting the project homepage.


Planet Debian: Dima Kogan: Finding long runs of "notable" data in a log

Here's yet another instance where the data processing I needed done could be accomplished entirely in the shell, with vnlog tools.

I have some time-series data in a text table. Via some join and filter operations, I have boiled down this table to a sequence of time indices where something interesting happened. For instance let's say it looks like this:

t.vnl

# time
1976
1977
1978
1979
1980
1986
1987
1988
1989
2011
2012
2013
2014
2015
4679
4680
4681
4682
4683
4684
4685
4686
4687
7281
7282
7283
7291
7292
7293

I'd like to find the longest contiguous chunk of time where the interesting thing kept happening. How? Like this!

$ < t.vnl vnl-filter -p 'time,d=diff(time)' |
          vnl-uniq -c -f -1 |
          vnl-filter 'd==1' -p 'count=count+1,time=time-1' |
          vnl-sort -nrk count |
          vnl-align
# count time
9       4679
5       2011
5       1976
4       1986
3       7291
3       7281

Bam! So the longest run was 9-frames-long, starting at time = 4679.
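As an aside from me, the editor: for readers who don't have the vnlog tools installed, the same run-length computation can be sketched in plain awk over the t.vnl file above (this is an illustrative alternative, not part of the author's vnlog workflow):

```shell
# Group consecutive time values into runs and print "count start" per run;
# sort puts the longest runs first. Expected top line: 9 4679.
awk '!/^#/ {
       if ($1 == prev + 1) count++
       else { if (count) print count, start; count = 1; start = $1 }
       prev = $1
     }
     END { if (count) print count, start }' t.vnl | sort -nr
```

The else-branch flushes the finished run and starts a new one; the END block flushes the final run.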

Planet Debian: Sean Whitton: Keyboarding updates

Marks and mark rings in GNU Emacs

I recently attempted to answer the question of whether experienced Emacs users should consider partially or fully disabling Transient Mark mode, which is (and should be) the default in modern GNU Emacs.

That blog post was meant to be as information-dense as I could make it, but now I’d like to describe the experience I have been having after switching to my custom pseudo-Transient Mark mode, which is labelled “mitigation #2” in my older post.

In summary: I feel like I’ve uncovered a whole editing paradigm lying just beneath the surface of the editor I’ve already been using for years. That is cool and enjoyable in itself, but I think it’s also helped me understand other design decisions about the basics of the Emacs UI better than before – in particular, the ideas behind how Emacs chooses where to display buffers, which were very frustrating to me in the past. I am now regularly using relatively obscure commands like C-x 4 C-o. I see it! It all makes sense now!

I would encourage everyone who has never used Emacs without Transient Mark mode to try turning it off for a while, either fully or partially, just to see what you can learn. It’s fascinating how it can come to seem more convenient and natural to pop the mark just to go back to the end of the current line after fixing up something earlier in the line, even though doing so requires pressing two modified keys instead of just C-e.

Eshell

I was amused to learn some years ago that someone was trying to make Emacs work as an X11 window manager. I was amazed and impressed to learn, more recently, that the project is still going and a fair number of people are using it. Kudos! I suspect that the basic motivation for such projects is that Emacs is a virtual Lisp machine, and it has a certain way of managing visible windows, and people would like to be able to bring both of those to their X11 window management.

However, I am beginning to suspect that the intrinsic properties of Emacs buffers are tightly connected to the ways in which Emacs manages visible windows, and the intrinsic properties of Emacs buffers are at least as fundamental as its status as a virtual Lisp machine. Thus I am not convinced by the idea of trying to use Emacs’ ways of handling visible windows to handle windows which do not contain Emacs buffers. (but it’s certainly nice to learn it’s working out for others)

The more general point is this. Emacs buffers are as fundamental to Emacs as anything else is, so it seems unlikely to be particularly fruitful to move something typically done outside of Emacs into Emacs, unless that activity fits naturally into an Emacs buffer or buffers. Being suited to run on a virtual Lisp machine is not enough.

What could be more suited to an Emacs buffer, however, than a typical Unix command shell session? By this I mean things like running commands which produce text output, and piping this output between commands and into and out of files. Typically the commands one enters are sort of like tiny programs in themselves, even if there are no pipes involved, because you have to spend time determining just what options to pass to achieve what you want. It is great to have all your input and output available as ordinary buffer text, navigable just like all your other Emacs buffers.

Full screen text user interfaces, like top(1), are not the sort of thing I have in mind here. These are suited to terminal emulators, and an Emacs buffer makes a poor terminal emulator – what you end up with is a sort of terminal emulator emulator. Emacs buffers and terminal emulators are just different things.

These sorts of thoughts lead one to Eshell, the Emacs Shell. Quoting from its documentation:

The shell’s role is to make [system] functionality accessible to the user in an unformed state. Very roughly, it associates kernel functionality with textual commands, allowing the user to interact with the operating system via linguistic constructs. Process invocation is perhaps the most significant form this takes, using the kernel’s ‘fork’ and ‘exec’ functions.

Emacs is … a user application, but it does make the functionality of the kernel accessible through an interpreted language – namely, Lisp. For that reason, there is little preventing Emacs from serving the same role as a modern shell. It too can manipulate the kernel in an unpredetermined way to cause system changes. All it’s missing is the shell-ish linguistic model.

Eshell has been working very well for me for the past month or so, for, at least, Debian packaging work, which is very command shell-oriented (think tools like dch(1)).

The other respects in which Eshell is tightly integrated with the rest of Emacs are icing on the cake. In particular, Eshell can transparently operate on remote hosts, using TRAMP. So when I need to execute commands on Debian’s ftp-master server to process package removal requests, I just cd /ssh:fasolo: in Eshell. Emacs takes care of disconnecting and connecting to the server when needed – there is no need to maintain a fragile SSH connection and a shell process (or anything else) running on the remote end.

Or I can cd /ssh:athena\|sudo:root@athena: to run commands as root on the webserver hosting this blog, and, again, the text of the session survives on my laptop, and may be continued at my leisure, no matter whether athena reboots, or I shut my laptop and open it up again the next morning. And of course you can easily edit files on the remote host.

Planet Debian: Sean Whitton: Kinesis Advantage 2 for heavy Emacs users

A little under two months ago I invested in an expensive ergonomic keyboard, a Kinesis Advantage 2, and set about figuring out how to use it most effectively with Emacs. The default layout for the keyboard is great for strong typists who control their computer mostly with their mouse, but less good for Emacs users, who are strong typists that control their computer mostly with their keyboard.

It took me several tries to figure out where to put the ctrl, alt, backspace, delete, return and spacebar keys, and aside from one forum post I ran into, I haven’t found anyone online who came up with anything much like what I’ve come up with, so I thought I should probably write up a blog post.

The mappings

  • The pairs of arrow keys under the first two fingers of each hand become ctrl and alt/meta keys. This way there is a ctrl and alt/meta key for each hand, to reduce the need for one-handed chording.

    I bought the keyboard expecting to have all modifier keys on my thumbs. However, (i) only the two large thumb keys can be pressed without lifting your hand away from the home row, or stretching in a way that’s not healthy; and (ii) only the outermost large thumb key can be comfortably held down as a modifier.

    It takes a little work to get used to using the third and fifth fingers of one hand to hold down both alt/meta and shift, for typing core Emacs commands like M-^ and M-@, but it does become natural to do so.

  • The arrow keys are moved to the four ctrl/alt/super keys which run along the top of the thumb key areas.

  • The outermost large thumb key of each hand becomes a spacebar. This means it is easy to type C-u C-SPC with the right hand while the left hand holds down control, and sequences like C-x C-SPC and C-a C-SPC C-e with the left hand with the right hand holding down control.

    It took me a while to realise that it is not wasteful to have two spacebars.

  • The inner large thumb keys become backspace and return.

  • The international key becomes delete.

    Rarely needed for Emacs users, as we have C-d, so initially I just had no delete key, but soon came to regret this when trying to edit text in web forms.

  • Caps Lock becomes Super, but remains caps lock on the keypad layer.

    See my rebindings for ordinary keyboards for some discussion of having just a single Super key.

Sequences of two modified keys on different halves of the keyboard

It is desirable to input sequences like C-x C-o without switching which hand is holding the control key. This requires one-handed chording, but this is treacherous when the modifier keys are not under the thumbs, because you might need to press the modified key with the same finger that’s holding the modifier!

Fortunately, most or all sequences of two keys modified by ctrl or alt/meta, where each of the two modifier keys is typed by a different hand, begin with C-c, C-x or M-g, and the left hand can handle each of these on its own. This leaves the right hand completely free to hit the second modified key while the left hand continues to hold down the modifier.

My rebindings for ordinary keyboards

I have some rebindings to make Emacs usage more ergonomic on an ordinary keyboard. So far, my Kinesis Advantage setup is close enough to that setup that I’m not having difficulty switching back and forth from my laptop keyboard.

The main difference is for sequences of two modified keys on different halves of the keyboard – which of the two modified keys is easiest to type as a one-handed chord is different on the Kinesis Advantage than on my laptop keyboard. At this point, I’m executing these sequences without any special thought, and they’re rare enough that I don’t think I need to try to determine what would be the most ergonomic way to handle them.

Krebs on SecurityNY Charges First American Financial for Massive Data Leak

In May 2019, KrebsOnSecurity broke the news that the website of mortgage title insurance giant First American Financial Corp. had exposed approximately 885 million records related to mortgage deals going back to 2003. On Wednesday, regulators in New York announced that First American was the target of their first ever cybersecurity enforcement action in connection with the incident, charges that could bring steep financial penalties.

First American Financial Corp.

Santa Ana, Calif.-based First American [NYSE:FAF] is a leading provider of title insurance and settlement services to the real estate and mortgage industries. It employs some 18,000 people and brought in $6.2 billion in 2019.

As first reported here last year, First American’s website exposed 16 years’ worth of digitized mortgage title insurance records — including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts, and driver’s license images.

The documents were available without authentication to anyone with a Web browser.

According to a filing (PDF) by the New York State Department of Financial Services (DFS), the weakness that exposed the documents was first introduced during an application software update in May 2014 and went undetected for years.

Worse still, the DFS found, the vulnerability was discovered in a penetration test First American conducted on its own in December 2018.

“Remarkably, Respondent instead allowed unfettered access to the personal and financial data of millions of its customers for six more months until the breach and its serious ramifications were widely publicized by a nationally recognized cybersecurity industry journalist,” the DFS explained in a statement on the charges.

A redacted screenshot of one of many millions of sensitive records exposed by First American’s Web site.

Reuters reports that the penalties could be significant for First American: The DFS considers each instance of exposed personal information a separate violation, and the company faces penalties of up to $1,000 per violation.

In a written statement, First American said it strongly disagrees with the DFS’s findings, and that its own investigation determined only a “very limited number” of consumers — and none from New York — had personal data accessed without permission.

In August 2019, the company said a third-party investigation into the exposure identified just 32 consumers whose non-public personal information likely was accessed without authorization.

When KrebsOnSecurity asked last year how long it maintained access logs or how far back in time that review went, First American declined to be more specific, saying only that its logs covered a period that was typical for a company of its size and nature.

But in Wednesday’s filing, the DFS said First American was unable to determine whether records were accessed prior to June 2018.

“Respondent’s forensic investigation relied on a review of web logs retained from June 2018 onward,” the DFS found. “Respondent’s own analysis demonstrated that during this 11-month period, more than 350,000 documents were accessed without authorization by automated ‘bots’ or ‘scraper’ programs designed to collect information on the Internet.”

The records exposed by First American would have been a virtual gold mine for phishers and scammers involved in so-called Business Email Compromise (BEC) scams, which often impersonate real estate agents, closing agencies, title and escrow firms in a bid to trick property buyers into wiring funds to fraudsters. According to the FBI, BEC scams are the most costly form of cybercrime today.

First American’s stock price fell more than 6 percent the day after news of their data leak was published here. In the days that followed, the DFS and U.S. Securities and Exchange Commission each announced they were investigating the company.

First American released its first quarter 2020 earnings today. A hearing on the charges alleged by the DFS is slated for Oct. 26.

Kevin RuddBloomberg: US-China Relations Worsen

E&OE TRANSCRIPT
BLOOMBERG
23 JULY 2020

TOM MACKENZIE: Let’s start with your reaction to this latest sequence of events.

KEVIN RUDD: Well, structurally, the US-China relationship is in the worst state it’s been in for about 50 years. It’s 50 years next year since Henry Kissinger undertook his secret diplomacy in Beijing. So, this relationship is in trouble strategically, militarily, diplomatically, politically, economically, in trade, investment and technology, and of course, in the wonderful world of espionage as well. And so, while this is a surprising move against a Chinese consulate-general in the United States, it certainly fits within the fabric of a structural deterioration in the relationship that has been underway now for quite a number of years.

MACKENZIE: So far, Beijing has taken what many would argue is a proportionate response to actions by the US, at least in the last few months. Is there an argument now that this kind of action, calling for the closure of this consulate in Houston, will strengthen the hand of the hardliners here in Beijing, and force them to take a stronger response? What do you think ultimately will be the material reaction from Beijing?

RUDD: Well, on this particular consulate general closure, I think, as night follows day, you’ll see a Chinese decision to close an American consulate general in China. There are a number already within China. I think you would look to see what would happen with the future of the US Consulate General in, say, Shenyang up in the northeast, or in Chengdu in the west, because this tit-for-tat is very much alive in the way China views the political necessity to respond in like form to what the Americans have done. But overall, the Chinese leadership are a very hard-bitten, deeply experienced Marxist-Leninist leadership, who take the broad view of the US-China relationship. They see it as structurally deteriorating. They see it in part as an inevitable reaction to China’s rise. And if you look carefully at some of the internal statements by Xi Jinping in recent months, the Chinese system is gearing up for what it describes internally as 20 to 30 years of growing friction in the US-China relationship, and that will make life difficult for all countries who have deep relationships with both countries.

MACKENZIE: Mike Pompeo, the US Secretary of State was in London talking to his counterparts there, and he called for a coalition with allies. Presumably, that will include at some point Australia, though we have yet to hear from their leaders about the sense of a coalition against China. Do you think this is significant? Do you think this is a shift in US policy? How much traction do you think Mike Pompeo and the Trump administration will get in forming a coalition to push back against China?

RUDD: Well, the truth is, most friends and allies of the United States are waiting to see what happens in the US presidential election. There is a general expectation that President Trump will not be re-elected. Therefore, the attitude of friends and allies of the United States is: well, what will be the policy cost here of an incoming Biden administration in relation to China, in critical areas like the economy, trade, investment, technology and the rest? Bear in mind, however, that what has happened under Xi Jinping’s leadership, since he became leader of the Chinese Communist Party at the end of 2012, is that China has progressively become more assertive in the promotion of its international interests, whether it’s in the South China Sea, the East China Sea, whether it’s in Europe, whether it’s in the United States, whether it’s countries like Australia. And therefore, what is happening is that countries who are now experiencing this for the first time – the real impact of an assertive Chinese foreign policy – are themselves beginning to push back. And so whether it’s with American leadership or not, the bottom line is that what I now observe is that democracies in Europe and democracies in Asia are increasingly in discussion with one another about how to deal with the emerging China challenge to the international rules-based system. That, I think, is happening as a matter of course, whether or not Mike Pompeo seeks to lead it.

DAVID INGLES: Mr Rudd, I’d like to pick it up there. David here, by the way, in Hong Kong. What do you think is the proper way to engage an emerging China? You’ve dealt with them at many levels. You understand how sensitive their past is to their leadership, and how that shapes where they think their country should be, their ambitions. How should the world – let alone the US, let’s set that aside – how should the rest of the world engage an emerging China?

RUDD: Well, you’re right. In one capacity or another, I’ve been dealing with China for the last 35 years, since I first went to work there as an Australian embassy official way back in the 1980s. It’s almost the Mesolithic period now. And I’ve seen the evolution of China’s international posture over that period of time. And certainly, there is a clear dividing line with the emergence of Xi Jinping’s leadership, where China has ceased to hide its strength, bide its time and never take the lead – that was Deng Xiaoping’s axiom for the past. Instead, we see a China under this new leadership which is infinitely more assertive. And so my advice to governments, when they ask me about this, is that governments need a coordinated China strategy of their own – just as China has a strategy for dealing with the rest of the world, including the major countries and economies within it. But the principles of those strategies should be pretty basic. Number one, those of us who are democracies should simply make it plain to the Chinese leadership that that’s our nature, our identity, and we’re not about to change as far as our belief in universal human rights and values is concerned. Number two, most of us are allies of the United States, for historical reasons and current reasons as well. And that’s not going to change either. Number three, we would, however, like to prosecute a mutually beneficial trade, investment and capital-markets relationship with China that works for both of us on the basis of reciprocity in each other’s markets. And four, there are so many global challenges out there at the moment – from the pandemic, through to global climate change action, and on to financial market stability – which require us and China to work together in the major forums of the world, like the G20. I think those principles should govern everyone’s approach to how you deal with this emerging and different China.

The post Bloomberg: US-China Relations Worsen appeared first on Kevin Rudd.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 202.00 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

June was the last month of Jessie LTS which ended on 2020-06-20. If you still need to run Jessie somewhere, please read the post about keeping Debian 8 Jessie alive for longer than 5 years.
So, as (Jessie) LTS is dead, long live the new LTS, Stretch LTS! Stretch has received its last point release, so regular LTS operations can now continue.
Accompanying this, for the first time, we have prepared a small survey about our users and contributors: who they are and why they are using LTS. Filling out the survey should take less than 10 minutes. We would really appreciate it if you could participate in the survey online! We will close the survey on July 27th 2020, so please don’t hesitate to participate now! After that, there will be a followup with the results.

The security tracker for Stretch LTS currently lists 29 packages with a known CVE and the dla-needed.txt file has 44 packages needing an update in Stretch LTS.

Thanks to our sponsors

New sponsors are in bold.

We welcome CoreFiling this month!


Planet DebianEnrico Zini: Build Qt5 cross-builder with raspbian sysroot: compiling with the sysroot (continued)

Lite extra ball, from https://www.flickr.com/photos/st3f4n/143623902

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The previous rounds of attempts ended in one issue too many to investigate in the allocated hourly budget.

Andreas Gruber wrote:

Long story short, a fast solution for the issue with EGLSetBlobFuncANDROID is to remove libraspberrypi-dev from your sysroot and do a full rebuild. There will be some changes to the configure results, so please review them - if they are relevant for you - before proceeding with your work.

That got me unstuck! dpkg --purge libraspberrypi-dev in the sysroot, and we're back in the game.

While Qt5's build has proven extremely fragile, I was surprised that some customization from Raspberry Pi hadn't yet broken something. In the end, they didn't disappoint.

More i386 issues

The run now stops with a new 32-bit issue, related to v8 snapshots:

qt-everywhere-src-5.15.0/qtwebengine/src/core/release$ /usr/bin/g++ -pie -Wl,--fatal-warnings -Wl,--build-id=sha1 -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,-z,defs -Wl,--as-needed -m32 -pie -Wl,--disable-new-dtags -Wl,-O2 -Wl,--gc-sections -o "v8_snapshot/mksnapshot" -Wl,--start-group @"v8_snapshot/mksnapshot.rsp"  -Wl,--end-group  -ldl -lpthread -lrt -lz
/usr/bin/ld: skipping incompatible //usr/lib/x86_64-linux-gnu/libz.so when searching for -lz
/usr/bin/ld: skipping incompatible //usr/lib/x86_64-linux-gnu/libz.a when searching for -lz
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status

Attempted solution: apt install zlib1g-dev:i386.

Alternative solution (untried): configure Qt5 with -no-webengine-v8-snapshot.

It builds!

Installation paths

Now it tries to install files into debian/tmp/home/build/sysroot/opt/qt5custom-armhf/.

I realise that I now need to package the sysroot itself, both as a build-dependency of the Qt5 cross-compiler, and as a runtime dependency of the built cross-builder.

Conclusion

The current work in progress, patches, and all, is at https://github.com/Truelite/qt5custom/tree/master/debian-cross-qtwebengine

It blows my mind how ridiculously broken the Qt5 cross-compiler build is, for a use case that, judging by how many people are trying it, seems to be one of the main ones for the cross-builder.

CryptogramAdversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming with ML researchers increasingly targeting commercial ML systems such as those used in Facebook, Tesla, Microsoft, IBM, Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that Adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

Medium post on the paper. News article, which uses our graphic without attribution.

Kevin RuddCNN: America, Britain and China

E&OE TRANSCRIPT
TELEVISION INTERVIEW
QUEST MEANS BUSINESS, CNN
22 JULY 2020

Richard Quest
Kevin Rudd, very good to see you. The strategy that China is now employing to put pressure — you’ve already seen the result of what the US sanctions on China has done — so now what happens with Australia?

Kevin Rudd
Well, it’s important, Richard, to understand what’s driving, I think, Chinese government decision-making, not just on the Hong Kong question, but more broadly on a number of other significant relationships which China has in the world. Since COVID-19 hit, what many of us have observed is China doubling down hard in a nationalist direction across a whole range of its external relationships, whether it’s with Canada, Australia or the United Kingdom, but also over questions like Hong Kong, the South China Sea, Taiwan, and look at what has most recently happened on the China-India border. And so, therefore, we see a much harder Chinese response across the board. And it’s inevitable, in my judgment, that this is going to generate reactions of the type that we’ve seen from governments from Canberra to London to other points in between.

Richard Quest
But is China in danger of fighting on too many fronts? It’s got its enormous trade war with the United States. It’s now, of course, got the problems over Hong Kong, which will add more potential sanctions and tariffs. Now it’s got its row with the UK and, of course, its recent row with Australia. So at what point, in your view, does China have to start rowing back?

Kevin Rudd
Well, it’s an important question for the Chinese leadership now in August, Richard, because in August they retreat east of Beijing for a month of high-level meetings amongst the extended central leadership. And the central questions on the agenda for this upcoming set of meetings will be a) the state of the US-China relationship, which for them is central to everything, b) the relationship with other principal countries like the UK, and c) the unstated topic: has China gone too far? In Chinese strategic literature, there’s an expression, just as you mentioned before, Richard: it’s not sensible to fight on multiple fronts simultaneously. So there’s an internal debate in China at the moment about whether, in fact, the current strategy is the right one. And the impact of these decisions, including the most recent British ones, both the impending decision on Huawei and the one on Hong Kong, will feed into that debate.

Richard Quest
But Kevin, whether it’s wise or not, and bearing in mind that China has enormous problems at home, it’s not as if President Xi has an electorate, or a populace I should say, that’s entirely behind him. But he seems determined to prosecute these disagreements with other nations, whatever the cost, and I suggest to you that’s because he doesn’t have to face an electorate, like all the rest of them have to.

Kevin Rudd
But the bottom line, Richard, is that you then see the economic impact of China being progressively, as it were, imperilled in its principal economic relationships abroad. The big debate in Beijing, for example, with the US-China trade war in the last two years has been: has China pushed too far and thereby generated the magnitude of this American reaction? Parallel logic on Huawei, parallel logic in terms of the Hong Kong national security law. So your point goes to whether Xi Jinping is domestically immune from pressure. Well, yes, China is not a liberal democracy. We all know that. It never has been, at least since 1949, and for a long time before that as well. But there are pressures within the Communist Party at the level of sheer pragmatism, which is: is this sustainable in terms of China’s economic interests? Remember, 38% of Chinese gross domestic product is generated through the traded sector of its economy. It has an unfolding balance-of-payments challenge, and is therefore exposed to any potential financial sanctions coming out of the Hong Kong national security law, from Washington in particular. China therefore experiences the economic impact, which then feeds into its domestic political debate within the Communist Party.

Journalist
Kevin Rudd joining us.

The post CNN: America, Britain and China appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Step too Var

Astor works for a company that provides software for research surveys. Someone needed a method to return a single control object from a list of control objects, so they wrote this C# code:

 
private ResearchControl GetResearchControlFromListOfResearchControls(int theIndex, 
    List<ResearchControl> researchControls)
{
    var result = new ResearchControl();
    result = researchControls[theIndex];
    return result;
}

Astor has a theory: “I can only guess the author was planning to return some default value in some case…”

I’m sorry, Astor, but you are mistaken. Honestly, if that were the case, I wouldn’t consider this much of a WTF at all, but here we have a subtle hint about deeper levels of ignorance, and it’s all encoded in that little var.

C# is strongly typed, but declaring the type of every variable is a pain, and in many cases it’s redundant information. So C# lets you declare a variable with var, which performs type inference: a var variable still has a definite type, but instead of stating it, you ask the compiler to figure it out from context.

But you have to give it that context, which means you have to declare and assign to the variable in a single step.

So, imagine you’re a developer who doesn’t know C# very well. Maybe you know some JavaScript, and you’re just trying to muddle through.

“Okay, I need a variable to hold the result. I’ll type var result. Hmm. Syntax error. Why?”

The developer skims through the code, looking for similar statements, and sees a var / new construct, and thinks, “Ah, that must be what I need to do!” So var result = new ResearchControl() appears, and the syntax error goes away.

Now, that doesn’t explain all of this code. There are still more questions, like: why not just return researchControls[theIndex], or realize that, since you’re just indexing into a list, you don’t need a function at all? Maybe someone had thoughts about adding exception handling, or returning a default value in cases where there wasn’t a valid entry in the list, but none of that ever happened. Instead, we just get this little artifact of someone who didn’t know better, and who wasn’t given any direction on how to do better.
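The declare-and-assign-in-one-step rule is easy to demonstrate. As a rough cross-language sketch (Java rather than C#, since Java's local-variable type inference via var, added in Java 10, enforces the same constraint):

```java
import java.util.List;

public class VarDemo {
    public static void main(String[] args) {
        // var result;  // would not compile: no initializer to infer a type from
        var controls = List.of("age", "gender", "income"); // inferred as List<String>
        var result = controls.get(1);                      // inferred as String
        System.out.println(result);
    }
}
```

Compiled and run on Java 11 or later, this prints gender; uncommenting the first line turns it into the same compile error the original developer presumably hit before reaching for new.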



Planet DebianJunichi Uekawa: Joys of sshfs slave mode.

Joys of sshfs slave mode. When I want to have parts of my source tree on a remote machine, I use sshfs slave mode; combined with Emacs tramp, things look very well integrated. The sshfs interface only has the obnoxious -o slave option, which makes sshfs talk over stdin/stdout, and those need to be connected to an sftp-server running on the local host. Using dpipe from vde2 seems to be a popular way to wire the two together. Something like:

dpipe /usr/lib/openssh/sftp-server = ssh hostname sshfs :/directory/to/be/shared ~/mnt/src -o slave

I wish I could limit what sftp-server exposes, but maybe that's okay.

Krebs on SecurityTwitter Hacking for Profit and the LoLs

The New York Times last week ran an interview with several young men who claimed to have had direct contact with those involved in last week’s epic hack against Twitter. These individuals said they were only customers of the person who had access to Twitter’s internal employee tools, and were not responsible for the actual intrusion or bitcoin scams that took place that day. But new information suggests that at least two of them operated a service that resold access to Twitter employees for the purposes of modifying or seizing control of prized Twitter profiles.

As first reported here on July 16, prior to the bitcoin scam messages being blasted out from such high-profile Twitter accounts as @barackobama, @joebiden, @elonmusk and @billgates, several highly desirable short-character Twitter account names changed hands, including @L, @6 and @W.

A screenshot of a Discord discussion between the key Twitter hacker “Kirk” and several people seeking to hijack high-value Twitter accounts.

Known as “original gangster” or “OG” accounts, short-character profile names confer a measure of status and wealth in certain online communities, and such accounts can often fetch thousands of dollars when resold in the underground.

The people involved in obtaining those OG accounts on July 15 said they got them from a person identified only as “Kirk,” who claimed to be a Twitter employee. According to The Times, Kirk first reached out to the group through a hacker who used the screen name “lol” on OGusers, a forum dedicated to helping users hijack and resell OG accounts from Twitter and other social media platforms. From The Times’s story:

“The hacker ‘lol’ and another one he worked with, who went by the screen name ‘ever so anxious,’ told The Times that they wanted to talk about their work with Kirk in order to prove that they had only facilitated the purchases and takeovers of lesser-known Twitter addresses early in the day. They said they had not continued to work with Kirk once he began more high-profile attacks around 3:30 p.m. Eastern time on Wednesday.

‘lol’ did not confirm his real-world identity, but said he lived on the West Coast and was in his 20s. “ever so anxious” said he was 19 and lived in the south of England with his mother.

Kirk connected with “lol” late Tuesday and then “ever so anxious” on Discord early on Wednesday, and asked if they wanted to be his middlemen, selling Twitter accounts to the online underworld where they were known. They would take a cut from each transaction.”

Twice in the past year, the OGUsers forum was hacked, and both times its database of usernames, email addresses and private messages was leaked online. A review of the private messages for “lol” on OGUsers provides a glimpse into the vibrant market for the resale of prized OG accounts.

On OGUsers, lol was known to other members as someone who had a direct connection to one or more people working at Twitter who could be used to help fellow members gain access to Twitter profiles, including those that had been suspended for one reason or another. In fact, this was how lol introduced himself to the OGUsers community when he first joined.

“I have a twitter contact who I can get users from (to an extent) and I believe I can get verification from,” lol explained.

In a direct message exchange on OGUsers from November 2019, lol is asked for help from another OGUser member whose Twitter account had been suspended for abuse.

“hello saw u talking about a twitter rep could you please ask if she would be able to help unsus [unsuspend] my main and my friends business account will pay 800-1k for each,” the OGUsers member inquires of lol.

Lol says he can’t promise anything but will look into it. “I sent her that, not sure if I will get a reply today bc its the weekend but ill let u know,” Lol says.

In another exchange, an OGUser denizen quizzes lol about his Twitter hookup.

“Does she charge for escalations? And how do you know her/what is her department/job. How do you connect with them if I may ask?”

“They are in the Client success team,” lol replies. “No they don’t charge, and I know them through a connection.”

As for how he got access to the Twitter employee, lol declines to elaborate, saying it’s a private method. “It’s a lil method, sorry I cant say.”

In another direct message, lol asks a fellow OGUser member to edit a comment in a forum discussion which included the Twitter account “@tankska,” saying it was his IRL (in real life) Twitter account and that he didn’t want to risk it getting found out or suspended (Twitter says this account doesn’t exist, but a simple text search on Twitter shows the profile was active until late 2019).

“can u edit that comment out, @tankska is a gaming twitter of mine and i dont want it to be on ogu :D’,” lol wrote. “just dont want my irl getting sus[pended].”

Still another OGUser member would post lol’s identifying information into a forum thread, calling lol by his first name — “Josh” — in a post asking lol what he might offer in an auction for a specific OG name.

“Put me down for 100, but don’t note my name in the thread please,” lol wrote.

WHO IS LOL?

The information in lol’s OGUsers registration profile indicates he was probably being truthful with The Times about his location. The hacked forum database shows a user “tankska” registered on OGUsers back in July 2018, but only made one post asking about the price of an older Twitter account for sale.

The person who registered the tankska account on OGUsers did so with the email address jperry94526@gmail.com, and from an Internet address tied to the San Ramon Unified School District in Danville, Calif.

According to 4iq.com, a service that indexes account details like usernames and passwords exposed in Web site data breaches, the jperry94526 email address was used to register accounts at several other sites over the years, including one at the apparel store Stockx.com under the profile name Josh Perry.

Tankska was active only briefly on OGUsers, but the hacked OGUsers database shows that “lol” changed his username three times over the years. Initially, it was “freej0sh,” followed by just “j0sh.”

lol did not respond to requests for comment sent to email addresses tied to his various OGU profiles and Instagram accounts.

ALWAYS IN DISCORD

Last week’s story on the Twitter compromise noted that just before the bitcoin scam tweets went out, several OG usernames changed hands. The story traced screenshots of Twitter tools posted online back to a moniker that is well-known in the OGUsers circle: PlugWalkJoe, a 21-year-old from the United Kingdom.

Speaking with The Times, PlugWalkJoe — whose real name is Joseph O’Connor — said while he acquired a single OG Twitter account (@6) through one of the hackers in direct communication with Kirk, he was otherwise not involved in the conversation.

“I don’t care,” O’Connor told The Times. “They can come arrest me. I would laugh at them. I haven’t done anything.”

In an interview with KrebsOnSecurity, O’Connor likewise asserted his innocence, suggesting at least a half dozen other hacker handles that may have been Kirk or someone who worked with Kirk on July 15, including “Voku,” “Crim/Criminal,” “Promo,” and “Aqua.”

“That twit screenshot was the first time in a while I joke[d], and evidently I shouldn’t have,” he said. “Joking is what got me into this mess.”

O’Connor shared a number of screenshots from a Discord chat conversation on the day of the Twitter hack between Kirk and two others: “Alive,” which is another handle used by lol, and “Ever So Anxious.” Both were described by The Times as middlemen who sought to resell OG Twitter names obtained from Kirk. O’Connor is referenced in these screenshots as both “PWJ” and by his Discord handle, “Beyond Insane.”

The negotiations over highly-prized OG Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams.

Ever So Anxious told Kirk his OGU nickname was “Chaewon,” which corresponds to a user in the United Kingdom. Just prior to the Twitter compromise, Chaewon advertised a service on the forum that could change the email address tied to any Twitter account for around $250 worth of bitcoin. O’Connor said Chaewon also operates under the hacker alias “Mason.”

“Ever So Anxious” tells Kirk his OGUsers handle is “Chaewon,” and asks Kirk to modify the display names of different OG Twitter handles to read “lol” and “PWJ”.

At one point in the conversation, Kirk tells Alive and Ever So Anxious to send funds for any OG usernames they want to this bitcoin address. The payment history of that address shows that it indeed also received approximately $180,000 worth of bitcoin from the wallet address tied to the scam messages tweeted out on July 15 by the compromised celebrity accounts.

The Twitter hacker “Kirk” telling lol/Alive and Chaewon/Mason/Ever So Anxious where to send the funds for the OG Twitter accounts they wanted.

SWIMPING

My July 15 story observed there were strong indications that the people involved in the Twitter hack have connections to SIM swapping, an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

The account “@shinji,” a.k.a. “PlugWalkJoe,” tweeting a screenshot of Twitter’s internal tools interface.

SIM swapping was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As recounted by Wired.com, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

Immediately after Jack Dorsey’s Twitter handle was hijacked, the hackers tweeted out several shout-outs, including one to @PlugWalkJoe. O’Connor told KrebsOnSecurity he has never been involved in SIM swapping, although that statement was contradicted by two law enforcement sources who closely track such crimes.

However, Chaewon’s private messages on OGusers indicate that he very much was involved in SIM swapping. Use of the term “SIM swapping” was not allowed on OGusers, and the forum administrators created an automated script that would watch for anyone trying to post the term into a private message or discussion thread.

The script would replace the term with “I do not condone illegal activities.” Hence, a portmanteau was sometimes used: “Swimping.”

“Are you still swimping?” one OGUser member asks of Chaewon on Mar. 24, 2020. “If so and got targs lmk your discord.” Chaewon responds in the affirmative, and asks the other user to share his account name on Wickr, an encrypted online messaging app that automatically deletes messages after a few days.

Chaewon/Ever So Anxious/Mason did not respond to requests for comment.

O’Connor told KrebsOnSecurity that one of the individuals thought to be associated with the July 15 Twitter hack — a young man who goes by the nickname “Voku” — is still actively involved in SIM-swapping, particularly against customers of AT&T and Verizon.

Voku is one of several hacker handles used by a Canton, Mich. youth whose mom turned him in to the local police in February 2018 when she overheard him talking on the phone and pretending to be an AT&T employee. Officers responding to the report searched the residence and found multiple cell phones and SIM cards, as well as files on the kid’s computer that included “an extensive list of names and phone numbers of people from around the world.”

The following month, Michigan authorities found the same individual accessing personal consumer data via public Wi-Fi at a local library, and seized 45 SIM cards, a laptop and a Trezor wallet — a hardware device designed to store cryptocurrency account data. In April 2018, Voku’s mom again called the cops on her son — identified only as confidential source #1 (“CS1”) in the criminal complaint against him — saying he’d obtained yet another mobile phone.

Voku’s cooperation with authorities led them to bust up a conspiracy involving at least nine individuals who stole millions of dollars worth of cryptocurrency and other items of value from their targets.

CONSPIRACY

Samy Tarazi, an investigator with the Santa Clara County District Attorney’s Office, has spent hundreds of hours tracking young hackers during his tenure with REACT, a task force set up to combat SIM swapping and bring SIM swappers to justice.

According to Tarazi, multiple actors in the cybercrime underground are constantly targeting people who work in key roles at major social media and online gaming platforms, from Twitter and Instagram to Sony, Playstation and Xbox.

Tarazi said some people engaged in this activity seek to woo their targets, sometimes offering them bribes in exchange for the occasional request to unban or change the ownership of specific accounts.

All too often, however, employees at these social media and gaming platforms find themselves the object of extremely hostile and persistent personal attacks that threaten them and their families unless and until they give in to demands.

“In some cases, they’re just hitting up employees saying, ‘Hey, I’ve got a business opportunity for you, do you want to make some money?'” Tarazi explained. “In other cases, they’ve done everything from SIM swapping and swatting the victim many times to posting their personal details online or extorting the victims to give up access.”

Allison Nixon is chief research officer at Unit 221B, a cyber investigations company based in New York. Nixon says she doesn’t buy the idea that PlugWalkJoe, lol, and Ever So Anxious are somehow less culpable in the Twitter compromise, even if their claims of not being involved in the July 15 Twitter bitcoin scam are accurate.

“You have the hackers like Kirk who can get the goods, and the money people who can help them profit — the buyers and the resellers,” Nixon said. “Without the buyers and the resellers, there is no incentive to hack into all these social media and gaming companies.”

Mark Rasch, Unit 221B’s general counsel and a former U.S. federal prosecutor, said all of the players involved in the Twitter compromise of July 15 can be charged with conspiracy, a legal concept in the criminal statute which holds that any co-conspirators are liable for the acts of any other co-conspirator in furtherance of the crime, even if they don’t know who those other people are in real life or what else they may have been doing at the time.

“Conspiracy has been called the prosecutor’s friend because it makes the agreement the crime,” Rasch said. “It’s a separate crime in addition to the underlying crime, whether it be breaking in to a network, data theft or account takeover. The ‘I just bought some usernames and gave or sold them to someone else’ excuse is wrong because it’s a conspiracy and these people obviously don’t realize that.”

In a statement on its ongoing investigation into the July 15 incident, Twitter said it resulted from a small number of employees being manipulated through a social engineering scheme. Twitter said at least 130 accounts were targeted by the attackers, who succeeded in sending out unauthorized tweets from 45 of them and may have been able to view additional information about those accounts, such as direct messages.

On eight of the compromised accounts, Twitter said, the attackers managed to download the account history using the Your Twitter Data tool. Twitter added that it is working with law enforcement and is rolling out additional company-wide training to guard against social engineering tactics.

CryptogramFawkes: Digital Image Cloaking

Fawkes is a system for manipulating digital images so that they aren't recognized by facial recognition systems.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.

Research paper.

Planet DebianBits from Debian: Let's celebrate DebianDay 2020 around the world

We encourage our community to celebrate the 27th Debian anniversary around the world with organized DebianDay events. This year, due to the COVID-19 pandemic, we cannot organize in-person events, so we ask instead that contributors, developers, teams, groups, maintainers, and users promote The Debian Project and Debian activities online on August 16th (and/or 15th).

Communities can organize a full schedule of online activities throughout the day. These activities can include talks, workshops, active participation with contributions such as translation assistance or editing, debates, BoFs, and all of this in your local language, using tools such as Jitsi for capturing audio and video from presenters for later streaming to YouTube.

If you are not aware of any local community organizing a full event or you don't want to join one, you can solo design your own activity using OBS and stream it to YouTube. You can watch an OBS tutorial here.

Don't forget to record your activity; it would be a nice idea to upload it to Peertube later.

Please add your event/activity to the DebianDay wiki page and let us know about it, so we can advertise it on Debian micronews. To share it, you have several options:

  • Follow the steps listed here for Debian Developers.
  • Contact us using IRC in channel #debian-publicity on the OFTC network, and ask us there.
  • Send a mail to debian-publicity@lists.debian.org and ask for your item to be included in micronews. This is a publicly archived list.

PS: DebConf20 online is coming! It will be held from August 23rd to 29th, 2020. Registration is already open.

Planet DebianEnrico Zini: nc | sudo

Question: what does this command do?

# Don't do this
nc localhost 12345 | sudo tar xf -

Answer: it sends the password typed into sudo to the other endpoint of netcat.

I can reproduce this with both nc.traditional and nc.openbsd.
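The mechanism: the first command of a pipeline inherits the shell's stdin (usually your terminal), so nc is reading the terminal at the same time sudo is prompting on it, and the bytes you type go to nc. A harmless sketch, with printf simulating the typed password and cat standing in for nc (which would forward the bytes over the network rather than down a pipe):

```shell
# cat stands in for nc: it forwards everything arriving on its stdin.
# printf simulates the password you would have typed at the sudo prompt.
out=$(printf 'MyS3cret\n' | { cat | tr 'a-z' 'A-Z'; })
echo "$out"   # prints MYS3CRET: the "password" travelled through the left command
```

Closing nc's stdin (as in the fixes below with `<&-` or `< /dev/null`) works precisely because it removes that local read: nothing typed at the terminal can reach nc any more.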

One might be tempted to just put sudo in front of everything, but it'll mean that only nc will run as root:

# This is probably not what you want
sudo nc localhost 12345 | tar xf -

The fix that I will never remember, thanks to twb on IRC, is to close nc's stdin:

<&- nc localhost 12345 | sudo tar xf -

Or flip the table and just use sudo -s:

$ sudo -s
# nc localhost 12345 | tar xf -

Updates

Harald Koenig suggested two alternative spellings that might be easier to remember:

nc localhost 12345 < /dev/null | sudo tar xf -
< /dev/null nc localhost 12345 | sudo tar xf -

And thinking along those lines, there could also be the disappointed face variant:

:| nc localhost 12345 | sudo tar xf -

Matthias Urlichs suggested the approach of precaching sudo's credentials, making the rest of the command lines more straightforward (and TIL: sudo id):

sudo id
nc localhost 12345 | sudo tar xf -

Or even better:

sudo id && nc localhost 12345 | sudo tar xf -

Shortcomings of nc | tar

Tomas Janousek commented:

There's one more problem with a plain tar | nc | tar without redirection or extra parameters: it doesn't know when to stop. So the best way to use it, I believe, is:

tar c somedir | nc -N host 12345    # sending side

nc -d -l 12345 | tar x              # receiving side

The -N option terminates the sending end of the connection, and the -d option tells the receiving netcat to never read any input. These two parameters, I hope, should also fix your sudo/password problem.

Hope it helps!

Worse Than FailureScience Is Science

Oil well

Bruce worked for a small engineering consultant firm providing custom software solutions for companies in the industrial sector. His project for CompanyX involved data consolidation for a new oil well monitoring system. It was a two-phased approach: Phase 1 was to get the raw instrument data into the cloud, and Phase 2 was to aggregate that data into a useful format.

Phase 1 was completed successfully. When it came time to write the business logic for aggregating the data, CompanyX politely informed Bruce's team that their new in-house software team would take over from here.

Bruce and his team smelled trouble. They did everything they could think of to persuade CompanyX not to go it alone when all the expertise rested on their side. However, CompanyX was confident they could handle the job, parting ways with handshakes and smiles.

Although Phase 2 was officially no longer on his plate, Bruce had a suspicion borne from experience that this wasn't the last he'd hear from CompanyX. Sure enough, a month later he received an urgent support request via email from Rick, an electrical engineer.

We're having issues with our aggregated data not making it into the database. Please help!!

Rick Smith
LEAD SOFTWARE ENGINEER

"Lead Software Engineer!" Bruce couldn't help repeating out loud. Sadly, he'd seen this scenario before with other clients. In a bid to save money, their management would find the most sciency people on their payroll and would put them in charge of IT or, worse, programming.

Stifling a cringe, Bruce dug deeper into the email. Rick had written a Python script to read the raw instrument data, aggregate it in memory, and re-insert it into a table he'd added to the database. Said script was loaded with un-parameterized queries, filters on non-indexed fields, and SELECT * FROM queries. The aggregation logic was nothing to write home about, either. It was messy, slow, and a slight breeze could take it out. Bruce fired up the SQL profiler and found a bigger issue: a certain query was failing every time, throwing the error Cannot insert the value NULL into column 'requests', table 'hEvents'; column does not allow nulls. INSERT fails.

Well, that seemed straightforward enough. Bruce replied to Rick's email, asking if he knew about the error.

Rick's reply came quickly, and included someone new on the email chain. Yes, but we couldn't figure it out, so we were hoping you could help us. Aaron is our SQL expert and even he's stumped.

Product support was part of Bruce's job responsibilities. He helpfully pointed out the specific query that was failing and described how to use the SQL profiler to pinpoint future issues.

Unfortunately, CompanyX's crack new in-house software team took this opportunity to unload every single problem they were having on Bruce, most of them just as basic or even more basic than the first. The back-and-forth email chain grew to epic proportions, and had less to do with product support than with programming education. When Bruce's patience finally gave out, he sent Rick and Aaron a link to the W3 schools SQL tutorial page. Then he talked to his manager. Agreeing that things had gotten out of hand, Bruce's manager arranged for a BA to contact CompanyX to offer more formal assistance. A teleconference was scheduled for the next week, which Bruce and his manager would also be attending.

When the day of the meeting came, Bruce and his associates dialed in—but no one from CompanyX did. After some digging, they learned that the majority of CompanyX's software team had been fired or reassigned. Apparently, the CompanyX project manager had been BCC'd on Bruce's entire email chain with Rick and Aaron. Said PM had decided a new new software team was in order. The last Bruce heard, the team was still "getting organized." The fate of Phase 2 remains unknown.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianAndrew Cater: How to use jigdo to download media images

I worked with the CD release team over the weekend for the final release of Debian Stretch. One problem: we have media images which we cannot test because the team does not have the hardware. I asked questions on the debian-cd mailing list about the future of these and various other .iso images.

Could we replace some DVDs and larger files with smaller jigdo files so that people can download files to build the DVD locally?

People asked me:
  • How do you actually use jigdo to produce a usable media image? 
  • What advantages does jigdo bring over just downloading a large .iso image?
Why jigdo?
  • Downloading large files on a slow or lossy link is difficult.
  • Downloading large (several GB) files via http can be unreliable.
  • Jigdo can be quicker than trying to download one large file and failing.
  • There are few CD mirrors worldwide: jigdo can use a local Debian mirror.
  • The transport mechanism is http - no need for a particular port to be opened.
Using jigdo

Jigdo uses information from a template file to reconstruct an .iso file by downloading Debian packages from a mirror. The image is checksummed and verified at the end of the complete download. If the download is interrupted, you can import the previously downloaded part of the file.

It's a command-line application (the GUI never really happened) but it is fairly easy to use. Run apt install jigdo-file, then find the .jigdo and .template files that you need for the image from a CD mirror: https://cdimage.debian.org/debian-cd/current/amd64/jigdo-cd/

To build the netinst CD for AMD64, for example, you need the .jigdo file as a minimum: debian-10.4.0-amd64-netinst.jigdo

If you only have this file, jigdo-lite will download the template later, but you can save the template in the same directory and save time. The jigdo file is only 25k or so and the template is 4.6M, rather than 336M for the full image. I copied them into my home directory to build there. The process does not need root permissions.

Run the command jigdo-lite. This prompts you for a .jigdo file to use. By default, this uses http to fetch the file from a distant webserver.
(If the files are local, you can use the file:/// syntax. For example: file:///home/amacater/debian-10.4.0-amd64-netinst.jigdo)

jigdo-lite then reads the .jigdo file and outputs some information about the .iso. It offers the chance to resume any failed download, then prompts for a mirror name. The download pulls in small numbers of files at a time, saves them to a temporary directory, and checksums the eventual .iso file.

This will work for any larger file, including the 16GB .iso distributed only as a .jigdo.

For i386 and amd64, the images are bootable when copied to a USB stick. Use dd to write them and verify the copy.
  • Plug in a USB that can be overwritten.
  • Use dmesg as root to work out which device this is.
  • Change to the directory in which you have your .iso image.
  • Write the image to the stick in 4M blocks and display progress with the syntax of the command below (all one line if wrapped).

dd if=debian-10.4.0-amd64-netinst.iso of=/dev/sdX obs=4M oflag=sync status=progress
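To verify the copy, read back exactly the image's size from the stick and compare checksums. A minimal sketch, using two scratch files to stand in for the .iso and /dev/sdX (names and sizes are hypothetical, so the demo runs without root or real hardware):

```shell
# stand-ins: a 64K "image" and a scratch file playing the role of /dev/sdX
iso=$(mktemp); stick=$(mktemp)
dd if=/dev/urandom of="$iso" bs=1K count=64 2>/dev/null

# write the image, as in the command above (small blocks for the small demo)
dd if="$iso" of="$stick" bs=4K oflag=sync 2>/dev/null

# a real stick is larger than the image, so read back only the image's size
size=$(stat -c %s "$iso")
if [ "$(head -c "$size" "$stick" | sha256sum)" = "$(sha256sum < "$iso")" ]; then
    echo "copy verified"
fi
rm -f "$iso" "$stick"
```

Against a real device, the same comparison applies with the .iso file and /dev/sdX in place of the two scratch files.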




Planet DebianJonathan Dowland: FlashFloppy OLED display

This is the tenth part in a series of blog posts. The previous post was Amiga floppy recovery project: what next?. The whole series is available here: Amiga.

Rotary encoder, OLED display and mount


I haven't made any substantive progress on my Amiga floppy recovery project for a while, but I felt like some retail therapy a few days ago so I bought a rotary encoder and OLED display for the Gotek floppy disk emulator along with a 3D-printed mount for them. I'm pleased with the results! The rather undescriptive "DSKA0001" in the picture is a result of my floppy image naming scheme: the display is capable of much more useful labels such as "Lemmings", "Deluxe Paint IV", etc.

The Gotek and all the new bits can now be moved inside the Amiga A500's chassis.

Planet DebianBits from Debian: New Debian Developers and Maintainers (May and June 2020)

The following contributors got their Debian Developer accounts in the last two months:

  • Richard Laager (rlaager)
  • Thiago Andrade Marques (andrade)
  • Vincent Prat (vivi)
  • Michael Robin Crusoe (crusoe)
  • Jordan Justen (jljusten)
  • Anuradha Weeraman (anuradha)
  • Bernelle Verster (indiebio)
  • Gabriel F. T. Gomes (gabriel)
  • Kurt Kremitzki (kkremitzki)
  • Nicolas Mora (babelouest)
  • Birger Schacht (birger)
  • Sudip Mukherjee (sudip)

The following contributors were added as Debian Maintainers in the last two months:

  • Marco Trevisan
  • Dennis Braun
  • Stephane Neveu
  • Seunghun Han
  • Alexander Johan Georg Kjäll
  • Friedrich Beckmann
  • Diego M. Rodriguez
  • Nilesh Patra
  • Hiroshi Yokota

Congratulations!

CryptogramHacking a Power Supply

This hack targets the firmware on modern power supplies. (Yes, power supplies are also computers.)

Normally, when a phone is connected to a power brick with support for fast charging, the phone and the power adapter communicate with each other to determine the proper amount of electricity that can be sent to the phone without damaging the device -- the more juice the power adapter can send, the faster it can charge the phone.

However, by hacking the fast charging firmware built into a power adapter, Xuanwu Labs demonstrated that bad actors could potentially manipulate the power brick into sending more electricity than a phone can handle, thereby overheating the phone, melting internal components, or as Xuanwu Labs discovered, setting the device on fire.

Research paper, in Chinese.

Planet DebianEnrico Zini: Screen ghosts

I noticed an odd effect, that reminds me of screen ghosting on old CRT monitors, when my laptop screen is locked:

Those faint white vertical lines one can see, are actually window borders, and the lock screen is leaking contents of my unlocked desktop! Here is the same screen, unlocked:

However, moving the windows around does not reflect on the ghost image on top of the lock screen: reshuffling windows then locking, produces always the same ghost image. The white border reflects where the window has been at some time in the past.

The lock screen does not seem to be responsible, either: dragging a solid colored window on the laptop screen has the same effect:

But taking a screenshot of it does not show the time traveling ghost windows:

This is happening on two different laptops, an HP EliteBook x360 G1 and a Lenovo ThinkPad X240: one that I've been using for three years, and one that I've been using for a week. The only things they have in common are a 1920x1080 IPS screen and an Intel GPU.

I have no idea where to start debugging this. Please reach out to me at enrico@debian.org if any of this makes sense to you.

Update: Jim Paris pointed me to https://en.wikipedia.org/wiki/Image_persistence which looks pretty much like what is happening here.

Jim Paris also pointed out that a black background doesn't show the ghosting.

I changed the lock screen background by editing /etc/lightdm/lightdm-gtk-greeter.conf and adding background=#000000 to the [greeter] section, to limit information leakage through ghosting in the lock screen.
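For reference, the resulting greeter configuration is a one-line addition (a sketch of the relevant stanza; the file already exists on a stock lightdm-gtk-greeter install):

```ini
# /etc/lightdm/lightdm-gtk-greeter.conf
[greeter]
background=#000000
```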

Kevin RuddAFR: Seven Questions Morrison Must Answer

On the eve the government’s much-delayed financial statement, it’s time for some basic questions about Australia’s response.

The uncomfortable truth is that we are still in the economic equivalent of ‘‘the phoney war’’ between September 1939 and May 1940. Our real problem is not now but the fourth quarter of this year, and next year, by when temporary measures will have washed through, while globally the real economy will still be wrecked. But there’s no sign yet of a long-term economic strategy, centred on infrastructure, to rebuild business confidence to start investing and re-employing people.

So while Scott Morrison may look pleased with himself (after months of largely uncritical media, a Parliament that barely meets and a delayed budget) it’s time for some intellectual honesty in what passes for our public policy debate. So here are seven questions for Scotty to answer.

First, the big one. It’s well past time to come fully clean on the two dreaded words of Australian politics: debt and deficit. How on earth can Morrison’s Liberal Party and its coalition partner, the Murdoch party, justify their decade-long assault on public expenditure and investment in response to an existential financial and economic crisis?

Within nine months of taking office, we had to deal with a global financial crisis that threatened our banks, while avoiding mass unemployment. We avoided economic and social disaster by … borrowing. In total, we expended $88 billion, taking our federal net debt to 13 per cent of GDP by 2014 – while still sustaining our AAA credit rating.

Four months into the current crisis, Morrison has so far allocated $259 billion, resulting in a debt-to-GDP ratio of about 50 per cent and rising. We haven’t avoided recession – partly because Morrison had to be dragged kicking and screaming late to the stimulus table. He ignored Reserve Bank and Treasury advice to act earlier because it contradicted the Liberals’ political mantra of getting ‘‘back in black’’.

On debt and deficit, this emperor has no clothes. Indeed, the gargantuan nature of this stimulus strategy has destroyed the entire edifice of Liberal ideology and politics. No wonder Scotty from Marketing now talks of us being ‘‘beyond ideology’’: he no longer has one. He’s adopted social democracy instead, including the belated rediscovery that the agency of the state is essential in the economy, public health and broadband. Where would we be on online delivery of health, education and business Zoom services in the absence of our NBN, despite the Liberals botching its final form?

So Morrison and the Murdoch party should just admit their dishonest debt-anddeficit campaign was bullshit all along – a political myth manufactured to advance the proposition that Labor governments couldn’t manage the economy.

Then there’s Morrison’s claim that his mother-of-all-stimulus-strategies, unlike ours, is purring like a well-oiled machine without a wasted dollar. What about the monumental waste of paying $19,500 to young people who were previously working only part-time for less than half that amount? All part of a $130 billion program that suddenly became $70 billion after a little accounting error (imagine the howls of ‘‘incompetence’’ had we done that).

And let’s not forget the eerie silence surrounding the $40 billion ‘‘loans program’’ to businesses. If that’s administered with anything like the finesse we’ve seen with the $100 million sports rorts affair, heaven help the Auditor-General. Then there’s Stuart ‘‘Robodebt’’ Robert and the rolling administrative debacle that is Centrelink. Public administration across the board is just rolling along tickety-boo.

Third, the $30 billion snatch-and-grab raid (so far) on superannuation balances is the most financially irresponsible assault on savings since Federation. Paul Keating built a $3.1 trillion national treasure. I added to it by lifting the super guarantee from 9 per cent to 12 per cent, which the Liberals are seeking to wreck. The long-term damage this will do to the fiscal balance (age pensions), the balance of payments and our credit rating is sheer economic vandalism.

Fourth, industry policy. Yes to bailouts for regional media, despite Murdoch using COVID-19 to kill more than 100 local and regional papers nationwide. But no JobKeeper for the universities, one of our biggest export industries. Why? Ideology! The Liberals hate universities because they worry educated people become lefties. It’s like the Liberals killing off Australian car manufacturing because they hated unions, despite the fact our industry was among the least subsidised in the world.

Fifth, Morrison proclaimed an automatic ‘‘snapback’’ of his capital-S stimulus strategy after six months to avoid the ‘‘mistakes’’ of my government in allowing ours to taper out over two years. Looks like Scotty has been mugged by reality again. Global recessions have a habit of ignoring domestic political fiction.

Sixth, infrastructure. For God’s sake, we should be using near-zero interest rates to deploy infrastructure bonds and invest in our economic future. Extend the national transmission grid to accommodate industrial-scale solar. Admit the fundamental error of abandoning fibre for copper broadband and complete the NBN as planned. The future global economy will become more digital, not less. Use Infrastructure Australia (not the Nationals) to advise on the cost benefit of each.

Finally, there is trade – usually 43 per cent of our GDP. Global trade is collapsing because of the pandemic and Trumpian protectionism. Yet nothing from the government on forging a global free-trade coalition. Yes, the China relationship is hard. But the government’s failure to prosecute an effective China strategy is now compounding our economic crisis. And, outrageously, the US is moving in on our barley and beef markets. Trade policy is a rolled-gold mess.

So far Morrison’s government, unlike mine, has had unprecedented bipartisan support from the opposition. But public trust is hanging by a thread. It’s time for Morrison to get real with these challenges, not just spin us a line. Ultimately, the economy does not lie.

Kevin Rudd was the 26th prime minister of Australia.

First published in the Australian Financial Review on 21 July 2020.

The post AFR: Seven Questions Morrison Must Answer appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Dropped Pass

A charitable description of Java is that it’s a strict language, at least in terms of how it expects you to interact with types and definitions. That strictness can create conflict when you’re interacting with less strict systems, like JSON data.

Tessie produces data as a JSON API that wraps around sensing devices which report a numerical value. These sensors, as far as we care for this example, come in two flavors: ones that report a maximum recorded value, and ones which don’t. Something like:

  {
    "dataNoMax": [
      {"name": "sensor1", "value": 20, "max": 0}
    ],
    "dataWithMax": [
      {"name": "sensor2", "value": 25, "max": 50}
    ]
  }

By convention, the API would report max: 0 for all the devices which didn’t have a max.

With that in mind, they designed their POJOs like this:

  class Data {
    String name;
    int value;
    int max;
  }

  class Readings {
    List<Data> dataNoMax;
    List<Data> dataWithMax;
  }

These POJOs would be used both on the side producing the data, and in the client libraries for consuming the data.

Of course, by JSON convention, including a field that doesn’t actually hold a meaningful value is a bad idea- max: 0 should either be max: null, or better yet, just excluded from the output entirely.

So one of Tessie’s co-workers hacked some code into the JSON serializer to conditionally include the max field in the output.

QA needed to validate that this change was correct, so they needed to implement some automated tests. And this is where the problems started to crop up. The developer hadn’t changed the implementation of the POJOs, and they were using int.

For all that Java has a reputation as “everything’s an object”, a few things explicitly aren’t: primitive types. int is a primitive integer, while Integer is an object integer. Integers are references. ints are not. An Integer could be null, but an int cannot ever be null.

This meant if QA tried to write a test assertion that looked like this:

assertThat(readings.dataNoMax[0].getMax()).isNull()

it wouldn’t work. max could never be null.

There are a few different ways to solve this. One could make the POJO support nullable types, which is probably a better way to represent an object which may not have a value for certain fields. An int in Java that isn’t initialized to a value will default to zero, so they probably could have left their last unit test unchanged and it still would have passed. But this was a code change, and a code change needs to have a test change to prove the code change was correct.
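A minimal sketch of that first option (class and getter names here are illustrative, not the article's actual code): switching the field from int to the boxed Integer lets "no maximum" be represented as null, so an isNull() assertion has something real to check.

```java
// Sketch (hypothetical names): using the boxed Integer type so a sensor
// without a maximum can report null instead of a magic 0.
class SensorData {
    private final String name;
    private final int value;
    private final Integer max;   // boxed: may legitimately be null

    SensorData(String name, int value, Integer max) {
        this.name = name;
        this.value = value;
        this.max = max;
    }

    Integer getMax() { return max; }
}

public class Demo {
    public static void main(String[] args) {
        SensorData noMax = new SensorData("sensor1", 20, null);
        SensorData withMax = new SensorData("sensor2", 25, 50); // 50 autoboxes

        System.out.println(noMax.getMax() == null);   // prints true
        System.out.println(withMax.getMax() == null); // prints false
    }
}
```

JSON serializers such as Jackson can also be configured to omit null fields entirely, which matches the "exclude the field from the output" convention described above.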

Let’s compare versions. Here was their original test:

/** Should display max */
assertEquals("sensor2", readings.dataWithMax.get(0).getName());
assertEquals(50, readings.dataWithMax.get(0).getMax());
assertEquals(25, readings.dataWithMax.get(0).getValue());

/** Should not display max */
assertEquals("sensor1", readings.dataNoMax.get(0).getName());
assertEquals(0, readings.dataNoMax.get(0).getMax());
assertEquals(20, readings.dataNoMax.get(0).getValue());

And, since the code changed, and they needed to verify that change, this is their new test:

/** Should display max */
assertEquals("sensor2", readings.dataWithMax.get(0).getName());
assertThat(readings.dataWithMax.get(0).getMax()).isNotNull();
assertEquals(25, readings.dataWithMax.get(0).getValue());

/** Should not display max */
assertEquals("sensor1", readings.dataNoMax.get(0).getName());
//assertThat(readings.dataNoMax.get(0).getMax()).isNull();
assertEquals(20, readings.dataNoMax.get(0).getValue());

So, their original test compared strictly against values. When they needed to test whether values were present, they switched to an isNotNull comparison. On the side with a max, this test will always pass; it can’t possibly fail, because an int can’t possibly be null. And when they tried an isNull check on the other value, it always failed, because, again, an int can’t possibly be null.

So they commented it out.

Test is green. Clearly, this code is ready to ship.

Tessie adds:

[This] is starting to explain why our git history is filled with commits that “fix failing test” by removing all the asserts.


Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 10)

Here’s part ten of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


Planet DebianEvgeni Golov: Building and publishing documentation for Ansible Collections

I had a draft of this article for about two months, but never really managed to polish and finalize it, partially due to some nasty hacks needed down the road. Thankfully, one of my wishes was heard and I now had the chance to revisit the post and try a few things out. Sadly, my wish was granted only partially and the result is still not beautiful, but read for yourself ;-)

UPDATE: I've published a follow up post on building documentation for Ansible Collections using antsibull, as my wish was now fully granted.

As part of my day job, I am maintaining the Foreman Ansible Modules - a collection of modules to interact with Foreman and its plugins (most notably Katello). We've been maintaining this collection (as in: set of modules) since 2017, much longer than collections (as in: Ansible Collections) have existed, but the introduction of Ansible Collections allowed us to provide a much easier and supported way to distribute the modules to our users.

Now users usually want two things: features and documentation. Features are easy, we already have plenty of them. But documentation was a bit cumbersome: we had documentation inside the modules, so you could read it via ansible-doc on the command line if you had the collection installed, but we wanted to provide online readable and versioned documentation too - something the users are used to from the official Ansible documentation.

Building HTML from Ansible modules

Ansible modules contain documentation in the form of YAML blocks documenting the parameters, examples and return values of the module. The Ansible documentation site is built using Sphinx from reStructuredText. As the modules don't contain reStructuredText, Ansible had a tool to generate it from the documentation YAML: build-ansible.py document-plugins. The tool and the accompanying libraries are not part of the Ansible distribution; they just live in the hacking directory. To run them, we need a git checkout of Ansible and to source hacking/env-setup, which sets PYTHONPATH and a few other variables correctly for Ansible to run directly from that checkout.

It would be nice if that were a feature of ansible-doc, but as it isn't, we need a full Ansible git checkout to be able to continue.

The tool has recently been split out into its own repository/distribution: antsibull. However, it was also redesigned to be easier to use (good!), and my hack to abuse it to build documentation for out-of-tree modules doesn't work anymore (bad!). There is an issue open for collections support, so I hope to be able to switch to antsibull soon.

Anyways, back to the original hack.

As we're using documentation fragments, we need to tell the tool to look for these, because otherwise we'd get errors about not found fragments. We're passing ANSIBLE_COLLECTIONS_PATHS so that the tool can find the correct, namespaced documentation fragments there. We also need to provide --module-dir pointing at the actual modules we want to build documentation for.

ANSIBLEGIT=/path/to/ansible.git
source ${ANSIBLEGIT}/hacking/env-setup
ANSIBLE_COLLECTIONS_PATHS=../build/collections python3 ${ANSIBLEGIT}/hacking/build-ansible.py document-plugins --module-dir ../plugins/modules --template-dir ./_templates --template-dir ${ANSIBLEGIT}/docs/templates --type rst --output-dir ./modules/

Ideally, when antsibull supports collections, this will become antsibull-docs collection … without any need to have an Ansible checkout, source env-setup or pass tons of paths.

Until then we have a Makefile that clones Ansible, runs the above command and then calls Sphinx (which provides a nice Makefile for building) to generate HTML from the reStructuredText.

You can find our slightly modified templates and themes in our git repository in the docs directory.

Publishing HTML documentation for Ansible Modules

Now that we have a way to build the documentation, let's also automate publishing, because nothing is worse than out-of-date documentation!

We're using GitHub and GitHub Actions for that, but you can achieve the same with GitLab, TravisCI or Jenkins.

First, we need a trigger. As we want always up-to-date documentation for the main branch where all the development happens and also documentation for all stable releases that are tagged (we use vX.Y.Z for the tags), we can do something like this:

on:
  push:
    tags:
      - v[0-9]+.[0-9]+.[0-9]+
    branches:
      - master

Now that we have a trigger, we define the job steps that get executed:

    steps:
      - name: Check out the code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: make doc-setup
      - name: Build docs
        run: make doc

At this point we will have the docs built by make doc in the docs/_build/html directory, but not published anywhere yet.

As we're using GitHub anyways, we can also use GitHub Pages to host the result.

      - uses: actions/checkout@v2
      - name: configure git
        run: |
          git config user.name "${GITHUB_ACTOR}"
          git config user.email "${GITHUB_ACTOR}@bots.github.com"
          git fetch --no-tags --prune --depth=1 origin +refs/heads/*:refs/remotes/origin/*
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: make doc-setup
      - name: Build docs
        run: make doc
      - name: commit docs
        run: |
          git checkout gh-pages
          rm -rf $(basename ${GITHUB_REF})
          mv docs/_build/html $(basename ${GITHUB_REF})
          dirname */index.html | sort --version-sort | xargs -I@@ -n1 echo '<div><a href="@@/"><p>@@</p></a></div>' >> index.html
          git add $(basename ${GITHUB_REF}) index.html
          git commit -m "update docs for $(basename ${GITHUB_REF})" || true
      - name: push docs
        run: git push origin gh-pages

As this is not exactly self explanatory:

  1. Configure git to have a proper author name and email, as otherwise you get ugly history and maybe even failing commits
  2. Fetch all branch names, as the checkout action by default doesn't do this.
  3. Setup Python, Sphinx, Ansible etc.
  4. Build the documentation as described above.
  5. Switch to the gh-pages branch from the commit that triggered the workflow.
  6. Remove any existing documentation for this tag/branch ($GITHUB_REF contains the name which triggered the workflow) if it exists already.
  7. Move the previously built documentation from the Sphinx output directory to a directory named after the current target.
  8. Generate a simple index of all available documentation versions.
  9. Commit all changes, but don't fail if there is nothing to commit.
  10. Push to the gh-pages branch which will trigger a GitHub Pages deployment.

Pretty sure this won't win any beauty contest for scripting and automation, but it gets the job done and nobody on the team has to remember to update the documentation anymore.

You can see the results on theforeman.org or directly on GitHub.

Planet DebianSteinar H. Gunderson: Reverse-engineering the FIRST marathon program

Last year, I ran my first marathon ever (3:07:52, on the fairly hilly Oslo course), using the FIRST marathon program (which, despite the name, is not necessarily meant for beginners). This year, as the Covid-19 lockdowns started, I decided to go for it again using the same program, but there was one annoyance: I wanted to change target times as it became obvious my initial target had become too easy, but there's no way to calculate the paces electronically.

FIRST comes in the form of a book; you can find an older version of the 10K and marathon programs if you search a bit online, but fundamentally, the way it works is that you declare a 5K personal best (whether true or not), look up a bunch of tempos in a table in the book from that, and then use that to get three runs every week. (You also do cross-training and strength training, or at least that's the idea.) For instance, the book might say that this week's track intervals are 6x 800m, so you go look up your 800m interval times in the table. If you have a 5K PB of 19:30, the book might say that 800m interval times are 2:52 (3:35/km), so off you go running.

The tables are never explained, and they don't match up with the formulas that were published in the earlier versions. There's at least one running calculator that can derive FIRST paces, but it defaults to miles and has a different calculation for marathon pace (which sometimes creates absurd situations like “long tempo” being slower than “marathon pace”), so I really wanted to just get the formulas to input into my own spreadsheets.

Enter regression. I just typed in a bunch of the tables, graphed them, saw that everything was on a dead straight line (R=1.00 for linear regression) and got the constants from there. So without further ado:

If you can run 5K at x seconds per kilometer, the Holy Gospel of FIRST declares that you can run 42.195K at 1.15313x seconds. (I am sure there are more sophisticated models, but perhaps this is good enough?) Incidentally or not, this means an 18:30 5K becomes nearly exactly three hours on a marathon (only two seconds away). (I didn't bother with the 10K and half-marathon estimation paces; there are so many numbers to input).

The tempo run paces are even simpler. Take your 5K pace, and add 10 sec/km, and that's short tempo (ST). Medium tempo (MT) is 5K + 20 sec/km. Long tempo (LT) is 5K + 29 sec/km.

That leaves only the track repeats. For this, first take the 5K pace and multiply by 1.00579, leaving what I will call the “reference pace” (RP). I don't know if this constant carries any particular meaning, and obviously, it's nowhere in the book; it's just the slope of the regression. 400m time is 400m at RP, minus 10 seconds. (That is 10 seconds absolute time, not 10 seconds/km. So if you have an 18:30 5K PB, you'll have an 18:36 5K at RP, which is 1:29 400m at RP, which then gives a 1:19 400m.)

Similarly: 600m is -13 seconds, 800m is -16 seconds, 1000m is -18 seconds, 1200m is also -18 seconds, 1600m is -16 seconds, and 2000m (which is specified, but seemingly never used in any of the programs) is -15 seconds. You can see two effects going against each other here; longer intervals mean more seconds to shave off for a given pace, but they also give lower pace, and thus the U-like shape.
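The formulas above are easy to drop into a spreadsheet or a small script. Here's a sketch in Python (the constants are the ones derived in this post; the function names are my own):

```python
# Reverse-engineered FIRST pace formulas, as derived in this post.
# All paces are in seconds per kilometre.

MARATHON_FACTOR = 1.15313   # marathon pace = 5K pace * 1.15313
REFERENCE_SLOPE = 1.00579   # track "reference pace" = 5K pace * 1.00579

# Absolute seconds to subtract from an interval run at reference pace.
TRACK_OFFSETS = {400: 10, 600: 13, 800: 16, 1000: 18,
                 1200: 18, 1600: 16, 2000: 15}

def marathon_pace(pace_5k: float) -> float:
    """Marathon pace (sec/km) from 5K pace (sec/km)."""
    return pace_5k * MARATHON_FACTOR

def tempo_paces(pace_5k: float) -> dict:
    """Short, medium and long tempo paces (sec/km)."""
    return {"ST": pace_5k + 10, "MT": pace_5k + 20, "LT": pace_5k + 29}

def track_time(pace_5k: float, metres: int) -> float:
    """Target time (seconds) for a single track interval."""
    rp = pace_5k * REFERENCE_SLOPE            # reference pace, sec/km
    return rp * metres / 1000 - TRACK_OFFSETS[metres]

# Example from the post: an 18:30 5K is 222 sec/km ...
pace = (18 * 60 + 30) / 5
# ... which gives a marathon two seconds over three hours:
print(round(marathon_pace(pace) * 42.195))  # → 10802 seconds, i.e. 3:00:02
# ... and a 400m interval target of about 1:19:
print(round(track_time(pace, 400)))         # → 79 seconds
```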

And that's all there is to it. Happy running, and may there be a good race close to you!

Planet DebianDominique Dumont: Security gotcha with log collection on Azure Kubernetes cluster.

Azure Kubernetes Service provides a nice way to set up Kubernetes
cluster in the cloud. It’s quite practical as AKS is setup by default
with a rich monitoring and reporting environment. By default, all
container logs are collected, and CPU and disk data are gathered.

I used AKS to set up a cluster for my first client as a freelancer. Everything was nice until my client asked me why log collection was as expensive as the compute resources. 💸

Ouch… 🤦

My first reflex was to reduce the amount of logs produced by all our containers, i.e. start logging at warn level instead of info level. This reduced the amount of logs quite a lot.

But this did not reduce the cost of collecting logs, which looks to be a common issue.

Thanks to the documentation provided by Microsoft, I was able to find that the ContainerInventory data table was responsible for more than 60% of our logging costs.
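As a sketch of how to find this kind of culprit yourself: Log Analytics records per-table ingestion in its Usage table, so a query along these lines (adapted from Microsoft's cost-analysis examples; verify field semantics against the current docs) ranks tables by billable volume:

```kusto
// Rank Log Analytics tables by ingested volume over the last 30 days.
// Quantity is reported in MB; IsBillable filters out free data types.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize TotalVolumeGB = sum(Quantity) / 1024 by DataType
| sort by TotalVolumeGB desc
```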

What is ContainerInventory? It’s a facility to monitor the content of all environment variables from all containers.

Wait… What? ⚠

Should we be worried about our database credentials, which are, as legacy dictates, stored in environment variables?

Unfortunately, the query shown below confirmed that, yes, we should: the logs aggregated by Azure contain the database credentials of my client.

ContainerInventory
| where TimeGenerated > ago(1h)

Having credentials collected in logs is lackluster from a security
point of view. 🙄

And we don’t need it because our environment variables do not change.

Well, it’s now time to fix these issues. 🛠

We’re going to:

  1. disable the collection of environment variables in Azure, which
    will reduce cost and plug the potential credential leak
  2. renew all DB credentials, because the previous credentials can be
    considered as compromised (The renewal of our DB passwords is quite
    easy with the script I provided to my client)
  3. pass credentials with files instead of environment variables.
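For step 1, the Azure Monitor for containers agent reads its data-collection settings from a ConfigMap in kube-system. A minimal sketch of disabling environment-variable collection looks like this (the ConfigMap name and setting keys follow Microsoft's documented schema, but verify them against the docs for your agent version):

```yaml
# Sketch, based on the documented Azure Monitor for containers agent schema;
# verify the key names against Microsoft's docs for your agent version.
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  log-data-collection-settings: |-
    # Stop collecting environment variables from all containers.
    [log_collection_settings]
      [log_collection_settings.env_var]
        enabled = false
```

Applying this restarts the agent pods and should stop new ContainerInventory records from including environment variables; it does not remove data that was already ingested, hence step 2.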

In summary, the service provided by Azure is still nice, but beware of
the default configuration which may contain surprises.

I’m a freelancer, available for hire. The https://code-straight.fr site describes how I can help your projects.

All the best


CryptogramOn the Twitter Hack

Twitter was hacked this week. Not a few people's Twitter accounts, but all of Twitter. Someone compromised the entire Twitter network, probably by stealing the log-in credentials of one of Twitter's system administrators. Those are the people trusted to ensure that Twitter functions smoothly.

The hacker used that access to send tweets from a variety of popular and trusted accounts, including those of Joe Biden, Bill Gates, and Elon Musk, as part of a mundane scam -- stealing bitcoin -- but it's easy to envision more nefarious scenarios. Imagine a government using this sort of attack against another government, coordinating a series of fake tweets from hundreds of politicians and other public figures the day before a major election, to affect the outcome. Or to escalate an international dispute. Done well, it would be devastating.

Whether the hackers had access to Twitter direct messages is not known. These DMs are not end-to-end encrypted, meaning that they are unencrypted inside Twitter's network and could have been available to the hackers. Those messages -- between world leaders, industry CEOs, reporters and their sources, health organizations -- are much more valuable than bitcoin. (If I were a national-intelligence agency, I might even use a bitcoin scam to mask my real intelligence-gathering purpose.) Back in 2018, Twitter said it was exploring encrypting those messages, but it hasn't yet.

Internet communications platforms -- such as Facebook, Twitter, and YouTube -- are crucial in today's society. They're how we communicate with one another. They're how our elected leaders communicate with us. They are essential infrastructure. Yet they are run by for-profit companies with little government oversight. This is simply no longer sustainable. Twitter and companies like it are essential to our national dialogue, to our economy, and to our democracy. We need to start treating them that way, and that means both requiring them to do a better job on security and breaking them up.

In the Twitter case this week, the hacker's tactics weren't particularly sophisticated. We will almost certainly learn about security lapses at Twitter that enabled the hack, possibly including a SIM-swapping attack that targeted an employee's cellular service provider, or maybe even a bribed insider. The FBI is investigating.

This kind of attack is known as a "class break." Class breaks are endemic to computerized systems, and they're not something that we as users can defend against with better personal security. It didn't matter whether individual accounts had a complicated and hard-to-remember password, or two-factor authentication. It didn't matter whether the accounts were normally accessed via a Mac or a PC. There was literally nothing any user could do to protect against it.

Class breaks are security vulnerabilities that break not just one system, but an entire class of systems. They might exploit a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs on that system's software. Or a vulnerability in internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet. Or a single vulnerability in the Twitter network that allows an attacker to take over every account.

For Twitter users, this attack was a double whammy. Many people rely on Twitter's authentication systems to know that someone who purports to be a certain celebrity, politician, or journalist is really that person. When those accounts were hijacked, trust in that system took a beating. And then, after the attack was discovered and Twitter temporarily shut down all verified accounts, the public lost a vital source of information.

There are many security technologies companies like Twitter can implement to better protect themselves and their users; that's not the issue. The problem is economic, and fixing it requires doing two things. One is regulating these companies, and requiring them to spend more money on security. The second is reducing their monopoly power.

The security regulations for banks are complex and detailed. If a low-level banking employee were caught messing around with people's accounts, or if she mistakenly gave her log-in credentials to someone else, the bank would be severely fined. Depending on the details of the incident, senior banking executives could be held personally liable. The threat of these actions helps keep our money safe. Yes, it costs banks money; sometimes it severely cuts into their profits. But the banks have no choice.

The opposite is true for these tech giants. They get to decide what level of security you have on your accounts, and you have no say in the matter. If you are offered security and privacy options, it's because they decided you can have them. There is no regulation. There is no accountability. There isn't even any transparency. Do you know how secure your data is on Facebook, or in Apple's iCloud, or anywhere? You don't. No one except those companies do. Yet they're crucial to the country's national security. And they're the rare consumer product or service allowed to operate without significant government oversight.

For example, President Donald Trump's Twitter account wasn't hacked as Joe Biden's was, because that account has "special protections," the details of which we don't know. We also don't know what other world leaders have those protections, or the decision process surrounding who gets them. Are they manual? Can they scale? Can all verified accounts have them? Your guess is as good as mine.

In addition to security measures, the other solution is to break up the tech monopolies. Companies like Facebook and Twitter have so much power because they are so large, and they face no real competition. This is a national-security risk as well as a personal-security risk. Were there 100 different Twitter-like companies, and enough compatibility so that all their feeds could merge into one interface, this attack wouldn't have been such a big deal. More important, the risk of a similar but more politically targeted attack wouldn't be so great. If there were competition, different platforms would offer different security options, as well as different posting rules, different authentication guidelines -- different everything. Competition is how our economy works; it's how we spur innovation. Monopolies have more power to do what they want in the quest for profits, even if it harms people along the way.

This wasn't Twitter's first security problem involving trusted insiders. In 2017, on his last day of work, an employee shut down President Donald Trump's account. In 2019, two people were charged with spying for the Saudi government while they were Twitter employees.

Maybe this hack will serve as a wake-up call. But if past incidents involving Twitter and other companies are any indication, it won't. Underspending on security, and letting society pay the eventual price, is far more profitable. I don't blame the tech companies. Their corporate mandate is to make as much money as is legally possible. Fixing this requires changes in the law, not changes in the hearts of the company's leaders.

This essay previously appeared on TheAtlantic.com.

LongNowSix Ways to Think Long-term: A Cognitive Toolkit for Good Ancestors

Illustration: Tom Lee at Rocket Visual

Human beings have an astonishing evolutionary gift: agile imaginations that can shift in an instant from thinking on a scale of seconds to a scale of years or even centuries. Our minds constantly dance across multiple time horizons. One moment we can be making a quickfire response to a text and the next thinking about saving for our pensions or planting an acorn in the ground for posterity. We are experts at the temporal pirouette. Whether we are fully making use of this gift is, however, another matter.

The need to draw on our capacity to think long-term has never been more urgent, whether in areas such as public health care (like planning for the next pandemic on the horizon), to deal with technological risks (such as from AI-controlled lethal autonomous weapons), or to confront the threats of an ecological crisis where nations sit around international conference tables, bickering about their near-term interests, while the planet burns and species disappear. At the same time, businesses can barely see past the next quarterly report, we are addicted to 24/7 instant news, and find it hard to resist the Buy Now button.

What can we do to overcome the tyranny of the now? The easy answer is to say we need more long-term thinking. But here’s the problem: almost nobody really knows what it is.

In researching my latest book, The Good Ancestor: How to Think Long Term in a Short-Term World, I spoke to dozens of experts — psychologists, futurists, economists, public officials, investors — who were all convinced of the need for more long-term thinking to overcome the pathological short-termism of the modern world, but few of them could give me a clear sense of what it means, how it works, what time horizons are involved and what steps we must take to make it the norm. This intellectual vacuum amounts to nothing less than a conceptual emergency.

Let’s start with the question, ‘how long is long-term?’ Forget the corporate vision of ‘long-term’, which rarely extends beyond a decade. Instead, consider a hundred years as a minimum threshold for long-term thinking. This is the current length of a long human lifespan, taking us beyond the ego boundary of our own mortality, so we begin to imagine futures that we can influence but not participate in ourselves. Where possible we should attempt to think longer, for instance taking inspiration from cultural endeavours like the 10,000 Year Clock (the Long Now Foundation’s flagship project), which is being designed to stay accurate for ten millennia. At the very least, when you aim to think ‘long-term’, take a deep breath and think ‘a hundred years and more’.

The Tug of War for Time

It is just as crucial to equip ourselves with a mental framework that identifies different forms of long-term thinking. My own approach is represented in a graphic I call ‘The Tug of War for Time’ (see below). On one side, six drivers of short-termism threaten to drag us over the edge of civilizational breakdown. On the other, six ways to think long-term are drawing us towards a culture of longer time horizons and responsibility for the future of humankind.

[Graphic: The Tug of War for Time]

These six ways to think long are not a simplistic blueprint for a new economic or political system, but rather comprise a cognitive toolkit for challenging our obsession with the here and now. They offer conceptual scaffolding for answering what I consider to be the most important question of our time: How can we be good ancestors?

The tug of war for time is the defining struggle of our generation. It is going on both inside our own minds and in our societies. Its outcome will affect the fate of the billions upon billions of people who will inhabit the future. In other words, it matters. So let’s unpack it a little.

Drivers of Short-termism

Amongst the six drivers of short-termism, we all know about the power of digital distraction to immerse us in a here-and-now addiction of clicks, swipes and scrolls. A deeper driver has been the growing tyranny of the clock since the Middle Ages. The mechanical clock was the key machine of the Industrial Revolution, regimenting and speeding up time itself, bringing the future ever-nearer: by 01700 most clocks had minute hands and by 01800 second hands were standard. And it still dominates our daily lives, strapped to our wrists and etched onto our screens.

Speculative capitalism has been a source of boom-bust turbulence at least since the Dutch Tulip Bubble of 01637, through to the 02008 financial crash and the next one waiting around the corner. Electoral cycles also play their part, generating a myopic political presentism where politicians can barely see beyond the next poll or the latest tweet. Such short-termism is amplified by a world of networked uncertainty, where events and risks are increasingly interdependent and globalised, raising the prospect of rapid contagion effects and rendering even the near-term future almost unreadable.

Looming behind it all is our obsession with perpetual progress, especially the pursuit of endless GDP growth, which pushes the Earth system over critical thresholds of carbon emissions, biodiversity loss and other planetary boundaries. We are like a kid who believes they can keep blowing up the balloon, bigger and bigger, without any prospect that it could ever burst.

Put these six drivers together and you get a toxic cocktail of short-termism that could send us into a blind-drunk civilizational freefall. As Jared Diamond argues, ‘short-term decision making’ coupled with an absence of ‘courageous long-term thinking’ has been at the root of civilizational collapse for centuries. A stark warning, and one that prompts us to unpack the six ways to think long.

Six Ways to Think Long-term

1. Deep-Time Humility: grasp we are an eyeblink in cosmic time

Deep-time humility is about recognising that the two hundred thousand years that humankind has graced the earth is a mere eyeblink in the cosmic story. As John McPhee (who coined the concept of deep time in 01980) put it: ‘Consider the earth’s history as the old measure of the English yard, the distance from the king’s nose to the tip of his outstretched hand. One stroke of a nail file on his middle finger erases human history.’

But just as there is deep time behind us, there is also deep time ahead. In six billion years, any creatures that are around to see our sun die will be as different from us as we are from the first single-celled bacteria.

Yet why exactly do long-term thinkers need this sense of temporal humility? Deep time prompts us to consider the consequences of our actions far beyond our own lifetimes, and puts us back in touch with the long-term cycles of the living world like the carbon cycle. But it also helps us grasp our destructive potential: in an incredibly short period of time — only a couple of centuries — we have endangered a world that took billions of years to evolve. We are just a tiny link in the great chain of living organisms, so who are we to put it all in jeopardy with our ecological blindness and deadly technologies? Don’t we have an obligation to our planetary future and the generations of humans and other species to come?

2. Legacy Mindset: be remembered well by posterity

We are the inheritors of extraordinary legacies from the past — from those who planted the first seeds, built the cities where we now live, and made the medical discoveries we benefit from. But alongside the good ancestors are the ‘bad ancestors’, such as those who bequeathed us colonial and slavery-era racism and prejudice that deeply permeate today’s criminal justice systems. This raises the question of what legacies we will leave to future generations: how do we want to be remembered by posterity?

The challenge is to leave a legacy that goes beyond egoistic legacy (like a Russian oligarch who wants a wing of an art gallery named after them) or even familial legacy (like wishing to pass on property or cultural traditions to our children). If we hope to be good ancestors, we need to develop a transcendent ‘legacy mindset’, where we aim to be remembered well by the generations we will never know, by the universal strangers of the future.

We might look for inspiration in many places. The Māori concept of whakapapa (‘genealogy’) describes a continuous lifeline that connects an individual to the past, present and future, and generates a sense of respect for the traditions of previous generations while being mindful of those yet to come. In Katie Paterson’s art project Future Library, every year for a hundred years a famous writer (the first was Margaret Atwood) is depositing a new work, which will remain unread until 02114, when they will all be printed on paper made from a thousand trees that have been planted in a forest outside Oslo. Then there are activists like Wangari Maathai, the first African woman to win the Nobel Peace Prize. In 01977 she founded the Green Belt Movement in Kenya, which by the time of her death in 02011 had trained more than 25,000 women in forestry skills and planted 40 million trees. That’s how to pass on a legacy gift to the future.

3. Intergenerational Justice: consider the seventh generation ahead

‘Why should I care about future generations? What have they ever done for me?’ This clever quip attributed to Groucho Marx highlights the issue of intergenerational justice. This is not the legacy question of how we will be remembered, but the moral question of what responsibilities we have to the ‘futureholders’ — the generations who will succeed us.

One approach, rooted in utilitarian philosophy, is to recognise that at least in terms of sheer numbers, the current population is easily outweighed by all those who will come after us. In a calculation made by writer Richard Fisher, around 100 billion people have lived and died in the past 50,000 years. But they, together with the 7.7 billion people currently alive, are far outweighed by the estimated 6.75 trillion people who will be born over the next 50,000 years, if this century’s birth rate is maintained (see graphic below). Even in just the next millennium, more than 135 billion people are likely to be born. How could we possibly ignore their wellbeing, and think that our own is of such greater value?
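Fisher’s totals are easy to reproduce: both figures are exact multiples of a constant rate of 135 million births per year, which is evidently the approximation of this century’s birth rate behind his calculation (the rate itself is inferred from his numbers, not stated in the article):

```python
# Back-of-the-envelope check of the birth projections quoted above.
# 135 million births per year is the constant rate implied by both of
# Fisher's totals.
births_per_year = 135_000_000

next_millennium = births_per_year * 1_000
next_50k_years = births_per_year * 50_000

print(f"{next_millennium:,}")  # 135,000,000,000 (135 billion)
print(f"{next_50k_years:,}")   # 6,750,000,000,000 (6.75 trillion)
```

On those numbers, the people yet to be born outnumber everyone who has ever lived by a factor of more than sixty.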


Such thinking is embodied in the idea of ‘seventh-generation decision making’, an ethic of ecological stewardship practised amongst some Native American peoples such as the Oglala Lakota Nation in South Dakota: community decisions take into account their impact seven generations from the present. This ideal is fast becoming a cornerstone of the growing global intergenerational justice movement, inspiring groups such as Our Children’s Trust (fighting for the legal rights of future generations in the US) and Future Design in Japan (which promotes citizens’ assemblies for city planning, where residents imagine themselves as members of future generations).

4. Cathedral thinking: plan projects beyond a human lifetime

Cathedral thinking is the practice of envisaging and embarking on projects with time horizons stretching decades and even centuries into the future, just like the medieval cathedral builders who began work knowing they were unlikely to see construction finished within their own lifetimes. Greta Thunberg has said that it will take ‘cathedral thinking’ to tackle the climate crisis.

Historically, cathedral thinking has taken different forms. Apart from religious buildings, there are public works projects such as the sewers built in Victorian London after the ‘Great Stink’ of 01858, which are still in use today (we might call this ‘sewer thinking’ rather than ‘cathedral thinking’). Scientific endeavours include the Svalbard Global Seed Vault in the remote Arctic, which contains over one million seeds from more than 6,000 species and intends to keep them safe in an indestructible rock bunker for at least a thousand years. We should also include social and political movements with long time horizons, such as the Suffragettes, who formed their first organisation in Manchester in 01867 and didn’t achieve their aim of votes for women for over half a century.

Inspiring stuff. But remember that cathedral thinking can be directed towards narrow and self-serving ends. Hitler hoped to create a Thousand Year Reich. Dictators have sought to preserve their power and privilege for their progeny through the generations: just look at North Korea. In the corporate world, Gus Levy, former head of investment bank Goldman Sachs, once proudly declared, ‘We’re greedy, but long-term greedy, not short-term greedy’.

That’s why cathedral thinking alone is not enough to create a long-term civilization that respects the interests of future generations. It needs to be guided by other approaches, such as intergenerational justice and a transcendent goal (see below).

5. Holistic Forecasting: envision multiple pathways for civilization

Numerous studies demonstrate that forecasting professionals have a poor record at predicting future events. Yet we must still try to map out the possible long-term trajectories of human civilization itself — what I call holistic forecasting — otherwise we will end up only dealing with crises as they hit us in the present. Experts in the fields of global risk studies and scenario planning have identified three broad pathways, which I call Breakdown, Reform and Transformation (see graphic below).


Breakdown is the path of business-as-usual. We continue striving for the old twentieth-century goal of material economic progress but soon reach a point of societal and institutional collapse in the near term as we fail to respond to rampant ecological and technological crises, and cross dangerous civilizational tipping points (think Cormac McCarthy’s The Road).

A more likely trajectory is Reform, where we respond to global crises such as climate change but in an inadequate and piecemeal way that merely extends the Breakdown curve outwards, to a greater or lesser extent. Here governments put their faith in reformist ideals such as ‘green growth’, ‘reinventing capitalism’, or a belief that technological solutions are just around the corner.

A third trajectory is Transformation, where we see a radical shift in the values and institutions of society towards a more long-term sustainable civilization. For instance, we jump off the Breakdown curve onto a new pathway dominated by post-growth economic models such as Doughnut Economics or a Green New Deal.

Note the crucial line of Disruptions. These are disruptive innovations or events that offer an opportunity to switch from one curve onto another. It could be a new technology like blockchain, the rise of a political movement like Black Lives Matter, or a global pandemic like COVID-19. Successful long-term thinking requires turning these disruptions towards Transformative change and ensuring they are not captured by the old system.

6. Transcendent Goal: strive for one-planet thriving

Every society, wrote astronomer Carl Sagan, needs a ‘telos’ to guide it — ‘a long-term goal and a sacred project’. What are the options? While the goal of material progress served us well in the past, we now know too much about its collateral damage: fossil fuels and material waste have pushed us into the Anthropocene, the perilous new era characterised by a steep upward trend in damaging planetary indicators called the Great Acceleration (see graphic).


An alternative transcendent goal is to see our destiny in the stars: the only way to guarantee the survival of our species is to escape the confines of Earth and colonise other worlds. Yet terraforming somewhere like Mars to make it habitable could take centuries — if it could be done at all. Additionally, the more we set our sights on escaping to other worlds, the less likely we are to look after our existing one. As cosmologist Martin Rees points out, ‘It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve these problems here.’

That’s why our primary goal should be to learn to live within the biocapacity of the only planet we know that sustains life. This is the fundamental principle of the field of ecological economics developed by visionary thinkers such as Herman Daly: don’t use more resources than the earth can naturally regenerate (for instance, only harvest timber as fast as it can grow back), and don’t create more wastes than it can naturally absorb (so avoid burning fossil fuels that can’t be absorbed by the oceans and other carbon sinks).

Once we’ve learned to do this, we can do as much terraforming of Mars as we like: as any mountaineer knows, make sure your basecamp is in order with ample supplies before you tackle a risky summit. But according to the Global Footprint Network, we are not even close and currently use up the equivalent of around 1.6 planet Earths each year. That’s short-termism of the most deadly kind. A transcendent goal of one-planet thriving is our best guarantee of a long-term future. And we do it by caring about place as much as rethinking time.

Bring on the Time Rebellion

So there we have it: a brief overview of a cognitive toolkit we could draw on to survive and thrive into the centuries and millennia to come. None of these six ways is enough alone to create a long-term revolution of the human mind — a fundamental shift in our perception of time. But together — and when practised by a critical mass of people and organisations — a new age of long-term thinking could emerge out of their synergy.

Is this a likely prospect? Can we win the tug of war against short-termism?

‘Only a crisis — actual or perceived — produces real change,’ wrote economist Milton Friedman. Out of the ashes of World War Two came pioneering long-term institutions such as the World Health Organisation, the European Union and welfare states. So too out of the global crisis of COVID-19 could emerge the long-term institutions we need to tackle the challenges of our own time: climate change, technology threats, the racism and inequality structured into our political and economic systems. Now is the moment for expanding our time horizons into a longer now. Now is the moment to become a time rebel.


Roman Krznaric is a public philosopher, research fellow of the Long Now Foundation, and founder of the world’s first Empathy Museum. His latest book is The Good Ancestor: How to Think Long Term in a Short-Term World. He lives in Oxford, UK. @romankrznaric

Note: All graphics from The Good Ancestor: How to Think Long Term in a Short-Term World by Roman Krznaric. Graphic design by Nigel Hawtin. Licensed under CC BY-NC-ND.

Worse Than FailureMega-Agile

A long time ago, way back in 2009, Bruce W worked for the Mega-Bureaucracy. It was a slog of endless forms, endless meetings, endless projects that just never hit a final ship date. The Mega-Bureaucracy felt that the organization which manages best manages the most, and ensured that there were six tons of management overhead attached to the smallest project.

After eight years in that position, Bruce finally left for another division in the same company.

But during those eight years, Bruce learned a few things about dealing with the Mega-Bureaucracy. His division was a small division, and while Bruce needed to interface with the Mega-Bureaucracy, he could shield the other developers on his team from it, as much as possible. This let them get embedded into the business unit, working closely with the end users, revising requirements on the fly based on rapid feedback and a quick release cycle. It was, in a word, "Agile", in the most realistic version of the term: focus on delivering value to your users, and build processes which support that. They were a small team, and there were many layers of management above them, which served to blunt and filter some of the mandates of the Mega-Bureaucracy, and that let them stay Agile.

Nothing, however, protects against management excess better than a track record of success. They had a reputation for being dangerous heretics: they released to test continuously and to production once a month; they changed requirements as needs changed, meaning what they delivered was almost never what they specced, but it was what their users needed; and worst of all, their software defeated all the key Mega-Bureaucracy metrics. It performed better, it had fewer reported defects, and its return-on-investment numbers showed the software saved the division millions of dollars in operating costs.

The Mega-Bureaucracy seethed at these heretics, but the C-level of the company just saw a high functioning team. There was nothing that the Bureaucracy could do to bring them in line-

-at least until someone opened up a trade magazine, skimmed the buzzwords, and said, "Maybe our processes are too cumbersome. We should do Agile. Company wide, let's lay out an Agile Process."

There's a huge difference between the "agile" created by a self-organizing team, that grows based on learning what works best for the team and their users, and the kind of "agile" that's imposed from the corporate overlords.

First, you couldn't do Agile without adopting the Agile Process, which in Mega-Bureaucracy-speak meant "we're doing a very specific flavor of scrum". This meant morning standups were mandated. You needed a scrum-master on the team, which would be a resource drawn from the project management office, and well, they'd also pull double duty as the project manager. The word "requirements" was forbidden, you had to write User Stories, and then estimate those User Stories as taking a certain number of hours. Then you could hold your Sprint Planning meeting, where you gathered a bucket of stories that would fit within your next sprint, which would be a 4-week cadence, but that was just the sprint planning cadence. Releases to production would happen only quarterly. Once user stories were written, they were never to be changed, just potentially replaced with a new story, but once a story was added to a sprint, you were expected to implement it, as written. No changes based on user feedback. At the end of the sprint, you'd have a whopping big sprint retrospective, and since this was a new process, instead of letting the team self-evaluate in private and make adjustments, management from all levels of the company would sit in on the retrospectives to be "informed" about the "challenges" in adopting the new process.

The resulting changes pleased nearly no one. The developers hated it, the users, especially in Bruce's division, hated it, management hated it. But the Mega-Bureaucracy had won; the dangerous heretics who didn't follow the process now were following the process. They were Agile.

That is what motivated Bruce to transfer to a new position.

Two years later, he attended an all-IT webcast. The CIO announced that they'd spun up a new pilot development team. This new team would get embedded into the business unit, work closely with the end user, revise requirements on the fly based on rapid feedback and a continuous release cycle. "This is something brand new for our company, and we're excited to see where it goes!"


Planet DebianShirish Agarwal: Hearing loss, pandemic, lockdown

Sorry for not being on the blog for some time; the last few months have been brutal. While I am externally OK, during the lockdown I sensed major hearing loss. At first I thought it might be a hallucination or something, but as it persisted for days I got myself checked and found out that I have 80% hearing loss in my right ear. How and why, I don’t know. Whether this is NIHL or some other kind of hearing loss is yet to be ascertained. I live on what is, or used to be, one of the busiest roads in the city, though for the last few months not so much. On top of it, you have various other noises.

Tinnitus

I also experienced tinnitus, which again I perceived to be a hallucination but found it is not. I have no clue whether my epilepsy has anything to do with the hearing loss or the two are unrelated. I did discover that while today we know that something like tinnitus exists, just 10-15 years back people might have mistaken it for madness. In a way it is madness, because you are constantly hearing sound, music etc. 24×7; that is enough to drive anybody mad.

During this brief period, I learned what an otoscope is. I got audiometry tests done but need at least a second, and if possible a third, opinion; those will have to wait, as the audio clinics are about 8-10 km away, and the open-close-open-close environment makes it impossible to fix a time and date and get it done. After that is done, I will probably get a hearing device, perhaps a Siemens Signia hearing aid. Hearing aids are damn expensive, almost 50k per piece, and they probably have a lifetime of about 5-6 years, so it’s a bit of an expensive proposition. I also need a second and/or third opinion on the audiometry profile so I know things are correct. All of these things are going to take time.

Pandemic Situation in India and Testing

Coincidentally, I was talking to a couple of people about this. It is sad to see that we have the third-highest number of COVID-19 cases while doing a tenth of the testing the U.S.A. does. According to the statistics site ourworldindata, we seem to be testing 0.22 people per thousand, compared to 2.28 per thousand in the United States. Sadly it doesn’t give the breakup of the tests. From what I read, the PCR tests are better than the antibody tests; a primer shares the difference between the two. IIRC, the antibody tests are far cheaper than the swab tests, but swab tests are far more accurate as they look for the virus’s genetic material (RNA). Anyway, coming to the numbers: the U.S. has a population of roughly 35 crore, taking a little liberty with the numbers given at popclock, while India has 135 crore, almost four times the population of the U.S., and the amount of testing done is a tenth, as shared above. It just goes to show where the GOI’s priorities lie. We are running out of beds, ventilators and whatever else there is. Whatever resources exist are being used for COVID-19 patients, and they are being charged a bomb. I have a couple of hospitals near my place, and the cost of a bed in an isolation ward is upward of INR 100k; if you need a ventilator, add another 50k. And, in a moment of rarity, the difference between private and public charges is zero, meaning there seems to be immense profiteering happening in the medical world. Heck, even the Govt. is in on the act, charging 18% GST on sanitizers. If this is not looting then I dunno what is.
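The “1/10th” figure can be sanity-checked directly from the two per-thousand testing rates quoted above (the rates are the only inputs; nothing else is assumed):

```python
# Ratio of India's testing rate to the US rate, using the per-thousand
# figures from ourworldindata quoted above.
india_rate = 0.22  # tests per thousand people, India
us_rate = 2.28     # tests per thousand people, United States

print(round(india_rate / us_rate, 2))  # 0.1, i.e. roughly one tenth
```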

Example of Medical Bills people have to pay.

China, Nepal & Diplomacy

While everybody today knows how China has intruded into and captured quite a part of Ladakh, this wasn’t common knowledge when it started in April. At that time Ajai Shukla had shared it with the top defence personnel, but nothing came of it. Then on May 30th he broke the news to the rest of the world and was immediately branded anti-national, a person on the Chinese payroll, and what not. This despite the fact that he and Pravin Sawhney of Force Magazine had both been warning of the same since last year. Pravin has a YouTube channel and had been warning India against Chinese intentions since 2015 and even before that. He had warned repeatedly that our obsession with the Pakistan border meant we were taking our eyes off the border with China, which spans almost 2,300-odd km, going all the way to Arunachal Pradesh. A good map showing the conflict can be found at dw.com, which I am sharing/reproducing below –

India-China Border Areas – Copyright DW.com 2020

Note:- I am sharing a neutral party’s rendering of the border disputes, i.e. somebody who doesn’t have as much at stake as the two countries do, so that things can be looked at a little objectively.

The Prime Minister, on the other hand, made the comment which turned a made-up word into a verb: it means to go without coming in. Several news sites shared the Prime Minister’s statement, and the majority of people were shocked. In fact, there were reports that he gave the current CDS, General Rawat, a person of his own choosing, a piece of his mind. But what led to this confrontation in the first place? I think many pieces are part of that puzzle; one of them is surely the cutting of the defence budget for the last six years. Even this year, Ajai Shukla looked at the budget slashes done in the earlier part of the year, when he shared how HAL had to raise loans from the market to pay the salaries of its own people. Later he shared how the Govt. was planning to slash the defence budget further. Interestingly, he also shared some of the reasons which reaffirm that it is only the Govt. which can solve some of these issues/conundrums –

“First, it must recognize that our firms competing for global orders are up against rivals that are being supported by their home governments with tax and export incentives and infrastructure that almost invariably surpasses India’s. Our government must provide its aerospace firms with a level playing field, if not a competitive advantage. The greatest deterrent to growth our companies face is the high cost of capital and lack of access to funds. In several cases, Indian MSMEs have had to turn down offers to build components and assemblies for global OEM supply chains simply because the cost of capital to create the shop floor and train the personnel was too high. This resulted in a loss of business and a missed opportunity for creating jobs and skills. To overcome this, the government could create a sector specific “A&D Fund” to provide low cost capital quickly to enable our MSMEs to grab fleeting business opportunities. ” – Ajai Shukla, blogpost – 13th March 2020 .

And then, reporting on 11th May 2020, CDS Gen. Rawat himself commented on saving the budget; his comments were in poor taste, but still he shared what he thought about it. So that, at the end of it, is one part of the story. The other part probably lies in India’s relations with its neighbours and its lack of numbers in diplomats and diplomacy. So let me cover both things one by one.

Diplomats, lack of numbers, and hence the hand we are dealt

When Mr. Modi started his first term, he used the slogan ‘Minimum Government, Maximum Governance’ but sadly cut numbers in those places which indeed need more people, one of which is diplomacy. A slightly dated 2012 article/opinion piece writes that India needs to engage with the rest of the world with far higher numbers of diplomats. Cut to 2020 and the numbers remain more or less the same. What Mr. Modi tried to do instead of using diplomats was to use his charm and ‘hug diplomacy’, for lack of a better term. Six years later, here we are: after 200 trips abroad, not a single trade agreement to show for it. I could go on, but both time and energy are not on my side, hence now switching to Nepal.

Nepal, once a friend, now an enemy?

Nepal had been a friend of India for 70-odd years; what changed in the last few years that turned it from friend to enemy? There have been two incidents in recent memory that changed the status quo. The first was the 2015 Nepal blockade. One could argue it either way, but the truth is that Nepal understood it was heavily dependent on India, hence, as any sovereign country would do in its own interest, it also started courting China for imports so there would be some balance.

The second one, though, is of our own making. On December 16, 2014, RBI allowed Nepali citizens to hold Indian cash up to INR 25,000/-. Then in 2016, when demonetization was announced, it said that people could exchange only up to INR 4,500/-, far below the limit shared above. And btw, before people start blaming just RBI for the decision, FEMA decisions are taken jointly by the finance ministry (FE) as well as the ministry of external affairs (MEA), so the announcement could not have been made without their knowledge. The lowering of the exchange limit during demonetization is what pushed Nepal further into Chinese hands, and this has been shared by a number of people in numerous articles on different websites. The Wire’s interview with the vice-chairman of Niti Aayog is pretty interesting. The argument that Nepal should give an estimate of how much old money is there falls flat when, in demonetization itself, it was thought that around 30-40% was black money and would not be returned, yet by RBI’s own admission 99.3% of the money was returned. Perhaps they should have consulted Prof. Arun Kumar of JNU, who has extensively written on and studied the topic, before taking that foolhardy step. It is the reason that an economy which was growing at 9% has been slowing ever since; I could give a dozen articles stating that, but for the moment just one will suffice. The slowing economy, and the sharp divisions between people based on outlook, religion or whatever, also encouraged China to attack us. This year is not good for India. The only thing I hope Indians and people all over the world do is maintain physical distance, wear masks, and somehow survive till the middle of next year without getting infected, by when probably most of the vaccine candidates will have been trialled, the results will be in, and we will have a ready vaccine. I do hope that, at least for once, ICMR shares data even after the vaccine is approved, whichever vaccine it is. Till later.

,

Planet DebianEnrico Zini: More notable people

René Carmille (8 January 1886 – 25 January 1945) was a French humanitarian, civil servant, and member of the French Resistance. During World War II, Carmille saved tens of thousands of Jews in Nazi-occupied France. In his capacity at the government's Demographics Department, Carmille sabotaged the Nazi census of France, saving tens of thousands of Jewish people from death camps.
Gino Strada (born Luigi Strada; 21 April 1948) is an Italian war surgeon and founder of Emergency, a UN-recognized international non-governmental organization.
Syndrome K was a fictitious disease invented in 1943, during the Second World War, by Adriano Ossicini together with Dr. Giovanni Borromeo, to save a number of Italian Jews from the Nazi-Fascist persecutions in Rome.[1][2][3][4]

Sam VargheseThe Indian Government cheated my late father of Rs 332,775

Back in 1976, the Indian Government, for whom my father, Ipe Samuel Varghese, worked in Colombo, cheated him of Rs 13,500 – the gratuity he was supposed to be paid when he was dismissed from the Indian High Commission (the equivalent of the embassy) in Colombo.

That sum, adjusted for inflation, works out to Rs 332,775 in today’s rupees.

But he was not paid this amount because the embassy said he had contravened rules by working a second job – something everyone at the embassy was doing, because what people were paid was basically a starvation wage. My father, however, had rubbed up against powerful interests in the embassy who were making money by taking bribes from poor Sri Lankan Tamils applying for Indian passports to return to India.

But let me start at the beginning. My father went to Sri Lanka (then known as Ceylon) in 1947, looking for employment after the war. He took up a job as a teacher, something which was his first love. But in 1956, when the Sri Lankan Government nationalised the teaching profession, he was left without a job.

It was then that he began working for the Indian High Commission which was located in Colpetty, and later in Fort. As he was a local recruit, he was not given diplomatic status. The one benefit was that our family did not need visas to stay in Sri Lanka – we were all Indian citizens – but only needed to obtain passports once we reached the age of 14.

As my father had six children, the pay from the High Commission was not enough to provide for the household. He would tutor some students, either at our house, or else at their houses. He was very strict about his work, and was unwilling to compromise on any rules.

There were numerous people who worked alongside him and they would occasionally take a bribe from here and there and push the case of some person or the other for a passport. The Tamils, who had gone to Sri Lanka to work on the tea plantations, were being repatriated under a pact negotiated by Sirima Bandaranaike, the Sri Lankan prime minister, and Lal Bahadur Shastri, her Indian counterpart. It was thus known as the Sirima-Shastri pact.

There was a lot of anti-Tamil sentiment brewing in Sri Lanka at the time, feelings that blew up into the civil war from 1983 onwards, a conflict that only ended in May 2009. Thus, many Tamils were anxious and wanted to do whatever it took to get an Indian passport.

And in this, they found many High Commission employees more than willing to accept bribes in order to push their cases. But they came up against a brick wall in my father. There was another gentleman who was an impediment too, a man named Navamoni. The others used to call him Koranga Moonji Dorai – meaning monkey face man – as he was a wizened old man. He would lose his temper and shout at people when they tried to mollify him with this or that to push their cases.

Thus, it was only a matter of time before some of my father’s colleagues went to the higher-ups and complained that he was earning money outside his High Commission job. They all were as well, but nobody had rubbed up against the powers-that-be. By then, due to his competence, my father had been put in charge of the passport section, a very powerful post, because he could approve or turn down any application.

The men who wanted to make money through bribes found him a terrible obstacle. One day in May, when my mother called up the High Commission, she was told that my father no longer worked there. Shocked, she waited until he came home to find out the truth. We had no telephone at home.

The family was not in the best financial position at the time. We had a few weeks to return to India as we had been staying in Sri Lanka on the strength of my father’s employment. And then came the biggest shock: the money my father had worked for all those 20 years was denied to him.

We came back to India by train and ferry; we could not afford to fly back. It was a miserable journey and for many years after that we suffered financial hardship because we had no money to tide us over that period.

Many years later, after I migrated to Australia, when I went to the Indian Consulate in Coburg, a suburb of Melbourne, to get a new passport, I happened to speak to the consul and asked him what I should do with my old passport. He made my blood boil by telling me that it was my patriotic duty to send it by post to the Indian embassy in Canberra. I told him that I owed India nothing, considering the manner in which it had treated my father. And I added that if the Indian authorities wanted my old passport, then they could damn well pay for the postage. He was not happy with my reply.

India is the only country in the world which will not restore a person’s citizenship if he asks for it in his later years for sentimental reasons, just so that he can die in the land of his birth. India is also the only country that insists its own former citizens obtain a visa to enter what is their own homeland. Money is not the only thing for the Indian Government; it is everything.

Every other country will restore a person’s citizenship in their latter years if they ask for it for sentimental reasons. Not India.

Planet DebianRuss Allbery: PGP::Sign 1.01

This is mostly a test-suite fix for my Perl module to automate creation and verification of detached signatures.

The 1.00 release of PGP::Sign added support for GnuPG v2 and changed the default to assume that gpg is GnuPG v2, but this isn't the case on some older operating systems (particularly non-Linux ones). That in turn caused chaos for automated testing.

This release fixes the test suite to detect when gpg is GnuPG v1 and run the test suite accordingly, trying to exercise as much of the module as possible in that case. It also fixes a few issues found by Perl::Critic::Freenode.
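The shape of that version probe is easy to sketch. The following is an illustration in Python rather than the module's actual Perl, and `gnupg_major_version` is a hypothetical helper, not part of PGP::Sign; it simply parses the first line that `gpg --version` prints:

```python
import re

def gnupg_major_version(version_line: str) -> int:
    """Extract the major version from the first line of `gpg --version`.

    Hypothetical helper: the real PGP::Sign test suite does its own
    detection in Perl, but the idea is the same.
    """
    match = re.match(r"gpg \(GnuPG.*\) (\d+)\.", version_line)
    if not match:
        raise ValueError(f"unrecognized gpg version line: {version_line!r}")
    return int(match.group(1))

# GnuPG v1 and v2 print slightly different banners; both parse the same way.
print(gnupg_major_version("gpg (GnuPG) 1.4.23"))  # 1
print(gnupg_major_version("gpg (GnuPG) 2.2.19"))  # 2
```

With the major version in hand, a test suite can skip the v2-only code paths on systems where `gpg` is still GnuPG v1 while exercising everything else.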

You can get the latest release from CPAN or from the PGP::Sign distribution page.

,

Planet DebianChris Lamb: The comedy is over

By now, everyone must have seen the versions of comedy shows with the laugh track edited out. The removal of the laughter doesn't just reveal the artificial nature of television and how it conscripts the viewer into laughing along; by subverting key conversational conventions, it reveals some of the myriad and subtle ways humans communicate with one another:

Although the show's conversation is ostensibly between two people, the viewer comprises a silent third actor through whom they, and therefore we, are meant to laugh along. Then, when this third character is forcibly muted, the viewer not only has to endure the stilted gaps, they also sense an uncanny loss of familiarity by losing their 'own' part in the script.

A similar phenomenon can be seen in other art forms. In Garfield Minus Garfield, the forced negative spaces that these pauses introduce are discomfiting, almost to the level of performance art:

But when the technique is applied to other TV shows such as The Big Bang Theory, it is unsettling in entirely different ways, exposing the dysfunctional relationships and the adorkable misogyny at the heart of the show:

Once you start to look for it, the ur-elements of the audience, response and timing in the way we communicate are everywhere, from the gaps we leave so that others instinctively know when we have finished speaking, to the myriad of ways you can edit a film. These components are always present; it is only when one of them is taken away that they become more apparent. Today, the small delays added by videoconferencing add an uncanny awkwardness to many of our everyday interactions too. It is said that "comedy is tragedy plus timing", so it is unsurprising that Zoom's undermining of timing leads, by this simple calculus of human interactions, to feelings of... tragedy.


§


Leaving aside the usual comments about Pavlovian conditioning and the shows that are the exceptions, complaints against canned laughter are the domain of the pub bore. I will therefore only add two brief remarks. First, rather than being cynically added to artificially inflate the lack of 'real' comedy, laugh tracks were initially added to replicate the live audience of existing shows. In other words, without a laugh track, these new shows might have ironically appeared almost as eerie as the fan edits cited above are today.

Secondly, although laugh tracks are described as "false", this is not entirely correct. After all, someone did actually laugh, even if it was for an entirely different joke. In his Simulacra and Simulation, cultural theorist Jean Baudrillard might have poetically identified canned laughter as a "reflection of a profound reality", rather than an outright falsehood. One day, when this laughter becomes entirely algorithmically generated, Baudrillard would describe it as "an order of sorcery", placing it metaphysically on the same level as the entirely pumpkin-free Pumpkin Spiced Latte.


§


For a variety of reasons I recently decided to try interacting with various social media platforms in new ways. One way of loosening my addiction to this pornography of the amygdala was to hide the number of replies, 'likes' and related numbers:

The effect of installing this extension was immediate. I caught my eyes darting to where the numbers had been and realised I had been subconsciously looking for the input — and perhaps even the outright validation — of the masses. To be sure, these numbers can be relevant and sometimes useful, but they do implicitly involve delegating part of your responsibility of thinking for yourself to the vox populi, or the Greek chorus of the 21st century.

Like many of you reading this, I am sure I told myself that the number of 'likes' has no bearing on whether I should agree with something, but hiding the numbers reveals much of this might have been a convenient fiction; as an entire century of discoveries in behavioural economics has demonstrated, all the pleasingly-satisfying arguments for rational free-market economics stand no chance against our inherent buggy mammalian brains.


§


Tying a few things together, when attempting to doomscroll through social media without these numbers, I realised that social media without the scorecard of engagement is almost exactly like watching these shows without the laugh track.

Without the number of 'retweets', the lazy prompts to remind you exactly when, how and for how much to respond are removed, and replaced with the same stilted silences of those edited scenes from Friends. At times, the existential loneliness of Garfield Minus Garfield creeps in too, and there is more than enough of the dysfunctional, validation-seeking and parasocial 'conversations' of The Big Bang Theory. Most of all, the whole exercise permits a certain level of detached, critical analysis, allowing one to observe that the platforms often feel like a pre-written script with your 'friends' cast as actors, all perpetuated on the heady fumes of rows INSERT-ed into a database on the other side of the world.

I'm not quite sure how this will affect my usage of the platforms, and any time spent away from these sites may mean fewer online connections at a time when we all need them the most. But as Karal Marling, professor at the University of Minnesota, wrote about artificial audiences: "Let me be the laugh track."

Planet DebianAndrew Cater: Debian Stretch release 9.13 - testing of images now complete 202007181950 or so

And we're getting a lot closer. Last edits to the wiki for the testing page. Tests marked as passed. There are a bunch of architectures where the media images are still to be built and the source media will still need to be written as well.

It's been a really good (and fairly long) day. I'm very glad indeed that I'm not staying up until the bitter end. It's always fun - I'm also very glad that, as it panned out, we didn't end up also doing the next Buster point release this weekend - it would have been a massive workload.

So Stretch passes to LTS, Jessie has passed to ELTS - and we go onwards and upwards to the next point release - and then, in due course, to Bullseye when it's ready (probably the first quarter of next year?).

As ever, thanks due to all involved - the folks behind the scenes who maintain cdimage.debian.org, to Sledge, RattusRattus, Isy and schweer. Done for about a fortnight or so until the next Buster release.

I will try to blog on other things, honest :)

Planet DebianAndrew Cater: Debian Stretch 9.13 release - blog post 2 - testing of basic .iso images ongoing as at 202007181655

The last of the tests for the basic .iso images are finishing. Live image testing is starting. Lots of help from RattusRattus, Isy, Sledge and schweer besides myself. New this time round is an experiment in videoconference linking, which has made this whole experience much more human in lockdown - and means I'm missing Cambridge.

Two questions have come up as we've been going through. There are images made for Intel-based Macs that were first made for Jessie or so; there are also images for S390X. These can't be tested because none of the testers have the hardware. Nobody has yet recorded issues with them, but it could be that nobody has used them recently or reported problems.

If nobody is bothered with these: they are prime candidates for removal prior to Bullseye.

Planet DebianDirk Eddelbuettel: tint 0.1.3: Fixes for html mode, new demo

A new version 0.1.3 of the tint package arrived at CRAN today. It corrects some features for html output, notably margin notes and references. It also contains a new example for inline references.

The full list of changes is below.

Changes in tint version 0.1.3 (2020-07-18)

  • A new minimal demo was added showing inline references (Dirk addressing #42).

  • Code for margin notes and reference in html mode was updated with thanks to tufte (Dirk in #43 and #44 addressing #40).

  • The README.md was updated with a new 'See Also' section and a new badge.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: drat 0.1.8: Minor test fix


A new version of drat arrived on CRAN today. This is a follow-up release to 0.1.7 from a week ago. It contains a quick follow-up by Felix Ernst to correct one of the tests which misbehaved under the old release of R still being tested at CRAN.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

As your mother told you: Friends don’t let friends install random git commit snapshots. Rolled-up releases it is. drat is easy to use, documented by five vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.8 (2020-07-18)

  • The archive pruning test code was corrected for r-oldrel (Felix Ernst in #105 fixing #104).

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRitesh Raj Sarraf: Laptop Mode Tools 1.74


Laptop Mode Tools version 1.74 has been released. This release includes important bug fixes, some default settings updated to match current driver support in Linux, and support for devices with nouveau-based NVIDIA cards.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

1.74 - Sat Jul 18 19:10:40 IST 2020

* With 4.15+ kernels, Linux Intel SATA has a better link power
  saving policy, med_power_with_dipm, which should be the recommended
  one to use
* Disable defaults for syslog logging
* Initialize LM_VERBOSE with default to disabled
* Merge pull request #157 from rickysarraf/nouveau
* Add power saving module for nouveau cards
* Disable ethernet module by default
* Add board-specific folder and documentation
* Add execute bit on module radeon-dpm
* Drop unlock because there is no lock acquired

Resources

What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

Planet DebianAndrew Cater: Debian "Stretch" 9.13 release preparations ongoing

Just checking in. Debian "Jessie" == oldoldstable == Debian 8 was the previous Debian Long Term Support release. Debian LTS seeks to provide support for Debian releases for five years. LTS support for Jessie ended on 30th June 2020.

A limited subset of Jessie will now move to ELTS - Extended Long Term Support and another two years projected support.

Neither LTS nor ELTS is supported any longer by the main Debian folks: instead, they are supported on a commercial basis by a group of Debian volunteers and companies, coordinated by a company led by Raphael Hertzog.

Debian releases are fully supported by the Debian project for two years after the release of the next version.  Today is the final release of Stretch by Debian to incorporate security fixes and so on up to the handover to LTS.

If you are currently running Stretch, you do not need the new CD images. Apt / apt-get update will supply you with the updates up until today. Hereafter, Stretch will be supported only as LTS - see LTS


Planet DebianDavid Bremner: git-annex and ikiwiki, not as hard as I expected

Background

So apparently there's this pandemic thing, which means I'm teaching "Alternate Delivery" courses now. These are just like online courses, except possibly more synchronous, definitely less polished, and the tuition money doesn't go to the College of Extended Learning. I figure I'll need to manage and share videos, and our learning management system, in the immortal words of Marie Kondo, does not bring me joy. This has caused me to revisit the problem of sharing large files in an ikiwiki based site (like the one you are reading).

My goto solution for large file management is git-annex. The last time I looked at this (a decade ago or so?), I was blocked by git-annex using symlinks and ikiwiki ignoring them for security related reasons. Since then two things changed which made things relatively easy.

  1. I started using the rsync_command ikiwiki option to deploy my site.

  2. git-annex went through several design iterations for allowing non-symlink access to large files.

TL;DR

In my ikiwiki config

    # attempt to hardlink source files? (optimisation for large files)
    hardlink => 1,

In my ikiwiki git repo

$ git annex init
$ git annex add foo.jpg
$ git commit -m 'add big photo'
$ git annex adjust --unlock                 # look ikiwiki, no symlinks
$ ikiwiki --setup ~/.config/ikiwiki/client  # rebuild my local copy, for review
$ ikiwiki --setup /home/bremner/.config/ikiwiki/rsync.setup --refresh  # deploy

You can see the result at photo

Kevin RuddSMH: Stimulus Opportunity Knocks for Climate Action

By Kevin Rudd and Patrick Suckling

As the International Monetary Fund recently underlined in sharply revising down global growth prospects, recovering from the biggest peacetime shock to the global economy since the Great Depression will be a long haul.

There is a global imperative to put in place the strongest, most durable economic recovery. This is not a time for governments to retreat. Recovery will require massive and sustained support.

At the same time, spending decisions by governments now will shape our economic future for decades to come. In other words, we have a once-in-a-generation opportunity and can’t blow it.

But it’s looking like we might. This is because too few stimulus packages globally are reaping the double-dividend of both investing in growth and jobs, and in the transition to low emissions, more climate-resilient economies. And in Australia, this means we risk lagging even further behind the rest of the world as a result.

As Australia’s summer of hell demonstrated, climate change is only getting worse. It remains the greatest threat to our future welfare and economic prosperity. And while the world has legitimately been preoccupied with COVID-19, few have noticed that this year is on track to be the warmest in recorded history. Perhaps even fewer still have also made the connection between climate and biodiversity habitat loss and the outbreak of infectious diseases.

Stimulus decisions that do not address this climate threat therefore don’t just sell us short; they sell us out. And they cut against the grain of the global economy. This is the irreducible logic that flows from the 2015 Paris Agreement to which the world – including Australia – signed up.

Unfortunately, as things stand today, many of the biggest stimulus efforts around the world are in danger of failing this logic.

For example, the US economic recovery is heavily focused on high emitting industries. The same is true in China, India, Japan and the large South-East Asian economies.

In fact, Beijing is approving plans for new coal-fired power plants at the fastest rate since 2015. And whether these plants are now actually built by China’s regional and provincial governments is increasingly becoming the global bellwether for whether we will emerge from this crisis better or worse off in the global fight against climate change.

And for our own part, Australia’s COVID Recovery Commission has placed limited emphasis on renewables despite advances in energy storage technologies and plummeting costs.

But as we know from our experience in the Global Financial Crisis a decade ago, it is entirely possible to design an economic recovery that is also good for the planet. This means investing in clean energy, energy efficiency systems, new transport systems, more sustainable homes and buildings and improved agricultural production, water and waste management. In fact, as McKinsey recently found, government spending on renewable energy technologies creates five more jobs per million dollars than spending on fossil fuels.

Despite these cautionary tales, there are thankfully also bright spots.

Take the European Union and its massive 750 billion Euro stimulus package. It will invest heavily in areas like energy efficiency, turbocharging renewable energy, accelerating hydrogen technologies, rolling out clean transport and promoting the circular economy.

To be fair, China is also emphasising new infrastructure like electric transport in its US$500 billion stimulus package. India is doubling down on its world-leading renewable energy investments. Indonesia has announced a major investment in solar energy. And Japan and South Korea are now announcing climate transition spending. But whether these are just bright spots amongst a dark haze of pollution, or genuinely light the way is the key question that confronts these economies.

In Australia, the government has confirmed significant investment in pumped hydro-power for “Snowy 2.0.” The government has also indicated acceleration of important projects such as the Marinus Link to ensure more renewable energy from Tasmania for the mainland. But much more is now needed.

An obvious starting point could be a nation-building stimulus investment around our decrepit energy system. By now the federal and state governments have a much stronger grasp of what we need for success, encouragingly evident in the recent $2 billion Commonwealth-NSW government package for better access, security and affordability.

Turbocharging this with a stimulus package for more renewable energy and storage of all sorts (including hydrogen), accompanying extension and stabilisation technologies for our electricity grid, and investment in dramatically improving energy efficiency would – literally and figuratively – power our economy forward.

In the aftermath of our drought and bushfires, another obvious area for nation-building investment is our land sector. Farm productivity can be dramatically improved by precision agriculture and regenerative farming technologies while building resilience to drought. New sources of revenue for farmers can be created through soil carbon and forest carbon farming – with carbon trading from these activities internationally set to be worth hundreds of billions of dollars over the coming decade.

Importantly, the Australian business community is not just calling for policy certainty, but actively ushering in change themselves. The Australian Industry Group, for instance, has called for a stronger climate-focused recovery. And in recent days, our largest private energy generator, AGL, has announced a significant strengthening of its commitment to climate transition, linking performance pay to progress in the company’s goal of achieving net-zero emissions by 2050 – a goal that BHP, Qantas and every Australian state and territory have signed up to. HESTA has also announced it will be the first major Australian superannuation fund to align its investment portfolio to this end as well.

These sorts of decisions are being replicated at a growing rate by companies around the world, and show that business is leading; increasingly it is time for governments – including our own – to do the same. Whether we can use this crisis as an opportunity to emerge in a better place to tackle other global challenges remains to be seen, but it rests on many of the decisions that will continue to be taken in the months to come.

Kevin Rudd is a former Prime Minister of Australia and now President of the Asia Society Policy Institute in New York. 

Patrick Suckling is a Senior Fellow at the Asia Society Policy Institute; Senior Partner at Pollination (pollinationgroup.com), a specialist climate investment and advisory firm; and was Australia’s Ambassador for the Environment.

 

First published in the Sydney Morning Herald and The Age on 18 February 2020.

The post SMH: Stimulus Opportunity Knocks for Climate Action appeared first on Kevin Rudd.

Planet DebianDima Kogan: Converting images while extracting a tarball

Last week at the lab I received a data dump: a gzip-compressed tarball with lots of images in it. The images are all uncompressed .pgm, with the whole tarball weighing in at ~ 1TB. I tried to extract it, and after chugging all day, it ran out of disk space. Added more disk, tried again: out of space again. Just getting a listing of the archive contents (tar tvfz) took something like 8 hours.

Clearly this is unreasonable. I made an executive decision to use .jpg files instead: I'd take the small image quality hit for the massive gains in storage efficiency. But the tarball has .pgm and just extracting the thing is challenging. So I'm now extracting the archive, and converting all the .pgm images to .jpg as soon as they hit disk. How? Glad you asked!

I'm running two parallel terminal sessions (I'm using screen, but you can do whatever you like).

Session 1

< archive.tar.gz unpigz -p20 | tar xv

Here I'm just extracting the archive to disk normally, using unpigz instead of tar's built-in gzip decompression to parallelize the decompression.

Session 2

inotifywait -r PATH -e close_write -m | mawk -Winteractive '/pgm$/ { print $1$3 }' | parallel -v -j10 'convert {} -quality 96 {.}.jpg && rm {}'

This is the secret sauce. I'm using inotifywait to tell me when any file is closed for writing in a subdirectory of PATH. Then I mawk it to only tell me when .pgm files are done being written, then I convert them to .jpg, and delete the .pgm when that's done. I'm using GNU Parallel to parallelize the image conversion. Otherwise the image conversion doesn't keep up.
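For illustration only, the convert-in-parallel step can be sketched in Python, with a thread pool standing in for GNU Parallel. The "conversion" below is a stub that writes a placeholder .jpg and deletes the .pgm; a real version would shell out to ImageMagick's convert instead:

```python
import concurrent.futures
import pathlib
import tempfile

def convert_and_remove(pgm: pathlib.Path) -> pathlib.Path:
    """Stand-in for: convert foo.pgm -quality 96 foo.jpg && rm foo.pgm"""
    jpg = pgm.with_suffix(".jpg")
    jpg.write_bytes(b"placeholder")  # a real version would invoke ImageMagick here
    pgm.unlink()                     # remove the original only after the output exists
    return jpg

# Demo on a throwaway directory full of fake .pgm files.
workdir = pathlib.Path(tempfile.mkdtemp())
for i in range(5):
    (workdir / f"frame{i}.pgm").write_bytes(b"P5 1 1 255 \x00")

# Ten workers, mirroring -j10 in the parallel invocation.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(convert_and_remove, sorted(workdir.glob("*.pgm"))))

print(len(results), "converted,", len(list(workdir.glob('*.pgm'))), ".pgm files left")
```

The key property is the same as in the shell pipeline: the source file is only deleted once its replacement has been written, so an interrupted run loses nothing.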

This is going to take at least all day, but I'm reasonably confident that it will actually finish successfully, and I can then actually do stuff with the data.

Planet DebianAbhijith PA: Workstation setup


Hello,

Recently I’ve seen a lot of people sharing their home office setups, so I thought why don’t I do something similar. Not to beat FOMO, but in the future when I revisit this blog, it will be lovely to see that I had some cool stuff.

There are people who went deep down in the ocean to lay cables for me to have a remote job and I am thankful to them.

Being remote, my home is my office. On my work table I have a Samsung R439 laptop; I’ve blogged about it earlier. A new addition is another 4GB of RAM, for a total of 6GB, and a 120GB SSD. I run Debian testing on it. The laptop sits on a stand, with a Dell MS116 external mouse always connected to it. I also use an external keyboard from Fingers. The keys are very stiff, so I don’t recommend it to anyone; the only reasons I chose this keyboard are that it was within my budget and has a backlight, which is what I needed most.

I have a Micromax MM215FH76 21-inch monitor as my secondary display, stacked on a couple of old books to match the height of the laptop stand. Everything is OK with this monitor except that it doesn’t have an HDMI port and its stand is very weak. I use i3wm, and this small script helps me manage my monitor arrangement.

# samsung r439
xrandr --output LVDS1 --primary --mode 1366x768 --pos 1920x312 --rotate normal --output DP1 --off --output HDMI1 --off --output VGA1 --mode 1920x1080 --pos 0x0 --rotate normal --output VIRTUAL1 --off
# thinkpad t430s
#xrandr --output LVDS1 --primary --mode 1600x900 --pos 1920x180 --rotate normal --output DP1 --off --output DP2 --off --output DP3 --off --output HDMI1 --off --output HDMI2 --off --output HDMI3 --off --output VGA1 --mode 1920x1080 --pos 0x0 --rotate normal --output VIRTUAL1 --off
i3-msg workspace 2, move workspace to left
i3-msg workspace 4, move workspace to left
i3-msg workspace 6, move workspace to left

I also have another 19-inch ViewSonic monitor, but it started to show some lines and unpleasant colours, so it has moved back to the shelf.

I have an Orange Pi Zero Plus 2 running Armbian, which serves as my Emby media server. I don’t own a webcam or a quality headset at the moment; I have boAt and Mi headphones. My laptop’s built-in webcam is horrible, so for my video conferencing needs I use the Jitsi app on my mobile device.

Planet DebianReproducible Builds (diffoscope): diffoscope 152 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 152. This version includes the following changes:

[ Chris Lamb ]

* Bug fixes:

  - Don't require zipnote(1) to determine differences in a .zip file as we
    can use libarchive directly.

* Reporting improvements:

  - Don't emit "javap not found in path" if it is available in the path but
    it did not result in any actual difference.
  - Fix "... not available in path" messages when looking for Java
    decompilers; we were using the Python class name (eg. "<class
    'diffoscope.comparators.java.Javap'>") over the actual command we looked
    for (eg. "javap").

* Code improvements:

  - Replace some simple usages of str.format with f-strings.
  - Tidy inline imports in diffoscope.logging.
  - In the RData comparator, always explicitly return a None value in the
    failure cases as we return a non-None value in the "success" one.

[ Jean-Romain Garnier ]
* Improve output of side-by-side diffs, detecting added lines better.
  (MR: reproducible-builds/diffoscope!64)
* Allow passing file with list of arguments to ArgumentParser (eg.
  "diffoscope @args.txt"). (MR: reproducible-builds/diffoscope!62)
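The str.format to f-string change mentioned above is a purely mechanical rewrite, of the kind shown in this illustrative snippet (not actual diffoscope code):

```python
tool, version = "diffoscope", 152

# Before: positional str.format
old_style = "{} version {} released".format(tool, version)

# After: an equivalent, more readable f-string
new_style = f"{tool} version {version} released"

print(new_style)  # diffoscope version 152 released
assert old_style == new_style
```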

You can find out more by visiting the project homepage.


CryptogramFriday Squid Blogging: Squid Found on Provincetown Sandbar

Headline: "Dozens of squid found on Provincetown sandbar." Slow news day.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.900.2.0


Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 757 other packages on CRAN.

Conrad just released a new minor upstream version 9.900.2 of Armadillo which we packaged and tested as usual first as a ‘release candidate’ build and then as the release. As usual, logs from reverse-depends runs are in the rcpp-logs repo.

All changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.900.2.0 (2020-07-17)

  • Upgraded to Armadillo release 9.900.2 (Nocturnal Misbehaviour)

    • In sort(), fixes for inconsistencies between checks applied to matrix and vector expressions

    • In sort(), remove unnecessary copying when applied in-place to vectors

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramTwitter Hackers May Have Bribed an Insider

Motherboard is reporting that this week's Twitter hack involved a bribed insider. Twitter has denied it.

I have been taking press calls all day about this. And while I know everyone wants to speculate about the details of the hack, we just don't know -- and probably won't for a couple of weeks.

Dave HallIf You’re not Using YAML for CloudFormation Templates, You’re Doing it Wrong

In my last blog post, I promised a rant about using YAML for CloudFormation templates. Here it is. If you persevere to the end I’ll also show you how to convert your existing JSON based templates to YAML.

Many of the points I raise below don’t just apply to CloudFormation. They are general comments about why you should use YAML over JSON for configuration when you have a choice.

One criticism of YAML is its reliance on indentation. A lot of the code I write these days is Python, so indentation being significant is normal. Use a decent editor or IDE and this isn’t a problem. It doesn’t matter if you’re using JSON or YAML, you will want to validate and lint your files anyway. How else will you find that trailing comma in your JSON object?
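Any strict JSON parser will catch that trailing comma for you. As a quick illustration, Python's built-in json module rejects it outright:

```python
import json

valid = '{"Description": "my stack"}'
broken = '{"Description": "my stack",}'  # note the trailing comma

json.loads(valid)  # parses without complaint

try:
    json.loads(broken)
except json.JSONDecodeError as err:
    print("caught:", err.msg)  # the parser points straight at the stray comma
```

A proper linter does the same job earlier, in your editor or CI pipeline, before the template ever reaches CloudFormation.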

Now we’ve got that out of the way, let me try to convince you to use YAML.

As developers we are regularly told that we need to document our code. CloudFormation is Infrastructure as Code. If it is code, then we need to document it. That starts with the Description property at the top of the file. If you use JSON for your templates, that’s it, you have no other opportunity to document your templates. On the other hand, if you use YAML you can add inline comments. Anywhere you need a comment, drop in a hash # and your comment. Your team mates will thank you.

JSON templates don’t support multiline strings. These days many developers have 4K or ultra-wide monitors, but we don’t want a string that spans the full width of our 34” screen. Text becomes harder to read once you exceed that “90ish” character limit. With JSON your multiline string becomes "[90ish-characters]\n[another-90ish-characters]\n[and-so-on]". If you opt for YAML, you can use the greater than symbol (>) and then start your multiline string like so:

Description: >
  This is the first line of my Description
  and it continues on my second line
  and I'll finish it on my third line.

As you can see, it is much easier to work with multiline strings in YAML than in JSON.

“Folded blocks” like the one above are created using the greater than symbol (>), which replaces new lines with spaces. This allows you to format your text in a more readable way while still allowing a machine to use it as intended. If you want to preserve the new lines, use the pipe (|) to create a “literal block”. This is great for inline Lambda functions, where the code remains readable and maintainable.

  APIFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          import json
          import random


          def lambda_handler(event, context):
              return {"statusCode": 200, "body": json.dumps({"value": random.random()})}
      FunctionName: "GetRandom"
      Handler: "index.lambda_handler"
      MemorySize: 128
      Role: !GetAtt LambdaServiceRole.Arn
      Runtime: "python3.7"
      Timeout: 5

Both JSON and YAML require you to escape multibyte characters. That’s less of an issue with CloudFormation templates as generally you’re only using the ASCII character set.

In a YAML file you generally don’t need to quote your strings, but in JSON double quotes are used everywhere: keys, string values and so on. If your string contains a quote you need to escape it. The same goes for tabs, new lines, backslashes and so on. JSON based CloudFormation templates can be hard to read because of all the escaping. It also makes it harder to handcraft your JSON when your code is a long escaped string on a single line.
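You can watch the escaping pile up by encoding a small multiline string with Python's json module; any JSON serializer behaves the same way:

```python
import json

snippet = 'line one\nline "two" has quotes\nand a backslash: \\'
encoded = json.dumps(snippet)
print(encoded)

# Every newline, double quote and backslash has grown an escape; in a
# handcrafted template you would be typing all of these yourself.
assert "\\n" in encoded and '\\"' in encoded and "\\\\" in encoded
```

That single escaped line is exactly what an inline Lambda body or IAM policy looks like in a JSON template.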

Some configuration in CloudFormation can only be expressed as JSON. Step Functions and some of the AppSync objects in CloudFormation only allow inline JSON configuration. You can still use a YAML template, and working with these objects is easier if you do.

The JSON only configuration needs to be inlined in your template. If you’re using JSON you have to supply this as an escaped string, rather than as nested objects. If you’re using YAML you can inline it as a literal block. Both YAML and JSON templates support functions such as Sub being applied to these strings, but it is so much more readable with YAML. See this Step Function example lifted from the AWS documentation:

MyStateMachine:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    DefinitionString:
      !Sub |
        {
          "Comment": "A simple AWS Step Functions state machine that automates a call center support session.",
          "StartAt": "Open Case",
          "States": {
            "Open Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:open_case",
              "Next": "Assign Case"
            }, 
            "Assign Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:assign_case",
              "Next": "Work on Case"
            },
            "Work on Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:work_on_case",
              "Next": "Is Case Resolved"
            },
            "Is Case Resolved": {
                "Type" : "Choice",
                "Choices": [ 
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 1,
                    "Next": "Close Case"
                  },
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 0,
                    "Next": "Escalate Case"
                  }
              ]
            },
             "Close Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:close_case",
              "End": true
            },
            "Escalate Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:escalate_case",
              "Next": "Fail"
            },
            "Fail": {
              "Type": "Fail",
              "Cause": "Engage Tier 2 Support."    }   
          }
        }
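Conceptually, Sub just replaces each ${...} placeholder with the resolved pseudo-parameter or resource value. A rough Python sketch of that behaviour (this is not CloudFormation’s actual implementation, it ignores Sub’s ${!literal} escapes, and the region and account ID here are made-up placeholders):

```python
import re

def sub(template: str, values: dict) -> str:
    """Roughly what CloudFormation's !Sub does: replace each
    ${Name} placeholder with its resolved value."""
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: values[m.group(1)], template)

arn = "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:open_case"
print(sub(arn, {"AWS::Region": "us-east-1", "AWS::AccountId": "123456789012"}))
# arn:aws:lambda:us-east-1:123456789012:function:open_case
```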

If you’re feeling lazy you can use inline JSON for IAM policies that you’ve copied from elsewhere. It’s quicker than converting them to YAML.

YAML templates are smaller and more compact than the same configuration stored in a JSON based template. Smaller yet more readable is winning all round in my book.

If you’re still not convinced that you should use YAML for your CloudFormation templates, go read Amazon’s blog post from 2017 advocating the use of YAML based templates.

Amazon makes it easy to convert your existing templates from JSON to YAML. cfn-flip is a Python-based AWS Labs tool for converting CloudFormation templates between JSON and YAML. I will assume you’ve already installed cfn-flip. Once you have, converting your templates with some automated cleanups is just a command away:

cfn-flip --clean template.json template.yaml

git rm the old JSON file, git add the new one, then git commit and git push your changes. Now you’re all set for your new life using YAML-based CloudFormation templates.

If you want to learn more about YAML files in general, I recommend you check out Learn X in Y Minutes’ Guide to YAML. If you want to learn more about YAML-based CloudFormation templates, check out Amazon’s Guide to CloudFormation Templates.

LongNowLong Now partners with Avenues: The World School for year-long, online program on the future of invention

“The best way to predict the future is to invent it.” – Alan Kay

The Long Now Foundation has partnered with Avenues: The World School to offer a program on the past, present, and future of innovation. A fully online program for ages 17 and above, the Avenues Mastery Year is designed to equip aspiring inventors with the ability to: 

  • Conceive original ideas and translate those ideas into inventions through design and prototyping, 
  • Communicate the impact of the invention with an effective pitch deck and business plan, 
  • Ultimately file for and receive patent pending status with the United States Patent and Trademark Office. 

Applicants select a concentration in either Making and Design or Future Sustainability.

Participants will hack, reverse engineer, and re-invent a series of world-changing technologies such as the smartphone, bioplastics, and the photovoltaic cell, all while immersing themselves in curated readings about the origins and possible trajectories of great inventions. 

The Long Now Foundation will host monthly fireside chats for participants where special guests offer feedback, spark new ideas and insights, and share advice and wisdom. Confirmed guests include Kim Polese (Long Now Board Member), Alexander Rose (Long Now Executive Director and Board Member), Samo Burja (Long Now Research Fellow), Jason Crawford (Roots of Progress), and Nick Pinkston (Volition). Additional guests from the Long Now Board and community are being finalized over the coming weeks.

The goal of Avenues Mastery Year is to equip aspiring inventors with the technical skills and long-term perspective needed to envision and invent the future. Visit Avenues Mastery Year to learn more, or get in touch directly by writing to ama@avenues.org.

Worse Than FailureError'd: Not Applicable

"Why yes, I have always pictured myself as not applicable," Olivia T. wrote.

 

"Hey Amazon, now I'm no doctor, but you may need to reconsider your 'Choice' of Acetaminophen as a 'stool softener'," writes Peter.

 

Ivan K. wrote, "Initially, I balked at the price of my new broadband plan, but the speed is just so good that sometimes it's so fast that the reply packets arrive before I even send a request!"

 

"I wanted to check if a site was being slow and, well, I figured it was good time to go read a book," Tero P. writes.

 

Robin L. writes, "I just can't wait to try Edge!"

 

"Yeah, one car stays in the garage, the other is out there tailgating Starman," Keith wrote.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Krebs on SecurityWho’s Behind Wednesday’s Epic Twitter Hack?

Twitter was thrown into chaos on Wednesday after accounts for some of the world’s most recognizable public figures, executives and celebrities started tweeting out links to bitcoin scams. Twitter says the attack happened because someone tricked or coerced an employee into providing access to internal Twitter administrative tools. This post is an attempt to lay out some of the timeline of the attack, and to point to clues about who may have been behind it.

The first public signs of the intrusion came around 3 PM EDT, when the Twitter account for the cryptocurrency exchange Binance tweeted a message saying it had partnered with “CryptoForHealth” to give back 5000 bitcoin to the community, with a link where people could donate or send money.

Minutes after that, similar tweets went out from the accounts of other cryptocurrency exchanges, and from the Twitter accounts for democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

While it may sound ridiculous that anyone would be fooled into sending bitcoin in response to these tweets, an analysis of the BTC wallet promoted by many of the hacked Twitter profiles shows that over the past 24 hours the account has processed 383 transactions and received almost 13 bitcoin — or approximately USD $117,000.

Twitter issued a statement saying it detected “a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools. We know they used this access to take control of many highly-visible (including verified) accounts and Tweet on their behalf. We’re looking into what other malicious activity they may have conducted or information they may have accessed and will share more here as we have it.”

There are strong indications that this attack was perpetrated by individuals who’ve traditionally specialized in hijacking social media accounts via “SIM swapping,” an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

People within the SIM swapping community are obsessed with hijacking so-called “OG” social media accounts. Short for “original gangster,” OG accounts typically are those with short profile names (such as @B or @joe). Possession of these OG accounts confers a measure of status and perceived influence and wealth in SIM swapping circles, as such accounts can often fetch thousands of dollars when resold in the underground.

In the days leading up to Wednesday’s attack on Twitter, there were signs that some actors in the SIM swapping community were selling the ability to change an email address tied to any Twitter account. In a post on OGusers — a forum dedicated to account hijacking — a user named “Chaewon” advertised they could change email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any Twitter account.

“This is NOT a method, you will be given a full refund if for any reason you aren’t given the email/@, however if it is revered/suspended I will not be held accountable,” Chaewon wrote in their sales thread, which was titled “Pulling email for any Twitter/Taking Requests.”

Hours before any of the Twitter accounts for cryptocurrency platforms or public figures began blasting out bitcoin scams on Wednesday, the attackers appear to have focused their attention on hijacking a handful of OG accounts, including “@6.”

That Twitter account was formerly owned by Adrian Lamo — the now-deceased “homeless hacker” perhaps best known for breaking into the New York Times’s network and for reporting Chelsea Manning‘s theft of classified documents. @6 is now controlled by Lamo’s longtime friend, a security researcher and phone phreaker who asked to be identified in this story only by his Twitter nickname, “Lucky225.”

Lucky225 said that just before 2 p.m. EDT on Wednesday, he received a password reset confirmation code via Google Voice for the @6 Twitter account. Lucky said he’d previously disabled SMS notifications as a means of receiving multi-factor codes from Twitter, opting instead to have one-time codes generated by a mobile authentication app.

But because the attackers were able to change the email address tied to the @6 account and disable multi-factor authentication, the one-time authentication code was sent to both his Google Voice account and to the new email address added by the attackers.

“The way the attack worked was that within Twitter’s admin tools, apparently you can update the email address of any Twitter user, and it does this without sending any kind of notification to the user,” Lucky told KrebsOnSecurity. “So [the attackers] could avoid detection by updating the email address on the account first, and then turning off 2FA.”

Lucky said he hasn’t been able to review whether any tweets were sent from his account during the time it was hijacked because he still doesn’t have access to it (he has put together a breakdown of the entire episode at this Medium post).

But around the same time @6 was hijacked, another OG account – @B — was swiped. Someone then began tweeting out pictures of Twitter’s internal tools panel showing the @B account.

A screenshot of the hijacked OG Twitter account “@B,” shows the hijackers logged in to Twitter’s internal account tools interface.

Twitter responded by removing any tweets across its platform that included screenshots of its internal tools, and in some cases temporarily suspended the ability of those accounts to tweet further.

Another Twitter account — @shinji — also was tweeting out screenshots of Twitter’s internal tools. Minutes before Twitter terminated the @shinji account, it was seen publishing a tweet saying “follow @6,” referring to the account hijacked from Lucky225.

The account “@shinji” tweeting a screenshot of Twitter’s internal tools interface.

Cached copies of @Shinji’s tweets prior to Wednesday’s attack on Twitter are available here and here from the Internet Archive. Those caches show Shinji claims ownership of two OG accounts on Instagram — “j0e” and “dead.”

KrebsOnSecurity heard from a source who works in security at one of the largest U.S.-based mobile carriers, who said the “j0e” and “dead” Instagram accounts are tied to a notorious SIM swapper who goes by the nickname “PlugWalkJoe.” Investigators have been tracking PlugWalkJoe because he is thought to have been involved in multiple SIM swapping attacks over the years that preceded high-dollar bitcoin heists.

Archived copies of the @Shinji account on Twitter show one of Joe’s OG Instagram accounts, “Dead.”

Now look at the profile image in the other Archive.org index of the @shinji Twitter account (pictured below). It is the same image as the one included in the @Shinji screenshot above from Wednesday in which Joseph/@Shinji was tweeting out pictures of Twitter’s internal tools.

Image: Archive.org

This individual, the source said, was a key participant in a group of SIM swappers that adopted the nickname “ChucklingSquad,” and was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As Wired.com recounted, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

A tweet sent out from Twitter CEO Jack Dorsey’s account while it was hijacked shouted out to PlugWalkJoe and other Chuckling Squad members.

The mobile industry security source told KrebsOnSecurity that PlugWalkJoe in real life is a 21-year-old from Liverpool, U.K. named Joseph James O’Connor. The source said PlugWalkJoe is in Spain where he was attending a university until earlier this year. He added that PlugWalkJoe has been unable to return home on account of travel restrictions due to the COVID-19 pandemic.

The mobile industry source said PlugWalkJoe was the subject of an investigation in which a female investigator was hired to strike up a conversation with PlugWalkJoe and convince him to agree to a video chat. The source further explained that a video which they recorded of that chat showed a distinctive swimming pool in the background.

According to that same source, the pool pictured on PlugWalkJoe’s Instagram account (instagram.com/j0e) is the same one they saw in their video chat with him.

If PlugWalkJoe was in fact pivotal to this Twitter compromise, it’s perhaps fitting that he was identified in part via social engineering. Maybe we should all be grateful the perpetrators of this attack on Twitter did not set their sights on more ambitious aims, such as disrupting an election or the stock market, or attempting to start a war by issuing false, inflammatory tweets from world leaders.

Also, it seems clear that this Twitter hack could have let the attackers view the direct messages of anyone on Twitter, information that is difficult to put a price on but which nevertheless would be of great interest to a variety of parties, from nation states to corporate spies and blackmailers.

This is a fast-moving story. There were multiple people involved in the Twitter heist. Please stay tuned for further updates. KrebsOnSecurity would like to thank Unit 221B for their assistance in connecting some of the dots in this story.

Worse Than FailureCodeSOD: Because of the Implication

Even when you’re using TypeScript, you’re still bound by JavaScript’s type system. You’re also stuck with its object system, which means that each object is really just a dict, and there’s no guarantee that any object has any given key at runtime.

Madison sends us some TypeScript code that is, perhaps not strictly bad, in and of itself, though it certainly contains some badness. It is more of a symptom. It implies a WTF.

    private _filterEmptyValues(value: any): any {
        const filteredValue = {};
        Object.keys(value)
            .filter(key => {
                const v = value[key];

                if (v === null) {
                    return false;
                }
                if (v.von !== undefined || v.bis !== undefined) {
                    return (v.von !== null && v.von !== 'undefined' && v.von !== '') ||
                        (v.bis !== null && v.bis !== 'undefined' && v.bis !== '');
                }
                return (v !== 'undefined' && v !== '');

            }).forEach(key => {
            filteredValue[key] = value[key];
        });
        return filteredValue;
    }

At a guess, this code is meant to be used as part of prepping objects for being part of a request: clean out unused keys before sending or storing them. And as a core methodology, it’s not wrong, and it’s pretty similar to your standard StackOverflow solution to the problem. It’s just… forcing me to ask some questions.
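Stripped of the von/bis special case, the core methodology is just “drop keys whose values are empty-ish.” A minimal Python analogue, purely to illustrate the idea (and deliberately keeping the suspicious 'undefined'-string check from the original):

```python
def filter_empty_values(value: dict) -> dict:
    """Drop keys whose values are None, the empty string, or the
    literal string 'undefined' (mirroring the TypeScript above)."""
    return {k: v for k, v in value.items()
            if v is not None and v != '' and v != 'undefined'}

print(filter_empty_values(
    {'a': 1, 'b': None, 'c': '', 'd': 'undefined', 'e': 'x'}))
# {'a': 1, 'e': 'x'}
```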

Let’s trace through it. We start by doing an Object.keys to get all the fields on the object. We then filter to remove the “empty” ones.

First, if the value is null, that’s empty. That makes sense.

Then, if the value is an object which contains a von or bis property, we’ll do some more checks. This is a weird definition of “empty”, but fine. We’ll check that they’re both non-null, not an empty string, and not… 'undefined'.

Uh oh.

We then do a similar check on the value itself, to ensure it’s not an empty string, and not 'undefined'.

What this is telling me is that somewhere in processing, sometimes, the actual string “undefined” can be stored, and it’s meant to be treated as JavaScript’s type undefined. That probably shouldn’t be happening, and implies a WTF somewhere else.

Similarly, the von and bis check has to raise a few eyebrows. If an object contains these fields, these fields must contain a value to pass this check. Why? I have no idea.

In the end, this code isn’t the WTF itself, it’s all the questions that it raises that tell me the shape of the WTF. It’s like looking at a black hole: I can’t see the object itself, I can only see the effect it has on the space around it.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


CryptogramNSA on Securing VPNs

The NSA's Central Security Service -- that's the part that's supposed to work on defense -- has released two documents (a full and an abridged version) on securing virtual private networks. Some of it is basic, but it contains good information.

Maintaining a secure VPN tunnel can be complex and requires regular maintenance. To maintain a secure VPN, network administrators should perform the following tasks on a regular basis:

  • Reduce the VPN gateway attack surface
  • Verify that cryptographic algorithms are Committee on National Security Systems Policy (CNSSP) 15-compliant
  • Avoid using default VPN settings
  • Remove unused or non-compliant cryptography suites
  • Apply vendor-provided updates (i.e. patches) for VPN gateways and clients

Worse Than FailureCodeSOD: Dates by the Dozen

Before our regularly scheduled programming, Code & Supply, a developer community group we've collaborated with in the past, is running a salary survey, to gauge the state of the industry. More responses are always helpful, so I encourage you to take a few minutes and pitch in.

Cid was recently combing through an inherited Java codebase, and it predates Java 8. That’s a fancy way of saying “there were no good date builtins, just a mess of cruddy APIs”. That’s not to say that there weren’t date builtins prior to Java 8; they were just bad.

Bad, but better than this. Cid sent along a lot of code, so instead of going through it all, let’s get to some of the “highlights”. Much of this is stuff we’ve seen variations on before, but here it’s combined in ways that really elevate the badness. There are dozens of these methods, of which we’re only going to look at a sample.

Let’s start with the String getLocalDate() method, which attempts to construct a timestamp in the form yyyyMMdd. As you can already predict, it does a bunch of string munging to get there, with blocks like:

switch (calendar.get(Calendar.MONTH)){
      case Calendar.JANUARY:
        sb.append("01");
        break;
      case Calendar.FEBRUARY:
        sb.append("02");
        break;
      …
}

Plus, we get the added bonus of one of those delightful “how do I pad an integer out to two digits?” blocks:

if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
  sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
}
else {
  sb.append(calendar.get(Calendar.DAY_OF_MONTH));
}
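For contrast, the entire yyyyMMdd stamp (zero-padding included) is a one-liner in any sane date API; in Python, for example:

```python
from datetime import datetime

# Equivalent of the whole getLocalDate() method: zero-padded
# year/month/day in the local time zone.
stamp = datetime.now().strftime("%Y%m%d")
print(stamp)  # e.g. "20200718"
```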

Elsewhere, they expect a timestamp to be in the form yyyyMMddHHmmssZ, so they wrote a handy void checkTimestamp method. Wait, void you say? Shouldn’t it be boolean?

Well here’s the full signature:

public static void checkTimestamp(String timestamp, String name)
  throws IOException

Why return a boolean when you can throw an exception on bad input? Unless the bad input is a null, in which case:

if (timestamp == null) {
  return;
}

Nulls are valid timestamps, which is useful to know. We next get a lovely block of checking each character to ensure that they’re digits, and a second check to ensure that the last character is the letter Z, which turns out to be double work, since the very next step is:

int year = Integer.parseInt(timestamp.substring(0,4));
int month = Integer.parseInt(timestamp.substring(4,6));
int day = Integer.parseInt(timestamp.substring(6,8));
int hour = Integer.parseInt(timestamp.substring(8,10));
int minute = Integer.parseInt(timestamp.substring(10,12));
int second = Integer.parseInt(timestamp.substring(12,14));

Followed by a validation check for day and month:

if (day < 1) {
  throw new IOException(msg);
}
if ((month < 1) || (month > 12)) {
  throw new IOException(msg);
}
if (month == 2) {
  if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
    if (day > 29) {
      throw new IOException(msg);
    }
  }
  else {
    if (day > 28) {
      throw new IOException(msg);
  }
  }
}
if (month == 1 || month == 3 || month == 5 || month == 7
|| month == 8 || month == 10 || month == 12) {
  if (day > 31) {
    throw new IOException(msg);
  }
}
if (month == 4 || month == 6 || month == 9 || month == 11) {
  if (day > 30) {
    throw new IOException(msg);
  }
}

The upshot is they at least got the logic right.
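All of that validation, leap years included, comes for free from a date parser. A rough Python equivalent of checkTimestamp (one deliberate difference: unlike the original, strptime rejects hour 24):

```python
from datetime import datetime

def check_timestamp(timestamp, name="timestamp"):
    """Raise ValueError unless timestamp looks like yyyyMMddHHmmssZ.
    strptime rejects month 13, February 30, and so on for us."""
    if timestamp is None:
        return  # mirror the original's odd "null is valid" rule
    if len(timestamp) != 15 or not timestamp.endswith("Z"):
        raise ValueError(f"Wrong date or time. ({name}={timestamp!r})")
    datetime.strptime(timestamp[:-1], "%Y%m%d%H%M%S")

check_timestamp("20200229120000Z")   # fine: 2020 is a leap year
# check_timestamp("20190229120000Z") # would raise: 2019 is not
```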

What’s fun about this is that the original developer never once considered “maybe I need an intermediate data structure beside a string to manipulate dates”. Nope, we’re just gonna munge that string all day. And that is our entire plan for all date operations, which brings us to the real exciting part, where this transcends from “just regular old bad date code” into full on WTF territory.

Would you like to see how they handle adding units of time? Like days?

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

The only thing I can say is that here they realized that “hey, wait, maybe I can modularize” and figured out how to stuff their “how many days are in a month” logic into getDaysOfMonth, which you can see invoked above.

Beyond that, they manually handle carrying, and never once pause to think, “hey, maybe there’s a better way”.
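The better way exists in any modern date library: parse, add a delta, format. Cid’s fix used Java 8’s date APIs; here is the same idea sketched in Python, using the yyyyMMddHHmmssZ string shape from above:

```python
from datetime import datetime, timedelta

def addition_of_days(timestamp: str, interval: int) -> str:
    """Add `interval` days to a yyyyMMddHHmmssZ timestamp; the date
    library handles month lengths and leap years for us."""
    dt = datetime.strptime(timestamp[:-1], "%Y%m%d%H%M%S")
    return (dt + timedelta(days=interval)).strftime("%Y%m%d%H%M%S") + "Z"

print(addition_of_days("20200131000000Z", 1))  # 20200201000000Z
```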

And speaking of repeating code, guess what- there’s also a public static String additionOfSeconds(String timestamp, int intervall) method, too.

There are dozens of similar methods, Cid has only provided us a sample. Cid adds:

This particular developer didn’t trust in too-fine modularization and code reuse (DRY!). So for each of these dozens of methods, he implemented the date parsing/formatting algorithms again and again! And no, not just copy/paste; every time it is a real wheel-reinvention. The code blocks and the positions of individual code lines look different for every method.

Once Cid got too frustrated by this code, they went and reimplemented it in modern Java date APIs, shrinking the codebase by hundreds of lines.

The full blob of code Cid sent in follows, for your “enjoyment”:

public static String getLocalDate() {
  TimeZone tz = TimeZone.getDefault();
  GregorianCalendar calendar = new GregorianCalendar(tz);
  calendar.setTime(new Date());
  StringBuffer sb = new StringBuffer();
  sb.append(calendar.get(Calendar.YEAR));
  switch (calendar.get(Calendar.MONTH)){
    case Calendar.JANUARY:
      sb.append("01");
      break;
    case Calendar.FEBRUARY:
      sb.append("02");
      break;
    case Calendar.MARCH:
      sb.append("03");
      break;
    case Calendar.APRIL:
      sb.append("04");
      break;
    case Calendar.MAY:
      sb.append("05");
      break;
    case Calendar.JUNE:
      sb.append("06");
      break;
    case Calendar.JULY:
      sb.append("07");
      break;
    case Calendar.AUGUST:
      sb.append("08");
      break;
    case Calendar.SEPTEMBER:
      sb.append("09");
      break;
    case Calendar.OCTOBER:
      sb.append("10");
      break;
    case Calendar.NOVEMBER:
      sb.append("11");
      break;
    case Calendar.DECEMBER:
      sb.append("12");
      break;
  }
  if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
    sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
  }
  else {
    sb.append(calendar.get(Calendar.DAY_OF_MONTH));
  }
  return sb.toString();
}

public static void checkTimestamp(String timestamp, String name)
throws IOException {
  if (timestamp == null) {
    return;
  }
  String msg = new String(
      "Wrong date or time. (" + name + "=\"" + timestamp + "\")");
  int len = timestamp.length();
  if (len != 15) {
    throw new IOException(msg);
  }
  for (int i = 0; i < (len - 1); i++) {
    if (! Character.isDigit(timestamp.charAt(i))) {
      throw new IOException(msg);
    }
  }
  if (timestamp.charAt(len - 1) != 'Z') {
    throw new IOException(msg);
  }
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  if (day < 1) {
    throw new IOException(msg);
  }
  if ((month < 1) || (month > 12)) {
    throw new IOException(msg);
  }
  if (month == 2) {
    if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
      if (day > 29) {
        throw new IOException(msg);
      }
    }
    else {
      if (day > 28) {
        throw new IOException(msg);
    }
    }
  }
  if (month == 1 || month == 3 || month == 5 || month == 7
  || month == 8 || month == 10 || month == 12) {
    if (day > 31) {
      throw new IOException(msg);
    }
  }
  if (month == 4 || month == 6 || month == 9 || month == 11) {
    if (day > 30) {
      throw new IOException(msg);
    }
  }
  if ((hour < 0) || (hour > 24)) {
    throw new IOException(msg);
  }
  if ((minute < 0) || (minute > 59)) {
    throw new IOException(msg);
  }
  if ((second < 0) || (second > 59)) {
    throw new IOException(msg);
  }
}

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

public static String additionOfSeconds(String timestamp, int intervall) {
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  int new_second = (second + intervall) % 60;
  int minute_intervall = (second + intervall) / 60;
  int new_minute = (minute + minute_intervall) % 60;
  int hour_intervall = (minute + minute_intervall) / 60;
  int new_hour = (hour + hour_intervall) % 24;
  int day_intervall = (hour + hour_intervall) / 24;
  StringBuffer new_time = new StringBuffer();
  if (new_hour < 10) {
    new_time.append("0" + new_hour + "");
  }
  else {
    new_time.append("" + new_hour + "");
  }
  if (new_minute < 10) {
    new_time.append("0" + new_minute + "");
  }
  else {
    new_time.append("" + new_minute + "");
  }
  if (new_second < 10) {
    new_time.append("0" + new_second + "");
  }
  else {
    new_time.append("" + new_second + "");
  }
  if (day_intervall > 0) {
    return additionOfDays(timestamp.substring(0,8) + new_time.toString() + "Z", day_intervall);
  }
  else {
    return (timestamp.substring(0,8) + new_time.toString() + "Z");
  }
}

public static int getDaysOfMonth(int year, int month) {
  int lastDayOfMonth = 31;
  switch (month) {
    case 1: case 3: case 5: case 7: case 8: case 10: case 12:
      lastDayOfMonth = 31;
      break;
    case 2:
      if ((year % 4 == 0 && year % 100 != 0) || year %400 == 0) {
        lastDayOfMonth = 29;
      }
      else {
        lastDayOfMonth = 28;
      }
      break;
    case 4: case 6: case 9: case 11:
      lastDayOfMonth = 30;
      break;
  }
  return lastDayOfMonth;
}

CryptogramEFF's 30th Anniversary Livestream

It's the EFF's 30th birthday, and the organization is having a celebratory livestream today from 3:00 to 10:00 pm PDT.

There are a lot of interesting discussions and things. I am having a fireside chat at 4:10 pm PDT to talk about the Crypto Wars and more.

Stop by. And thank you for supporting EFF.

EDITED TO ADD: This event is over, but you can watch a recorded version on YouTube.


Krebs on Security‘Wormable’ Flaw Leads July Microsoft Patches

Microsoft today released updates to plug a whopping 123 security holes in Windows and related software, including fixes for a critical, “wormable” flaw in Windows Server versions that Microsoft says is likely to be exploited soon. While this particular weakness mainly affects enterprises, July’s care package from Redmond has a little something for everyone. So if you’re a Windows (ab)user, it’s time once again to back up and patch up (preferably in that order).

Top of the heap this month in terms of outright scariness is CVE-2020-1350, which concerns a remotely exploitable bug in more or less all versions of Windows Server that attackers could use to install malicious software simply by sending a specially crafted DNS request.

Microsoft said it is not aware of reports that anyone is exploiting the weakness (yet), but the flaw has been assigned a CVSS score of 10, which translates to “easy to attack” and “likely to be exploited.”

“We consider this to be a wormable vulnerability, meaning that it has the potential to spread via malware between vulnerable computers without user interaction,” Microsoft wrote in its documentation of CVE-2020-1350. “DNS is a foundational networking component and commonly installed on Domain Controllers, so a compromise could lead to significant service interruptions and the compromise of high level domain accounts.”

CVE-2020-1350 is just the latest worry for enterprise system administrators in charge of patching dangerous bugs in widely-used software. Over the past couple of weeks, fixes for flaws with high severity ratings have been released for a broad array of software products typically used by businesses, including Citrix, F5, Juniper, Oracle and SAP. This at a time when many organizations are already short-staffed and dealing with employees working remotely thanks to the COVID-19 pandemic.

The Windows Server vulnerability isn’t the only nasty one addressed this month that malware or malcontents can use to break into systems without any help from users. A full 17 other flaws fixed in this release tackle security weaknesses to which Microsoft assigned its most dire “critical” rating, including bugs in Office, Internet Exploder, SharePoint, Visual Studio, and Microsoft’s .NET Framework.

Some of the more eyebrow-raising critical bugs addressed this month include CVE-2020-1410, which according to Recorded Future concerns the Windows Address Book and could be exploited via a malicious vcard file. Then there’s CVE-2020-1421, which protects against potentially malicious .LNK files (think Stuxnet) that could be exploited via an infected removable drive or remote share. And we have the dynamic duo of CVE-2020-1435 and CVE-2020-1436, which involve problems with the way Windows handles images and fonts that could both be exploited to install malware just by getting a user to click a booby-trapped link or document.

Not to say flaws rated “important” as opposed to critical aren’t also a concern. Chief among those is CVE-2020-1463, a problem within Windows 10 and Server 2016 or later that was detailed publicly prior to this month’s Patch Tuesday.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for a particular Windows update to hose one’s system or prevent it from booting properly, and some updates even have been known to erase or corrupt files. Last month’s bundle of joy from Microsoft sent my Windows 10 system into a perpetual crash state. Thankfully, I was able to restore from a recent backup.

So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

Also, keep in mind that Windows 10 is set to apply patches on its own schedule, which means if you delay backing up you could be in for a wild ride. If you wish to ensure the operating system has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches whenever it sees fit, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips. Also, keep an eye on the AskWoody blog from Woody Leonhard, who keeps a reliable lookout for buggy Microsoft updates each month.

CryptogramEnigma Machine for Sale

A four-rotor Enigma machine -- with rotors -- is up for auction.

CryptogramHalf a Million IoT Passwords Leaked

It is amazing that this sort of thing can still happen:

...the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.

Telnet? Default passwords? In 2020?

We have a long way to go to secure the IoT.

EDITED TO ADD (7/14): Apologies, but I previously blogged this story in January.

Worse Than FailureRepresentative Line: An Exceptional Leader

IniTech’s IniTest division makes a number of hardware products, like a protocol analyzer which you can plug into a network and use to monitor data in transport. As you can imagine, it involves a fair bit of software, and it involves a fair bit of hardware. Since it’s a testing and debugging tool, reliability, accuracy, and stability are the watchwords of the day.

Which is why the software development process was overseen by Russel. Russel was the “Alpha Geek”, blessed by the C-level to make sure that the software was up to snuff. This led to some conflict (Russel had a bad habit of shoulder-surfing his fellow developers and telling them what to type) but otherwise worked very well. Foibles aside, Russel was technically competent, knew the problem domain well, and had a clean, precise, and readable coding style which all the other developers tried to imitate.

It was that last bit which got Ashleigh’s attention. Because, scattered throughout the entire C# codebase, there are exception handlers which look like this:

try
{
	// some code, doesn't matter what
	// ...
}
catch (Exception ex)
{
   ex = ex;
}

This isn’t the sort of thing which one developer did. Nearly everyone on the team had a commit like that, and when Ashleigh asked about it, she was told “It’s just a best practice. We’re following Russel’s lead. It’s for debugging.”

Ashleigh asked Russel about it, but he just grumbled and had no interest in talking about it beyond, “Just… do it if it makes sense to you, or ignore it. It’s not necessary.”

If it wasn’t necessary, why was it so common in the codebase? Why was everyone “following Russel’s lead”?

Ashleigh tracked down the original commit which started this pattern. It was made by Russel, but the exception handler had one tiny, important difference:

catch (Exception ex)
{
   ex = ex; //putting this here to set a breakpoint
}

Yes, this was just a bit of debugging code. It was never meant to be committed. Russel pushed it into the main history by accident, and the other developers saw it, and thought to themselves, “If Russel does it, it must be the right thing to do,” and started copying him.

By the time Russel noticed what was going on, it was too late. The standard had been set while he wasn’t looking, and whether it was ego or cowardice, Russel just could never get the team to follow his lead away from the pointless pattern.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

MEDebian PPC64EL Emulation

In my post on Debian S390X Emulation [1] I mentioned having problems booting a Debian PPC64EL kernel under QEMU. Giovanni commented that they had PPC64EL working and gave a link to their site with Debian QEMU images for various architectures [2]. I tried their image, which worked, then tried mine again, which also worked – it seems that a recent update in Debian/Unstable fixed the bug that made QEMU not work with the PPC64EL kernel.

Here are the instructions on how to do it.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/ppc64
mkfs.ext4 /vmstore/ppc64
mount -o loop /vmstore/ppc64 /mnt/tmp

Then visit the Debian Netinst page [3] to download the PPC64EL net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The qemu-system-ppc package has the program for emulating a full PPC64EL system. The qemu-user-static package has the program for emulating PPC64EL for a single program (i.e. a statically linked program or a chroot environment); you need this to run debootstrap. The following commands should be most of what you need.

apt install qemu-system-ppc qemu-user-static

update-binfmts --display

# qemu ppc64 needs exec stack to solve "Could not allocate dynamic translator buffer"
# so enable that on SE Linux systems
setsebool -P allow_execstack 1

debootstrap --foreign --arch=ppc64el --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

cat << END > /mnt/tmp/etc/apt/sources.list
deb http://mirror.internode.on.net/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

echo ppc64 > /mnt/tmp/etc/hostname

# /usr/bin/awk: error while loading shared libraries: cannot restore segment prot after reloc: Permission denied
# only needed for chroot
setsebool allow_execmod 1

chroot /mnt/tmp apt update
# why aren't they in the default install?
chroot /mnt/tmp apt install perl dialog
chroot /mnt/tmp apt dist-upgrade
chroot /mnt/tmp apt install bash-completion locales man-db openssh-server build-essential systemd-sysv ifupdown vim ca-certificates gnupg
# install kernel last because systemd install rebuilds initrd
chroot /mnt/tmp apt install linux-image-ppc64el
chroot /mnt/tmp dpkg-reconfigure locales
chroot /mnt/tmp passwd

cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

mkdir /mnt/tmp/root/.ssh
chmod 700 /mnt/tmp/root/.ssh
cp ~/.ssh/id_rsa.pub /mnt/tmp/root/.ssh/authorized_keys
chmod 600 /mnt/tmp/root/.ssh/authorized_keys

rm /mnt/tmp/vmlinux* /mnt/tmp/initrd*
mkdir /boot/ppc64
cp /mnt/tmp/boot/[vi]* /boot/ppc64

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Here is an example script for starting kvm. It can be run by any user that can read /etc/qemu/bridge.conf.

#!/bin/bash
set -e

KERN="-kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le"

# single network device, can have multiple
NET="-device e1000,netdev=net0,mac=02:02:00:00:01:04 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"

# random number generator for fast start of sshd etc
RNG="-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"

# I have lockdown because it does no harm now and is good for future kernels
# I enable SE Linux everywhere
KERNCMD="net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality"

kvm -drive format=raw,file=/vmstore/ppc64,if=virtio $RNG -nographic -m 1024 -smp 2 $KERN -curses -append "$KERNCMD" $NET


Krebs on SecurityBreached Data Indexer ‘Data Viper’ Hacked

Data Viper, a security startup that provides access to some 15 billion usernames, passwords and other information exposed in more than 8,000 website breaches, has itself been hacked and its user database posted online. The hackers also claim they are selling on the dark web roughly 2 billion records Data Viper collated from numerous breaches and data leaks, including data from several companies that likely either do not know they have been hacked or have not yet publicly disclosed an intrusion.

The apparent breach at St. Louis, Mo. based Data Viper offers a cautionary and twisted tale of what can happen when security researchers seeking to gather intelligence about illegal activity online get too close to their prey or lose sight of their purported mission. The incident also highlights the often murky area between what’s legal and ethical in combating cybercrime.

Data Viper is the brainchild of Vinny Troia, a security researcher who runs a cyber threat intelligence company called Night Lion Security. Since its inception in 2018, Data Viper has billed itself as a “threat intelligence platform designed to provide organizations, investigators and law enforcement with access to the largest collection of private hacker channels, pastes, forums and breached databases on the market.”

Many private companies sell access to such information to vetted clients — mainly law enforcement officials and anti-fraud experts working in security roles at major companies that can foot the bill for these often pricey services.

Data Viper has sought to differentiate itself by advertising “access to private and undisclosed breach data.” As KrebsOnSecurity noted in a 2018 story, Troia has acknowledged posing as a buyer or seller on various dark web forums as a way to acquire old and newly-hacked databases from other forum members.

But this approach may have backfired over the weekend, when someone posted to the deep web a link to an “e-zine” (electronic magazine) describing the Data Viper hack and linking to the Data Viper user base. The anonymous poster alleged he’d been inside Data Viper for months and had exfiltrated hundreds of gigabytes of breached data from the service without notice.

The intruder also linked to several dozen new sales threads on the dark web site Empire Market, where they advertise the sale of hundreds of millions of account details from dozens of leaked or hacked website databases that Data Viper allegedly acquired via trading with others on cybercrime forums.

An online post by the attackers who broke into Data Viper.

Some of the databases for sale tie back to known, publicly reported breaches. But others correspond to companies that do not appear to have disclosed a security incident. As such, KrebsOnSecurity is not naming most of those companies and is currently attempting to ascertain the validity of the claims.

KrebsOnSecurity did speak with Victor Ho, the CEO of Fivestars.com, a company that helps smaller firms run customer loyalty programs. The hackers claimed they are selling 44 million records taken from Fivestars last year. Ho said he was unaware of any data security incident and that no such event had been reported to his company, but that Fivestars is now investigating the claims. Ho allowed that the number of records mentioned in the dark web sales thread roughly matches the number of users his company had last year.

But on Aug. 3, 2019, Data Viper’s Twitter account casually noted, “FiveStars — 44m breached records added – incl Name, Email, DOB.” The post, buried among a flurry of similar statements about huge caches of breached personal information added to Data Viper, received hardly any attention and garnered just one retweet.

GNOSTIC PLAYERS, SHINY HUNTERS

Reached via Twitter, Troia acknowledged that his site had been hacked, but said the attackers only got access to the development server for Data Viper, and not the more critical production systems that power the service and which house his index of compromised credentials.

Troia said the people responsible for compromising his site are the same people who hacked the databases they are now selling on the dark web and claiming to have obtained exclusively from his service.

What’s more, Troia believes the attack was a preemptive strike in response to a keynote he’s giving in Boston this week: On June 29, Troia tweeted that he plans to use the speech to publicly expose the identities of the hackers, who he suspects are behind a large number of website break-ins over the years.

Hacked or leaked credentials are prized by cybercriminals engaged in “credential stuffing,” a rampant form of cybercrime that succeeds when people use the same passwords across multiple websites. Armed with a list of email addresses and passwords from a breached site, attackers will then automate login attempts using those same credentials at hundreds of other sites.

Password re-use becomes orders of magnitude more dangerous when website developers engage in this unsafe practice. Indeed, a January 2020 post on the Data Viper blog suggests credential stuffing is exactly how the group he plans to discuss in his upcoming talk perpetrated their website compromises.

In that post, Troia wrote that the hacker group, known variously as “Gnostic Players” and “Shiny Hunters,” plundered countless website databases using roughly the same method: Targeting developers using credential stuffing attacks to log into their GitHub accounts.

“While there, they would pillage the code repositories, looking for AWS keys and similar credentials that were checked into code repositories,” Troia wrote.

Troia said the intrusion into his service wasn’t the result of credential re-use, but instead because his developer accidentally left his credentials exposed in documents explaining how customers can use Data Viper’s application programming interface.

“I will say the irony of how they got in is absolutely amazing,” Troia said. “But all of this stuff they claim to be selling is [databases] they were already selling. All of this is from Gnostic players. None of it came from me. It’s all for show to try and discredit my report and my talk.”

Troia said he didn’t know how many of the databases Gnostic Players claimed to have obtained from his site were legitimate hacks or even public yet.

“As for public reporting on the databases, a lot of that will be in my report Wednesday,” he said. “All of my ‘reporting’ goes to the FBI.”

SMOKE AND MIRRORS

The e-zine produced by the Data Viper hackers claimed that Troia used many nicknames on various cybercrime forums, including the moniker “Exabyte” on OGUsers, a forum that’s been closely associated with account takeovers.

In a conversation with KrebsOnSecurity, Troia acknowledged that this Exabyte attribution was correct, noting that he was happy about the exposure because it further solidified his suspicions about who was responsible for hacking his site.

This is interesting because some of the hacked databases the intruders claimed to have acquired after compromising Data Viper correspond to discoveries credited to Troia in which companies inadvertently exposed tens of millions of user details by leaving them publicly accessible online at cloud services like Amazon’s EC2.

For example, in March 2019, Troia said he’d co-discovered a publicly accessible database containing 150 gigabytes of plaintext marketing data — including 763 million unique email addresses. The data had been exposed online by Verifications.io, an email validation firm.

On Oct. 12, 2019, a new user named Exabyte registered on RaidForums — a site dedicated to sharing hacked databases and tools to perpetrate credential stuffing attacks. That Exabyte account was registered less than two weeks after Troia created his Exabyte identity on OGUsers. The Exabyte on RaidForums posted on Dec. 26, 2019 that he was providing the community with something of a belated Christmas present: 200 million accounts leaked from Verifications.io.

“Verifications.io is finally here!” Exabyte enthused. “This release contains 69 of 70 of the original verifications.io databases, totaling 200+ million accounts.”

Exabyte’s offer of the Verifications.io database on RaidForums.

In May 2018, Troia was featured in Wired.com and many other publications after discovering that sales intelligence firm Apollo left 125 million email addresses and nine billion data points publicly exposed in a cloud service. As I reported in 2018, prior to that disclosure Troia had sought my help in identifying the source of the exposed data, which he’d initially and incorrectly concluded was exposed by LinkedIn.com. Rather, Apollo had scraped and collated the data from many different sites, including LinkedIn.

Then in August 2018, someone using the nickname “Soundcard” posted a sales thread to the now-defunct Kickass dark web forum offering the personal information of 212 million LinkedIn users in exchange for two bitcoin (then the equivalent of ~$12,000 USD). Incredibly, Troia had previously told me that he was the person behind that Soundcard identity on the Kickass forum.

Soundcard, a.k.a. Troia, offering to sell what he claimed was all of LinkedIn’s user data, on the Dark Web forum Kickass.

Asked about the Exabyte posts on RaidForums, Troia said he wasn’t the only one who had access to the Verifications.io data, and that the full scope of what’s been going on would become clearer soon.

“More than one person can have the same name ‘Exabyte,’” Troia said. “So much from both sides you are seeing is smoke and mirrors.”

Smoke and mirrors, indeed. It’s entirely possible this incident is an elaborate and cynical PR stunt by Troia to somehow spring a trap on the bad guys. Troia recently published a book on threat hunting, and on page 360 (PDF) he describes how he previously staged a hack against his own site and then bragged about the fake intrusion on cybercrime forums in a bid to gather information about specific cybercriminals who took the bait — the same people, by the way, he claims are behind the attack on his site.

MURKY WATERS

While the trading of hacked databases may not technically be illegal in the United States, it’s fair to say the U.S. Department of Justice (DOJ) takes a dim view of those who operate services marketed to cybercriminals.

In January 2020, U.S. authorities seized the domain of WeLeakInfo.com, an online service that for three years sold access to data hacked from other websites. Two men were arrested in connection with that seizure. In February 2017, the Justice Department took down LeakedSource, a service that operated similarly to WeLeakInfo.

The DOJ recently released guidance (PDF) to help threat intelligence companies avoid the risk of prosecution when gathering and purchasing data from illicit sources online. The guidelines suggest that some types of intelligence gathering — particularly exchanging ill-gotten information with others on crime forums as a way to gain access to other data or to increase one’s status on the forum — could be especially problematic.

“If a practitioner becomes an active member of a forum and exchanges information and communicates directly with other forum members, the practitioner can quickly become enmeshed in illegal conduct, if not careful,” reads the Feb. 2020 DOJ document.

The document continues:

“It may be easier for an undercover practitioner to extract information from sources on the forum who have learned to trust the practitioner’s persona, but developing trust and establishing bona fides as a fellow criminal may involve offering useful information, services, or tools that can be used to commit crimes.”

“Engaging in such activities may well result in violating federal criminal law. Whether a crime has occurred usually hinges on an individual’s actions and intent. A practitioner must avoid doing anything that furthers the criminal objectives of others on the forums. Even though the practitioner has no intention of committing a crime, assisting others engaged in criminal conduct can constitute the federal offense of aiding and abetting.”

“An individual may be found liable for aiding and abetting a federal offense if he or she takes an affirmative act — even an act that is lawful on its own — that is in furtherance of the crime and conducted with the intent of facilitating the crime’s commission.”

Cory DoctorowFull Employment

This week’s podcast is a reading of Full Employment, my latest Locus column. It’s a counter to the argument about automation-driven unemployment – namely, that we will have hundreds of years of full employment facing the climate emergency and remediating the damage it wreaks. From relocating all our coastal cities to replacing aviation routes with high-speed rail to the caring and public health work for hundreds of millions of survivors of plagues, floods and fires, we are in no danger of running out of work. The real question is: how will we mobilize people to do the work needed to save our species and the only known planet in the entire universe that can sustain it?

MP3

CryptogramA Peek into the Fake Review Marketplace

A personal account of someone who was paid to buy products on Amazon and leave fake reviews.

Fake reviews are one of the problems that everyone knows about, and no one knows what to do about -- so we all try to pretend it doesn't exist.

Kevin RuddGlobal TV: Sino-Canadian Relations

INTERVIEW VIDEO
GLOBAL TV CANADA
‘WEST BLOCK’
RECORDED 10 JULY 2020
BROADCAST 12 JULY 2020

The post Global TV: Sino-Canadian Relations appeared first on Kevin Rudd.

Worse Than FailureA Revolutionary Vocabulary

Changing the course of a large company is much like steering the Titanic: it's probably too late, it's going to end in tears, and for some reason there's going to be a spirited debate about the buoyancy and stability of the doors.

Shena works at Initech, which is already a gigantic, creaking organization on the verge of toppling over. Management recognizes the problems, and knows something must be done. They are not, however, particularly clear about what that something should actually be, so they handed the Project Management Office a budget, told them to bring in some consultants, and do something.

The PMO dutifully reviewed the list of trendy buzzwords in management magazines, evaluated their budget, and brought in a team of consultants to "Establish a culture of continuous process improvement" that would "implement Agile processes" and "break down silos" to ensure "high functioning teams that can successfully self-organize to meet institutional objectives on time and on budget" using "the best-in-class tools" to support the transition.

Any sort of organizational change is potentially scary, to at least some of the staff. No matter how toxic or dysfunctional an organization is, there's always someone who likes the status quo. There was a fair bit of resistance, but the consultants and PMO were empowered to deal with them, laying off the fortunate, or securing promotions to vaguely-defined make-work jobs for the deeply unlucky.

There were a handful of true believers, the sort of people who had landed in their boring corporate gig years before, and had spent their time gently suggesting that things could possibly be better, slightly. They saw the changes as an opportunity, at least until they met the reality of trying to actually commit to changes in an organization the size of Initech.

The real hazard, however, was the members of the Project Management Office who didn't actually care about Initech, their peers, or process change: they cared about securing their own little fiefdom of power. People like Debbie, who, before the consultants came, had created a series of "Project Checkpoint Documents". Each project was required to fill out the 8 core documents before any other work began, and Debbie was the one who reviewed them, which meant projects didn't progress without her say-so. Or Larry, who was a developer before moving into project management, and thus was in charge of the code review processes for the entire company, despite not having written anything in a language newer than COBOL85.

Seeing that the organizational changes would threaten their power, people like Debbie or Larry did the only thing they could do: they enthusiastically embraced the changes and labeled themselves the guardians of the revolution. They didn't need to actually do anything good, they didn't need to actually facilitate the changes, they just needed to show enthusiasm and look busy, and generate the appearance that they were absolutely critical to the success of the transition.

Debbie, specifically, got herself very involved in driving the adoption of Jira as their ticket tracking tool, instead of the hodge-podge of Microsoft Project, spreadsheets, emails, and home-grown ticketing systems. Since this involved changing the vocabulary they used to talk about projects, it meant Debbie could spend much of her time policing the language used to describe projects. She ran trainings to explain what an "Epic" or a "Story" was, and how to "rightsize stories so you can decompose them into actionable tasks". But everything was in flux, which meant the exact way Initech developers were meant to use Jira kept changing, almost on a daily basis.

Which is why Shena eventually received this email from the Project Management Office.

Teams,

As part of our process improvement efforts, we'll be making some changes to how we track work in JIRA. Epics are now to only be created by leadership. They will represent mission-level initiatives that we should all strive for. For all development work tracking, the following shall be the process going forward to account for the new organizational communication directive:

  • Treat Features as Epics
  • Treat Stories as Features
  • Treat Tasks as Stories
  • Treat Sub-tasks as Tasks
  • If you need Sub-tasks, create a spreadsheet to track them within your team.

Additionally, the following is now reflected in the status workflows and should be adhered to:

  • Features may not be deleted once created. Instead, use the Cancel functionality.
  • Cancelled tasks will be marked as Done
  • Done tasks should now be marked as Complete

As she read this glorious and transcendent piece of Newspeak, Shena couldn't help but wonder about her laid-off co-workers, and wonder if perhaps she shouldn't join them.



CryptogramFriday Squid Blogging: China Closing Its Squid Spawning Grounds

China is prohibiting squid fishing in two areas -- both in international waters -- for two seasons, to give squid time to recover and reproduce.

This is the first time China has voluntarily imposed a closed season on the high seas. Some experts regard it as an important step forward in China's management of distant-water fishing (DWF), and crucial for protecting the squid fishing industry. But others say the impact will be limited and that stronger oversight of fishing vessels is needed, or even a new fisheries management body specifically for squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


CryptogramBusiness Email Compromise (BEC) Criminal Ring

A criminal group called Cosmic Lynx seems to be based in Russia:

Dubbed Cosmic Lynx, the group has carried out more than 200 BEC campaigns since July 2019, according to researchers from the email security firm Agari, particularly targeting senior executives at large organizations and corporations in 46 countries. Cosmic Lynx specializes in topical, tailored scams related to mergers and acquisitions; the group typically requests hundreds of thousands or even millions of dollars as part of its hustles.

[...]

For example, rather than use free accounts, Cosmic Lynx will register strategic domain names for each BEC campaign to create more convincing email accounts. And the group knows how to shield these domains so they're harder to trace to the true owner. Cosmic Lynx also has a strong understanding of the email authentication protocol DMARC and does reconnaissance to assess its targets' specific system DMARC policies to most effectively circumvent them.

Cosmic Lynx also drafts unusually clean and credible-looking messages to deceive targets. The group will find a company that is about to complete an acquisition and contact one of its top executives posing as the CEO of the organization being bought. This phony CEO will then involve "external legal counsel" to facilitate the necessary payments. This is where Cosmic Lynx adds a second persona to give the process an air of legitimacy, typically impersonating a real lawyer from a well-regarded law firm in the United Kingdom. The fake lawyer will email the same executive that the "CEO" wrote to, often in a new email thread, and share logistics about completing the transaction. Unlike most BEC campaigns, in which the messages often have grammatical mistakes or awkward wording, Cosmic Lynx messages are almost always clean.

Sam VargheseRacism: Holding and Rainford-Brent do some plain speaking

Michael Anthony Holding, one of the feared West Indies pace bowlers from the 1970s and 1980s, bowled his best spell on 10 July, in front of the TV cameras.

Holding, in England to commentate on the Test series between England and the West Indies, took part in a roundtable on the Black Lives Matter protests which have been sweeping the world recently after an African-American man, George Floyd, was killed by a police officer in Minneapolis on May 25.

Holding speaks frankly. Very frankly. Along with former England cricketer Ebony Rainford-Brent, he spoke about the issues he had faced as a black man, the problems in cricket and how they could be resolved.

There was no bitterness in his voice, just audible pain and sadness. At one point, he came close to breaking down and later told one of the hosts that the memory of his mother being ostracised by her own family because she had married a very dark man had led to this.

Holding spoke of the need for education, to wipe out the centuries of conditioning that have resulted in black people knowing that white lives matter, while white people do not really care about black lives. He cited studies from American universities like Yale to make his points.

And much as white people will dismiss whatever he says, one could not escape the fact that here was a 66-year-old who had seen it all, and then some, calling for a sane solution to the ills of racism.

He provided examples of racism from each of England, South Africa and Australia. In England, he cited the case when he was trying to flag down a cab while going home with his wife-to-be – a woman of Portuguese ancestry who is white. The driver had his meter up to indicate his cab was not occupied, but then on seeing Holding quickly switched off the meter light and drove on. An Englishman of West Indian descent who recognised Holding called out to him, “Hey Mikey, you have to put her in front.” To which Holding, characteristically, replied, “I would rather walk.”

In Australia, he cited a case during a tour; the West Indies teams were always put on a single floor in any hotel they stayed in. Holding said he and three of his fast bowling colleagues were coming down in a lift when it stopped at a floor on the way down. “There was a man waiting there,” Holding said. “He looked at us and did not get into the lift. That’s fine, maybe he was intimidated by the presence of four, big black men.

“But then, just before the lift doors closed, he shouted a racial epithet at us.”

And in South Africa, Holding cited a case when he and his Portuguese friend had gone to a hotel to stay. Someone came to him and was getting the details to book him in; meanwhile some other hotel staffer went to his companion and tried to book her in. “To their way of thinking, she could not possibly be with me, because she was white,” was Holding’s comment. “After all, I am black, am I not?”

Rainford-Brent, who took part in a formal video with Holding, also ventilated the problems that black women cricketers faced in England and spoke with tremendous feeling about the lack of people of colour at any level of the sport.

She was in tears occasionally as she spoke, as frankly as Holding, but again with no bitterness, about the travails black people face when they join up to play cricket.

One only hopes that the talk does not end there and something is done about equality. Sky Sports, the broadcaster which ran this remarkable and unusual discussion, has pledged to put 30 million pounds into efforts to narrow the gap. Holding’s view was that if enough big companies got involved then the gap would close that much faster.

If he has hope after what he has endured, then there is no reason why the rest of us should not.

Worse Than FailureError'd: They Said the Math Checks Out!

"So...I guess...they want me to spend more?" Angela A. writes.

 

"The '[object Object]' feature must be extremely rare and expensive considering that none of the phones in the list have it!" Jonathan writes.

 

Joel T. wrote, "I was checking this Covid-19 dashboard to see if it was safe to visit my family and well, I find it really thoughtful of them to cover the Null states, where I grew up."

 

"Thankfully after my appointment, I discovered I am healthier than my doctor's survey system," writes Paul T.

 

Peter C. wrote, "I am so glad that I went to college in {Other_Region}."

 

"I tried out this Excel currency converter template and it scrapes MSN.com/Money for up to date exchange rates," Kevin J. writes, "but, I think someone updated the website without thinking about this template."

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Kevin RuddThe US-China Relationship Needs a New Organising Principle

The US-China Relationship Needs a New Organising Principle
The Hon Kevin Rudd AC
President of the Asia Society Policy Institute, New York.
China-US Think Tanks Online Forum
Peking University
9 July, 2020

It’s good to be in this gathering and to follow Foreign Minister Wang Yi and Dr Kissinger.

Foreign Minister Wang Yi spoke before about the great ship of US-China relations. The great ship of US-China relations currently has a number of holes in the side. It’s not quite the time to man the lifeboats. But I do see people preparing the lifeboats at present. So this conference comes at a critical time.

I’m conscious of the fact that we have many distinguished Americans joining us today. Ambassador Stapleton Roy (former US Ambassador to China in the Bush Administration) and Kurt Campbell (former Assistant Secretary of State for East Asia under the Obama Administration) have occupied distinguished offices on behalf of the United States government in the past. Steve Orlins, President of the US National Committee on US-China Relations, together with other distinguished Americans, are with us as well. These individuals will have valuable perspectives from an American point of view. And as you know, I’m not an American. Nor am I Chinese. So what I will try to provide here are some separate reflections on the way forward.

We’re asked in this conference to address the correct way forward for US-China relations. I always love the Chinese use of the word zhengque, translated as “correct”. I think the Chinese in the title of today’s conference is Zhong-Mei Guanxi Weilaide Zhengque Fangxiang. Well, it depends on your definition of Zhengque or “correct”. Within a Chinese Marxist-Leninist system, zhengque has a particular meaning. Whereas for those of us from the reactionary West, we have a different idea of what zhengque or “correct” happens to be.

So given I’m an Australian and our traditional culture is to defy all rules, let me suggest that we informally amend our title to not finding a “correct” future for the US-China relationship, but to find a sustainable future for the US-China relationship. A relationship which is kechixu. And by sustainable, what do I mean? I mean four things. “Sustainable” within Chinese domestic politics. China is about to have the Beidaihe meetings in August where the US-China relationship will once again be the central topic. “Sustainable” within US domestic politics, both Republican and Democrat. A third criterion, dare I add it, is “sustainable” for those of us who are third countries trying to deal with both of you. And fourthly, to do all the above without the relationship spiralling out of control into crisis, escalation, conflict or even war.

Now, as think-tankers, finding this intersecting circle between these four considerations is very difficult and perhaps impossible. We’re in search of the new “golden mean”, the new zhongyong in US-China relations. But I believe it’s worth the effort. In my allocated ten minutes, I’d therefore like to make just three points. And I base these three points on two critical political assumptions. One is that Xi Jinping is likely to remain in office after the 20th Party Congress in 2022. And my second critical assumption is that Biden is likely to be the President of the United States after January 2021.
My first point is we need to understand clearly why the US-China relationship is now in its most volatile condition in 30 years – and perhaps 50 years. I think we need to agree on three mega-changes that have been at work.

Number one: the changes in the underlying power structure of this relationship, that is the balance of power between the US and China, because of China’s rise. Those of us on the American side here who read the Chinese strategic and political literature know that this is openly discussed in China. That the shili pingheng has changed. And therefore this, in China’s mind, provides it with greater opportunity for policy leverage.

The second reason for the change is that those of us who observe China closely, as I’ve done for my entire professional life, have seen significant changes since the Central Party Work Conference on Foreign Affairs held in December of 2013. And since then China’s international policy has become more activist and more assertive across strategic policy, economic policy and human rights policy in this new age of fenfayouwei, and no longer an age of taoguangyanghui or “hide your strength, bide your time.” We understand that change and we watch it carefully.

And the third factor that’s structurally at work is the Trump phenomenon and what “America First” has meant in practice. We’ve seen a trade war. We’ve seen the National Security Strategy. We’ve seen the National Defense Strategy. We’ve seen technology decoupling, in part. And we’ve seen it on human rights.

But this third structural factor associated with Trump will not change 180 degrees with Biden. It will change in tone. But my observation is that under Biden the substance is likely to be systematic, strategic competition rather than episodic strategic competition (as we have had with Trump). But with the United States under Biden still willing to work with China on certain defined global challenges like pandemics, like climate change, and possibly global financial management.

My second point is there can be no return, therefore, to previous strategic frameworks for managing a “sustainable” or kechixu US-China relationship in the future. Therefore we need to develop a new framework for doing that. I’m not talking about throwing away the three communiques (from the 1970’s, outlining the foundations of the US-China relationship). I’m talking about building something different based on the three communiques. I often read in the Chinese literature that Beijing hopes that the United States will recognise its errors and return to a “correct” understanding of the US-China relationship and resume the past forms of strategic engagement. For the reasons I’ve outlined, already, that will not happen.

Take one most recent example. China’s decision to enact the national security law on Hong Kong is seen in Beijing as a matter of national sovereignty – in 1997, sovereignty transferred from Britain back to China and that’s the end of the argument. But the essential nature of the American democracy does not permit Washington to see it that way. And hence the reaction from other Asian and Western democracies, which is likely to intensify over the coming months. This is just one example of the much broader point of the changing deep structure of the US-China relationship.

As I said before, the US-China relationship must remain anchored in the three communiques, including the most fundamental issue within them, which is Taiwan. But a new organising principle, and a new strategic architecture for a sustainable relationship, is now needed.
This brings me to my final point: some thoughts on what that alternative organising principle might be for the future.

Of course, the first option is that because we are now in an age of strategic competition, by definition there should be no framework – not even any rules of the road. I disagree with that. Because without any rules of the road, without any guide rails, without any guardrails, this would be highly destabilising. And therefore not sustainable.

The second option is to accept the reality of strategic competition but to mutually agree that strategic competition should be managed within defined parameters – and through a defined mechanism of the highest-level continuing strategic dialogue, communication and contact.

We might not yet be in Cold War 2.0 but, as I’ve written recently in Foreign Affairs magazine, I think we’re probably in Cold War 1.5. And if we’re not careful, we could end up in 2.0. But there are still lessons that we can learn from the US-Soviet relationship and the period of detente.

One of these lessons is in the importance of internal, high-level communication between the two sides that there should be an absolutely clear understanding of the red lines which exist, in particular on Taiwan. By red lines, I do not mean public statements from each other’s foreign ministries as an exercise in public diplomacy. I mean a core understanding about absolutely core interests, both in the military sphere but also in terms of future large-scale financial market actions as well. Within this framework, we also need a bilateral mechanism in place to ensure that these red lines are managed. And that’s quite different to what we have at present where we seem to be engaged in a free voyage of discovery, trying to work out where these red lines might lie. That’s quite dangerous. So to conclude: mutually agreed red lines, both in the military and the economic sphere.

But of course, that’s not the totality of the US-China relationship. But they do represent a foundation for the future strategic framework for the relationship. As for the rest of the architecture of the “managed strategic competition” that I speak of, my thoughts have not changed a lot since I wrote a paper on this at Harvard University, at the Kennedy School, about five years ago. I called it ‘constructive realism’ or jianshexing de xianshizhuyi. There I spoke about: number one, red lines where no common interests exist, but where core interests need to be understood; two, identifying difficult areas where cooperation is possible, for example, like you (both the Chinese and American sides) have recently done on the trade war; and three, those areas where bilateral and multilateral cooperation should be normal, even under current circumstances, like on pandemics, like on climate change, and like making the institutions of global governance operate in an efficient and effective way through the multilateral system, including the WHO.

And, of course, if this framework was to be mutually accepted, it would require changes on both sides – in both Washington and Beijing. But it will also require a new level of intellectual honesty with each other at the highest level of the relationship of the type we’ve seen in the records of Mao’s conversations with Zhou Enlai and with Kissinger and Nixon and those who were party to those critical conversations half a century ago. The framework I’ve just outlined is not dissimilar in some respects to what Foreign Minister Wang Yi spoke about just before.

My final thought is this: I’m very conscious it’s easy for an outsider to say these things because I’m an Australian. But I care passionately about both your countries and I really don’t want you to have a fight which ends up in a war. It’s not good for you. It’s not good for us. It’s not good for anybody.

I’m also conscious that in the United States, there are huge domestic challenges leading up to the US presidential elections – Black Lives Matter, COVID-19 – and I’m deeply mindful of Lincoln’s injunction that a house divided against itself cannot stand.

But I’m also mindful of China’s domestic challenges. I’m mindful of what I read in the People’s Daily today about a new zhengdang, a new party rectification campaign. I’m also mindful of traditional Chinese strategic wisdoms, for example, in the days of Liu Bang, Xiang Yu and shimian maifu or the dangers of “having challenges on ten different fronts at the same time” .

So much wisdom is required of both parties. But I do agree with Jiang Zemin and his continued reminder to us all that the US-China relationship remains zhongzhong zhi zhong or “the most important of the most important”. And if we don’t get that right, then we won’t be able to get anything else right.

This is an edited transcript of Mr Rudd’s remarks to the Peking University China-US Think Tanks Online Forum.

The post The US-China Relationship Needs a New Organising Principle appeared first on Kevin Rudd.

Dave HallLogging Step Functions to CloudWatch

Many AWS services log to CloudWatch. Some do it out of the box; others need to be configured to log properly. When Amazon released Step Functions, they didn’t include support for logging to CloudWatch. In February 2020, Amazon announced Step Functions could now log to CloudWatch. Step Functions still supports CloudTrail logs, but CloudWatch logging is more useful for many teams.

Users need to configure Step Functions to log to CloudWatch. This is done on a per State Machine basis. Of course you could click around the console to enable it, but that doesn’t scale. If you use CloudFormation to manage your Step Functions, it is only a few extra lines of configuration to add the logging support.

In my example I will assume you are using YAML for your CloudFormation templates. I’ll save my “if you’re using JSON for CloudFormation you’re doing it wrong” rant for another day. This is a cut down example from one of my services:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: StepFunction with Logging Example.
Parameters:
Resources:
  StepFunctionExecRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: !Sub "states.${AWS::Region}.amazonaws.com"
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: StepFunctionExecRole
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - lambda:InvokeFunction
            - lambda:ListFunctions
            Resource: !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:my-lambdas-namespace-*"
          - Effect: Allow
            Action:
            - logs:CreateLogDelivery
            - logs:GetLogDelivery
            - logs:UpdateLogDelivery
            - logs:DeleteLogDelivery
            - logs:ListLogDeliveries
            - logs:PutResourcePolicy
            - logs:DescribeResourcePolicies
            - logs:DescribeLogGroups
            Resource: "*"
  MyStateMachineLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/stepfunction/my-step-function
      RetentionInDays: 14
  DashboardImportStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      StateMachineName: my-step-function
      StateMachineType: STANDARD
      LoggingConfiguration:
        Destinations:
          - CloudWatchLogsLogGroup:
             LogGroupArn: !GetAtt MyStateMachineLogGroup.Arn
        IncludeExecutionData: True
        Level: ALL
      DefinitionString:
        !Sub |
        {
          ... JSON Step Function definition goes here
        }
      RoleArn: !GetAtt StepFunctionExecRole.Arn

The key pieces in this example are the second statement in the IAM Role with all the logging permissions, the LogGroup defined by MyStateMachineLogGroup and the LoggingConfiguration section of the Step Function definition.

The IAM role permissions are copied from the example policy in the AWS documentation for using CloudWatch Logging with Step Functions. The CloudWatch IAM permissions model is pretty weak, so we need to grant these broad permissions.

The LogGroup definition creates the log group in CloudWatch. You can use whatever value you want for the LogGroupName. I followed the Amazon convention of prefixing everything with /aws/[service-name]/ and then appended the Step Function name. I recommend using the RetentionInDays configuration. It stops old logs sticking around forever. In my case I send all my logs to ELK, so I don’t need to retain them in CloudWatch long term.

Finally, we use the LoggingConfiguration to tell AWS where we want to send our logs. You can only specify a single destination in Destinations. IncludeExecutionData determines whether the inputs and outputs of each function call are logged. You should not enable this if you are passing sensitive information between your steps. The verbosity of logging is controlled by Level. Amazon has a page on Step Function log levels. For dev you probably want to use ALL to help with debugging, but in production you probably only need ERROR level logging.

I removed the Parameters and Output from the template. Use them as you need to.

CryptogramTraffic Analysis of Home Security Cameras

Interesting research on home security cameras with cloud storage. Basically, attackers can learn very basic information about what's going on in front of the camera, and infer when there is someone home.

News article.

Slashdot thread.

Worse Than FailureCodeSOD: Is It the Same?

A common source of bad code is when you have a developer who understands one thing very well, but is forced, either through organizational changes or the tides of history, to adapt to a new tool which they don’t understand. But a possibly more severe problem is modern developers not fully understanding why certain choices may have been made. Today’s code isn’t a WTF, it’s actually very smart.

Eric P was digging through some antique Fortran code, just exploring some retrocomputing history, and found a block which needed to check if two values were the same.

The normal way to do that in Fortran would be to use the .EQ. operator, e.g.:

LSAME = ( (LOUTP(IOUTP)).EQ.(LPHAS1(IOUTP)) )

Now, in this specific case, I happen to know that LOUTP(IOUTP) and LPHAS1(IOUTP) happen to be boolean expressions. I know this, in part, because of how the original developer actually wrote an equality comparison:

      LSAME = ((     LOUTP(IOUTP)).AND.(     LPHAS1(IOUTP)).OR.
               (.NOT.LOUTP(IOUTP)).AND.(.NOT.LPHAS1(IOUTP)) )

Now, Eric sent us two messages. In their first message:

This type of comparison appears in at least 5 different places and the result is then used in other unnecessarily complicated comparisons and assignments.

But that doesn’t tell the whole story. We need to understand the actual underlying purpose of this code. And the purpose of this block of code is to translate symbolic formula expressions to execute on Programmable Array Logic (PAL) devices.

PALs were an early form of programmable ROM, and to describe the logic you wanted them to perform, you had to give them instructions in terms of gates. Essentially, you’d throw a binary representation of the gate arrangements at the chip, and it would then perform computations for you.

So Eric, upon further review, followed up with a fresh message:

The program it is from was co-written by the manager of the project to create the PAL (Programmable Array Logic) device. So, of course, this is exactly, down to the hardware logic gate, how you would implement an equality comparison in a hardware PAL!
It’s all NOTs, ANDs, and ORs!

Programming is about building a model. Most of the time, we want our model to be clear to humans, and we focus on finding ways to describe that model in clear, unsurprising ways. But what’s “clear” and “unsurprising” can vary depending on what specifically we’re trying to model. Here, we’re modeling low-level hardware, really low-level, and what looks weird at first is actually pretty darn smart.

Eric also included a link to the code he was reading through, for the PAL24 Assembler.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Rondam RamblingsGame over for Hong Kong

The Washington Post reports: Early Wednesday, under a heavy police presence and before any public announcement about the matter, officials inaugurated the Office for Safeguarding National Security of the Central People’s Government in the Hong Kong Special Administrative Region at a ceremony that took place behind water-filled barricades. They played the Chinese national anthem and raised the

Worse Than FailureCodeSOD: A Private Matter

Tim Cooper was digging through the code for a trip-planning application. This particular application can plan a trip across multiple modes of transportation, from public transit to private modes, like rentable scooters or bike-shares.

This need to discuss private modes of transportation can lead to some… interesting code.

// for private: better = same
TIntSet myPrivates = getPrivateTransportSignatures(true);
TIntSet othersPrivates = other.getPrivateTransportSignatures(true);
if (myPrivates.size() != othersPrivates.size()
        || ! myPrivates.containsAll(othersPrivates)
        || ! othersPrivates.containsAll(myPrivates)) {
    return false;
}

This block of code seems to worry a lot about the details of othersPrivates, which frankly is a bad look. Mind your own business, code. Mind your own business.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

CryptogramIoT Security Principles

The BSA -- also known as the Software Alliance, formerly the Business Software Alliance (which explains the acronym) -- is an industry lobbying group. They just published "Policy Principles for Building a Secure and Trustworthy Internet of Things."

They call for:

  • Distinguishing between consumer and industrial IoT.
  • Offering incentives for integrating security.
  • Harmonizing national and international policies.
  • Establishing regularly updated baseline security requirements.

As with pretty much everything else, you can assume that if an industry lobbying group is in favor of it, then it doesn't go far enough.

And if you need more security and privacy principles for the IoT, here's a list of over twenty.

Worse Than FailureCodeSOD: Your Personal Truth

There are still some environments where C may not have easy access to a stdbool header file. That's easy to fix, of course. The basic pattern is to typedef an integer type as a boolean type, and then define some symbols for true and false. It's a pretty standard pattern, three lines of code, and unless you insist that FILE_NOT_FOUND is a boolean value, it's pretty hard to mess up.

Julien H was compiling some third-party C code, specifically in Visual Studio 2010, and as it turns out, VS2010 doesn't support C99, and thus doesn't have a stdbool. But, as stated, it's an easy pattern to implement, so the third party library went and implemented it:

#ifndef _STDBOOL_H_VS2010
#define _STDBOOL_H_VS2010
typedef int bool;
static bool true = 1;
static bool false = 0;
#endif

We've asked many times, what is truth? In this case, we admit a very post-modern reality: what is "true" is not constant and unchanging, it cannot merely be enumerated, it must be variable. Truth can change, because here we've defined true and false as variables. And more than that, each person must identify their own truth, and by making these variables static, what we guarantee is that every .c file in our application can have its own value for truth. The static keyword, applied to a global variable, guarantees that each .c file gets its own scope.

I can only assume this header was developed by Jacques Derrida.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Cory DoctorowFull Employment

My latest Locus column is “Full Employment,” in which I forswear “Fully Automated Luxury Communism” as totally incompatible with the climate emergency, which will consume 100%+ of all human labor for centuries to come.

https://locusmag.com/2020/07/cory-doctorow-full-employment/

This fact is true irrespective of any breakthroughs in AI OR geoengineering. Technological unemployment is vastly oversold and overstated (for example, that whole thing about truck drivers is bullshit).

https://journals.sagepub.com/doi/10.1177/0019793919858079

But even if we do manage to automate away all of jobs, the climate emergency demands unimaginably labor intensive tasks for hundreds of years – jobs like relocating every coastal city inland, or caring for hundreds of millions of refugees.

Add to those: averting the extinctions of thousands of species, managing wave upon wave of zoonotic and insect-borne plagues, dealing with wildfires and tornados, etc.

And geoengineering won’t solve this: we’ve sunk a lot of heat into the oceans. It’s gonna warm them up. That’s gonna change the climate. It’s not gonna be good. Heading this off doesn’t just involve repealing thermodynamics – it also requires a time-machine.

But none of this stuff is insurmountable – it’s just hard. We CAN do this stuff. If you were wringing your hands about unemployed truckers, good news! They’ve all got jobs moving thousands of cities inland!

It’s just (just!) a matter of reorienting our economy around preserving our planet and our species.

And yeah, that’s hard, too – but if “the economy” can’t be oriented to preserving our species, we need a different economy.

Period.

CryptogramThiefQuest Ransomware for the Mac

There's a new ransomware for the Mac called ThiefQuest or EvilQuest. It's hard to get infected:

For your Mac to become infected, you would need to torrent a compromised installer and then dismiss a series of warnings from Apple in order to run it. It's a good reminder to get your software from trustworthy sources, like developers whose code is "signed" by Apple to prove its legitimacy, or from Apple's App Store itself. But if you're someone who already torrents programs and is used to ignoring Apple's flags, ThiefQuest illustrates the risks of that approach.

But it's nasty:

In addition to ransomware, ThiefQuest has a whole other set of spyware capabilities that allow it to exfiltrate files from an infected computer, search the system for passwords and cryptocurrency wallet data, and run a robust keylogger to grab passwords, credit card numbers, or other financial information as a user types it in. The spyware component also lurks persistently as a backdoor on infected devices, meaning it sticks around even after a computer reboots, and could be used as a launchpad for additional, or "second stage," attacks. Given that ransomware is so rare on Macs to begin with, this one-two punch is especially noteworthy.

CryptogramiPhone Apps Stealing Clipboard Data

iOS apps are repeatedly reading clipboard data, which can include all sorts of sensitive information.

While Haj Bakry and Mysk published their research in March, the invasive apps made headlines again this week with the developer beta release of iOS 14. A novel feature Apple added provides a banner warning every time an app reads clipboard contents. As large numbers of people began testing the beta release, they quickly came to appreciate just how many apps engage in the practice and just how often they do it.

This YouTube video, which has racked up more than 87,000 views since it was posted on Tuesday, shows a small sample of the apps triggering the new warning.

EDITED TO ADD (7/6): LinkedIn and Reddit are doing this.

Worse Than FailureCodeSOD: Classic WTF: Dimensioning the Dimension

It was a holiday weekend in the US, so we're taking a little break. Yes, I know that most people took Friday off, but as this article demonstrates, dates remain hard. Original -- Remy

It's not too uncommon to see a Java programmer write a method to get the name of a month based on the month number. Sure, month name formatting is built in via SimpleDateFormat, but the documentation can often be hard to read. And since there's really no other place to find the answer, it's excusable that a programmer will just write a quick method to do this.

I have to say though, Robert Cooper's colleague came up with a very interesting way of doing this: adding an[other] index to an array ...

public class DateHelper
{
  private static final String[][] months = 
    { 
      { "0", "January" }, 
      { "1", "February" }, 
      { "2", "March" }, 
      { "3", "April" }, 
      { "4", "May" }, 
      { "5", "June" }, 
      { "6", "July" }, 
      { "7", "August" }, 
      { "8", "September" }, 
      { "9", "October" }, 
      { "10", "November" }, 
      { "11", "December" }
    };

  public static String getMonthDescription(int month)
  {
    for (int i = 0; i < months.length; i++)
    {
      if (Integer.parseInt(months[i][0]) == month)
      {
          return months[i][1];
      }
    }
    return null;
  }
}
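For comparison, here is a minimal sketch of the built-in approach the article alludes to, using DateFormatSymbols (the class that supplies SimpleDateFormat's month names). The class name MonthName is mine, and the zero-based month numbering matches the code above:

```java
import java.text.DateFormatSymbols;
import java.util.Locale;

public class MonthName {
    // DateFormatSymbols already knows the month names for a locale,
    // so no hand-rolled lookup table (or string-parsing loop) is needed.
    // Months are zero-based, matching the original code above.
    public static String getMonthDescription(int month) {
        String[] names = new DateFormatSymbols(Locale.ENGLISH).getMonths();
        return (month >= 0 && month < 12) ? names[month] : null;
    }

    public static void main(String[] args) {
        System.out.println(getMonthDescription(0));  // January
        System.out.println(getMonthDescription(11)); // December
    }
}
```

In modern Java, java.time's Month.of(month + 1).getDisplayName(TextStyle.FULL, Locale.ENGLISH) does the same job without the legacy classes.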

If you enjoyed Friday's post (A Pop-up Potpourii), make sure to check out the replies. There were some great error messages posted.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 09)

Here’s part nine of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

ME: Debian S390X Emulation

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X – the latest 64-bit version of the IBM mainframe. Here’s how to do it. I tested on a host running Debian/Unstable, but Buster should work in the same way.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp

Next, visit the Debian Netinst page [1] to download the S390X net install ISO, then loopback mount it somewhere convenient like /mnt/tmp2.

The package qemu-system-misc has the program for emulating an S390X system (among many others). The qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment); you need this to run debootstrap. The following commands should be most of what you need.

# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap

# List the support for different binary formats
update-binfmts --display

# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1

# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for a minimal install we do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade

# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd

# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x

# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Some of the above should be considered more as pseudo-code in shell script than an exact way of doing things. While you can copy and paste all the above into a command line and have a reasonable chance of having it work, I think it would be better to look at each command and decide whether it’s right for you and whether you need to alter it slightly for your system.

To run qemu as non-root you need to have a helper program with extra capabilities to setup bridged networking. I’ve included that in the explanation because I think it’s important to have all security options enabled.

The “-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0” part is to give entropy to the VM from the host, otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the “ccw” is the difference).

I’m not sure if “noresume” on the kernel command line is required, but it doesn’t do any harm. The “net.ifnames=0” stops systemd from renaming Ethernet devices. For the virtual networking the “ccw” again is a difference from other architectures.

Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses-based text display; you can then login as root and should be able to run “dhclient eth0” and other similar commands to set up networking and allow ssh logins.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume root=/dev/vda ro" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Here is a slightly more complete QEMU command. It has 2 block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the “lockdown=confidentiality” kernel security option even though it’s not supported in 4.19 kernels, it doesn’t do any harm and when I upgrade systems to newer kernels I won’t have to remember to add it.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -drive format=raw,file=/vmswap/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Try It Out

I’ve got an S390X system online for a while: “ssh root@s390x.coker.com.au” with password “SELINUX” to try it out.

PPC64

I’ve tried running a PPC64 virtual machine. I did the same things to set it up and then tried launching it with the following result:

qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"

Above is the minimal qemu command that I’m using. Below is the result, it stops after the “4.” from “4.19.0-9”. Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimal needed to demonstrate the problem.

  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
Linux ppc64le
#1 SMP Debian 4.

The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package.

Any suggestions on what I should try next would be appreciated.


Krebs on Security: E-Verify’s “SSN Lock” is Nothing of the Sort

One of the most-read advice columns on this site is a 2018 piece called “Plant Your Flag, Mark Your Territory,” which tried to impress upon readers the importance of creating accounts at websites like those at the Social Security Administration, the IRS and others before crooks do it for you. A key concept here is that these services only allow one account per Social Security number — which for better or worse is the de facto national identifier in the United States. But KrebsOnSecurity recently discovered that this is not the case with all federal government sites built to help you manage your identity online.

A reader who was recently the victim of unemployment insurance fraud said he was told he should create an account at the Department of Homeland Security‘s myE-Verify website, and place a lock on his Social Security number (SSN) to minimize the chances that ID thieves might abuse his identity for employment fraud in the future.

DHS’s myE-Verify homepage.

According to the website, roughly 600,000 employers at over 1.9 million hiring sites use E-Verify to confirm the employment eligibility of new employees. E-Verify’s consumer-facing portal myE-Verify lets users track and manage employment inquiries made through the E-Verify system. It also features a “Self Lock” designed to prevent the misuse of one’s SSN in E-Verify.

Enabling this lock is supposed to mean that for the next year thereafter, if an unauthorized individual attempts to fraudulently use an SSN for employment authorization, he or she cannot use the SSN in E-Verify, even if the SSN is that of an employment-authorized individual. But in practice, this service may actually do little to deter ID thieves from impersonating you to a potential employer.

At the request of the reader who reached out (and in the interest of following my own advice to plant one’s flag), KrebsOnSecurity decided to sign up for a myE-Verify account. After verifying my email address, I was asked to pick a strong password and select a form of multi-factor authentication (MFA). The most secure MFA option offered (a one-time code generated by an app like Google Authenticator or Authy) was already pre-selected, so I chose that.

The site requested my name, address, SSN, date of birth and phone number. I was then asked to select five questions and answers that might be asked if I were to try to reset my password, such as “In what city/town did you meet your spouse,” and “What is the name of the company of your first paid job.” I chose long, gibberish answers that had nothing to do with the questions (yes, these password questions are next to useless for security and frequently are the cause of account takeovers, but we’ll get to that in a minute).

Password reset questions selected, the site proceeded to ask four, multiple-guess “knowledge-based authentication” questions to verify my identity. The U.S. Federal Trade Commission‘s primer page on preventing job-related ID theft says people who have placed a security freeze on their credit files with the major credit bureaus will need to lift or thaw the freeze before being able to answer these questions successfully at myE-Verify. However, I did not find that to be the case, even though my credit file has been frozen with the major bureaus for years.

After successfully answering the KBA questions (the answer to each was “none of the above,” by the way), the site declared I’d successfully created my account! I could then see that I had the option to place a “Self Lock” on my SSN within the E-Verify system.

Doing so required me to pick three more challenge questions and answers. The site didn’t explain why it was asking me to do this, but I assumed it would prompt me for the answers in the event that I later chose to unlock my SSN within E-Verify.

After selecting and answering those questions and clicking the “Lock my SSN” button, the site generated an error message saying something went wrong and it couldn’t proceed.

Alas, logging out and logging back in again showed that the site did in fact proceed and that my SSN was locked. Joy.

But I still had to know one thing: Could someone else come along pretending to be me and create another account using my SSN, date of birth and address but under a different email address? Using a different browser and Internet address, I proceeded to find out.

Imagine my surprise when I was able to create a separate account as me with just a different email address (once again, the correct answer to all of the KBA questions was "none of the above"). Upon logging in, I noticed my SSN was indeed locked within E-Verify. So I chose to unlock it.

Did the system ask any of the challenge questions it had me create previously? Nope. It just reported that my SSN was now unlocked. Logging out and logging back in to the original account I created (again under a different IP and browser) confirmed that my SSN was unlocked.

ANALYSIS

Obviously, if the E-Verify system allows multiple accounts to be created using the same name, address, phone number, SSN and date of birth, this is less than ideal and somewhat defeats the purpose of creating an account to protect one’s identity from misuse.

Lest you think your SSN and DOB are somehow private information, you should know this static data about U.S. residents has been exposed many times over in countless data breaches, and in any case these digits are available for sale on most Americans via Dark Web sites for roughly the bitcoin equivalent of a fancy caffeinated drink at Starbucks.

Being unable to proceed through knowledge-based authentication questions without first unfreezing one’s credit file with one or all of the big three credit bureaus (Equifax, Experian and TransUnion) can actually be a plus for those of us who are paranoid about identity theft. I couldn’t find any mention on the E-Verify site of which company or service it uses to ask these questions, but the fact that the site doesn’t seem to care whether one has a freeze in place is troubling.

And when the correct answer to all of the KBA questions that do get asked is invariably "none of the above," that somewhat lessens the value of asking them in the first place. Maybe that was just the luck of the draw in my case, but it is troubling nonetheless. Either way, these KBA questions are notoriously weak security because the answers to them often are pulled from records that are public anyway, and can sometimes be deduced by studying the information available on a target’s social media profiles.

Speaking of silly questions, relying on “secret questions” or “challenge questions” as an alternative method of resetting one’s password is severely outdated and insecure. A 2015 study by Google titled “Secrets, Lies and Account Recovery” (PDF) found that secret questions generally offer a security level that is far lower than just user-chosen passwords. Also, the idea that an account protected by multi-factor authentication could be undermined by successfully guessing the answer(s) to one or more secret questions (answered truthfully and perhaps located by thieves through mining one’s social media accounts) is bothersome.

Finally, the advice given to the reader whose inquiry originally prompted me to sign up at myE-Verify doesn’t seem to have anything to do with preventing ID thieves from fraudulently claiming unemployment insurance benefits in one’s name at the state level. KrebsOnSecurity followed up with four different readers who left comments on this site about being victims of unemployment fraud recently, and none of them saw any inquiries about this in their myE-Verify accounts after creating them. Not that they should have seen signs of this activity in the E-Verify system; I just wanted to emphasize that one seems to have little to do with the other.

Cryptogram: EncroChat Hacked by Police

French police hacked EncroChat secure phones, which are widely used by criminals:

Encrochat's phones are essentially modified Android devices, with some models using the "BQ Aquaris X2," an Android handset released in 2018 by a Spanish electronics company, according to the leaked documents. Encrochat took the base unit, installed its own encrypted messaging programs which route messages through the firm's own servers, and even physically removed the GPS, camera, and microphone functionality from the phone. Encrochat's phones also had a feature that would quickly wipe the device if the user entered a PIN, and ran two operating systems side-by-side. If a user wanted the device to appear innocuous, they booted into normal Android. If they wanted to return to their sensitive chats, they switched over to the Encrochat system. The company sold the phones on a subscription based model, costing thousands of dollars a year per device.

This allowed them and others to investigate and arrest many:

Unbeknownst to Mark, or the tens of thousands of other alleged Encrochat users, their messages weren't really secure. French authorities had penetrated the Encrochat network, leveraged that access to install a technical tool in what appears to be a mass hacking operation, and had been quietly reading the users' communications for months. Investigators then shared those messages with agencies around Europe.

Only now is the astonishing scale of the operation coming into focus: It represents one of the largest law enforcement infiltrations of a communications network predominantly used by criminals ever, with Encrochat users spreading beyond Europe to the Middle East and elsewhere. French, Dutch, and other European agencies monitored and investigated "more than a hundred million encrypted messages" sent between Encrochat users in real time, leading to arrests in the UK, Norway, Sweden, France, and the Netherlands, a team of international law enforcement agencies announced Thursday.

EncroChat learned about the hack, but didn't know who was behind it.

Going into full-on emergency mode, Encrochat sent a message to its users informing them of the ongoing attack. The company also informed its SIM provider, Dutch telecommunications firm KPN, which then blocked connections to the malicious servers, the associate claimed. Encrochat cut its own SIM service; it had an update scheduled to push to the phones, but it couldn't guarantee whether that update itself wouldn't be carrying malware too. That, and maybe KPN was working with the authorities, Encrochat's statement suggested (KPN declined to comment). Shortly after Encrochat restored SIM service, KPN removed the firewall, allowing the hackers' servers to communicate with the phones once again. Encrochat was trapped.

Encrochat decided to shut itself down entirely.

Lots of details about the hack in the article. Well worth reading in full.

The UK National Crime Agency called it Operation Venetic: "46 arrests, and £54m criminal cash, 77 firearms and over two tonnes of drugs seized so far."

Many more news articles. EncroChat website. Slashdot thread. Hacker News threads.


Cryptogram: Friday Squid Blogging: Strawberry Squid

Pretty.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Kevin Rudd: Defence Strategy Is A National Scandal

The following statement was issued to the Sydney Morning Herald on 2 June 2020:

There are major gaps with the Morrison Government’s 2020 Strategic Policy Update and associated Force Structure Review. It is long on rhetoric and short on delivery.

My government’s 2009 Defence White Paper was the first to enunciate major changes in Australia’s strategic circumstances because of China’s rise. Malcolm Turnbull dismissed the white paper at the time as a relic of the Cold War. That white paper prescribed the biggest single expansion of the Royal Australian Navy since the war.

It also adjusted Australian strategic focus away from the Middle East where the previous conservative government had become hopelessly bogged down – both in its military deployments and in various irrational force structure decisions. The 2009 White Paper announced a doubling of the Australian submarine fleet. Eleven years later, that project has been comprehensively botched. The building of the first vessel has not even begun. In our current strategic circumstances, this is a national security scandal.

Second, Morrison’s 2020 Update talks about the so-called “Pacific Step-Up.” This is the second scandal. Only now are they attempting to restore Australia’s real aid level to the Pacific to where it was in 2013. This has opened the door far and wide to China’s aid presence because Australia was seen as an unreliable aid partner, which did not care about our friends in the region and which was dismissive of the island states’ existential concern about climate change.

Third, Morrison pretends that his government somehow invented Australia’s whole of government cyber capabilities. That is just wrong. Our white paper established the Cyber Security Operations Centre which was purpose-built for national responses to cyber incidents across government and critical private sector systems and infrastructure. The 2009 Cyber Security Strategy also established CERT Australia to provide the Australian government with all-source cyber situational awareness and an enhanced ability to facilitate operational responses to cybersecurity events of national importance. The current government has been slow in expanding these capabilities over recent years to keep pace with this rapidly expanding threat.

The post Defence Strategy Is A National Scandal appeared first on Kevin Rudd.

Sam Varghese: David Warner must pay for his sins. As everyone else does

What does one make of the argument that David Warner, who was behind the ball tampering scandal in South Africa in 2018, was guilty of a lesser mistake than Ben Stokes, who indulged in public fights? And the argument that since Stokes has been made England captain for the series against the West Indies, Warner, who committed what is called a lesser sin, should also be in line for the role of Australian skipper?

The suggestion has been made by Peter Lalor, a senior cricket writer at The Australian, that Warner has paid a bigger price for past mistakes than Stokes. Does that argument really hold water?

Stokes was involved in a fracas outside a nightclub in Bristol a few years back and escaped tragedy and legal issues. He got into a brawl and was lucky to get off without a prison term.

But that had no connection to the game of cricket. And when we talk of someone bringing the game into disrepute, such incidents are not in the frame.

Had Stokes indulged in such immature behaviour on the field of play or insulted spectators who were at a game, then we would have to criticise the England board for handing him the mantle of leadership.

Warner brought the game into disrepute. He hatched a plot to use sandpaper in order to get the ball to swing, then shamefully recruited the youngest player in the squad, rookie Cameron Bancroft, to carry out his plan, and now expects to be forgiven and given a chance to lead the national team.

Really? Lalor argues that the ball tampering did not hurt anyone and the umpires did not even have to change the ball. Such is the level of morality we have come to, where arguments that have little ballast are advanced because nationalistic sentiments come into the picture.

It is troubling that as senior a writer as Lalor would seek to advance such an argument, when someone has clearly violated the spirit of the game. Doubtless there will be cynics who poke fun at any suggestion that cricket is still a gentleman’s game, but without those myths that surround this pursuit, would it still have its appeal?

The short answer to that is a resounding no.

Lalor argues that Stokes’ fate would have been different had he been an Australian. I doubt that very much because, given the licence extended to Australian sports stars to behave badly, his indulgences would have been overlooked. The word used to excuse him would have been “larrikinism”.

But Warner cheated. And the Australian public, no matter what their shortcomings, do not like cheats.

Unfortunately, at a pivotal moment during the cricket team’s South African tour, this senior member could only think of cheating to win. That is sad, unfortunate, and even tragic. It speaks of a big moral chasm somewhere.

But once one has done the crime, one must do the time. Arguing as Lalor does, that both Steve Smith, the captain at the time, and Bancroft got away with no leadership bans, does not carry any weight.

The man who planned the crime was nailed with the heaviest punishment. And it is doubtful whether anyone who has a sense of justice would argue against that.

Worse Than Failure: Error'd: Take a Risk on NaN

"Sure, I know how long the free Standard Shipping will take, but maybe, just maybe, if I choose Economy, my package will have already arrived! Or never," Philip G. writes.

 

"To be honest, I would love to hear how a course on guitar will help me become certified on AWS!" Kevin wrote.

 

Gergő writes, "Hooray! I'm going to be so productive for the next 0 days!"

 

"I guess that inbox count is what I get for using Yahoo mail?" writes Becky R.

 

Marc W. wrote, "Try all you want, PDF Creator, but you'll never sweet talk me with your 'great' offer!"

 

Mark W. wrote, "My neighborhood has a personality split, but at least they're both Pleasant."

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Cryptogram: The Security Value of Inefficiency

For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that's a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that's all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.

But inefficiency is essential security, as the COVID-19 pandemic is teaching us. All of the overcapacity that has been squeezed out of our healthcare system; we now wish we had it. All of the redundancy in our food production that has been consolidated away; we want that, too. We need our old, local supply chains -- not the single global ones that are so fragile in this crisis. And we want our local restaurants and businesses to survive, not just the national chains.

We have lost much inefficiency to the market in the past few decades. Investors have become very good at noticing any fat in every system and swooping down to monetize those redundant assets. The winner-take-all mentality that has permeated so many industries squeezes any inefficiencies out of the system.

This drive for efficiency leads to brittle systems that function properly when everything is normal but break under stress. And when they break, everyone suffers. The less fortunate suffer and die. The more fortunate are merely hurt, and perhaps lose their freedoms or their future. But even the extremely fortunate suffer -- maybe not in the short term, but in the long term from the constriction of the rest of society.

Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They're caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.

I don't simply mean that we need to make our food production, or healthcare system, or supply chains sloppy and wasteful. We need a certain kind of inefficiency, and it depends on the system in question. Sometimes we need redundancy. Sometimes we need diversity. Sometimes we need overcapacity.

The market isn't going to supply any of these things, least of all in a strategic capacity that will result in resilience. What's necessary to make any of this work is regulation.

First, we need to enforce antitrust laws. Our meat supply chain is brittle because there are limited numbers of massive meatpacking plants -- now disease factories -- rather than lots of smaller slaughterhouses. Our retail supply chain is brittle because a few national companies and websites dominate. We need multiple companies offering alternatives to a single product or service. We need more competition, more niche players. We need more local companies, more domestic corporate players, and diversity in our international suppliers. Competition provides all of that, while monopolies suck that out of the system.

The second thing we need is specific regulations that require certain inefficiencies. This isn't anything new. Every safety system we have is, to some extent, an inefficiency. This is true for fire escapes on buildings, lifeboats on cruise ships, and multiple ways to deploy the landing gear on aircraft. Not having any of those things would make the underlying systems more efficient, but also less safe. It's also true for the internet itself, originally designed with extensive redundancy as a Cold War security measure.

With those two things in place, the market can work its magic to provide for these strategic inefficiencies as cheaply and as effectively as possible. As long as there are competitors who are vying with each other, and there aren't competitors who can reduce the inefficiencies and undercut the competition, these inefficiencies just become part of the price of whatever we're buying.

The government is the entity that steps in and enforces a level playing field instead of a race to the bottom. Smart regulation addresses the long-term need for security, and ensures it's not continuously sacrificed to short-term considerations.

We have largely been content to ignore the long term and let Wall Street run our economy as efficiently as it can. That's no longer sustainable. We need inefficiency -- the right kind in the right way -- to ensure our security. No, it's not free. But it's worth the cost.

This essay previously appeared in Quartz.

ME: Desklab Portable USB-C Monitor

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adapter and for my first test ran it from my laptop, where it was seen as a 1920*1080 DisplayPort monitor. The adaptor is specified as supporting 4K, so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers. It can be a real pain getting a monitor and power cable to a rack mounted server, and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version. Otherwise the stated current requirements don’t come close to matching what I’ve measured.
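Sanity-checking the watts-not-amps theory with the assumption of a plain 5V USB supply (no USB-PD voltage negotiation), since amps = watts / volts:

```shell
# Assumption: the monitor is fed 5V. Convert the manual's claimed
# wattage figures (if that's what they really are) into current draw.
awk 'BEGIN { printf "2W at 5V = %.1fA, 4W at 5V = %.1fA\n", 2/5, 4/5 }'
```

The 0.8A figure for 4W sits inside the measured 0.6 to 1.0A range, which supports the watts-not-amps reading.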

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast, as expected). When I have a mains powered USB charger (for a laptop and rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there were a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones: whether a phone will drive an external display depends on its configuration, and some phones may support an external display but not the touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro, so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (I’m not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of mirroring the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape layout monitor, which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface, so I had to find the icons for the programs I wanted to run in an unsorted list with no usable search (the search interface of the app list brings up the keyboard, which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout, which would make the keyboard take something like 25% of the screen, but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The Hacker’s Keyboard app might work well with this as it has cursor control keys. The GUI has an option for full screen mode for an app, which is really annoying to get out of (you have to use a drop down from the top of the screen); full screen doesn’t make sense for a display this large. Overall the GUI is a bit clunky: imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it’s easy to resize windows to fit several on the desktop. Resizing windows in the Huawei GUI doesn’t seem easy (although I might be missing something), and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering 2 models for sale: 4K resolution for $399US and FullHD for $299US. I got the 4K version, which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore can’t be used with a phone unless you enable the mode that uses the phone screen as a touch pad and puts a mouse cursor on screen. I don’t know if all Android devices support that; it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it, the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future; I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even in the laptop format, and a thin portable screen is inconvenient in many other ways.

Netflix doesn’t display video on the Desklab screen; I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. The monitor is really good for watching video as it has the speakers in good locations for stereo sound, so it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

Worse Than Failure: ABCD

As is fairly typical in our industry, Sebastian found himself working as a sub-contractor to a sub-contractor to a contractor to a big company. In this case, it was IniDrug, a pharmaceutical company.

Sebastian was building software that would be used at various steps in the process of manufacturing, which meant he needed to spend a fair bit of time in clean rooms, and on air-gapped networks, to prevent trade secrets from leaking out.

Like a lot of large companies, they had very formal document standards. Every document going out needed to have the company logo on it, somewhere. This meant all of the regular employees had the IniDrug logo in their email signatures, e.g.:

Bill Lumbergh
Senior Project Lead
  _____       _ _____                   
 |_   _|     (_|  __ \                  
   | |  _ __  _| |  | |_ __ _   _  __ _ 
   | | | '_ \| | |  | | '__| | | |/ _` |
  _| |_| | | | | |__| | |  | |_| | (_| |
 |_____|_| |_|_|_____/|_|   \__,_|\__, |
                                   __/ |
                                  |___/ 

At least, they did until Sebastian got an out of hours, emergency call. While they absolutely were not set up for remote work, Sebastian could get webmail access. And in the webmail client, he saw:

Bill Lumbergh
Senior Project Lead
ABCD

At first, Sebastian assumed Bill had screwed up his sigline. Or maybe the attachment broke? But as Sebastian hopped on an email chain, he noticed a lot of ABCDs. Then someone sent out a Word doc (because why wouldn’t you catalog your emergency response in a Word document?), and in the space where it usually had the IniDrug logo, it instead had “ABCD”.

The crisis resolved itself without any actual effort from Sebastian or his fellow contractors, but they had to reply to a few emails just to show that they were “pigs and not chickens”: they were committed to quality software. The next day, Sebastian mentioned the ABCD weirdness.

“I saw that too. I wonder what the deal was?” his co-worker Joanna said.

They pulled up the same document on his work computer, and the logo displayed correctly. He clicked on it, and saw the insertion point blinking back at him. Then he glanced at the formatting toolbar and saw “IniDrug Logo” as the active font.

Puzzled, he selected the logo and changed the font. “ABCD” appeared.

IniDrug had a custom font made, hacked so that if you typed ABCD the resulting output would look like the IniDrug logo. That was great, if you were using a computer with the font installed, or if you remembered to make sure your word processor was embedding all your weird custom fonts.

Which also meant a bunch of outside folks were interacting with IniDrug employees, wondering why on Earth they all had “ABCD” in their siglines. Sebastian and Joanna got a big laugh about it, and shared the joke with their fellow contractors. Helping the new contractors discover this became a rite of passage. When contractors left for other contracts, they’d tell their peers, “It was great working at ABCD, but it’s time that I moved on.”

There were a lot of contractors, all chuckling about this, and one day in a shared break room, a bunch of T-Shirts appeared: plain white shirts with “ABCD” written on them in Arial.

That, as it turned out, was a bridge too far, and it got the attention of someone who was a regular IniDrug employee.

To the Contracting Team:
In the interests of maintaining a professional environment, we will be updating the company dress code. Shirts decorated with the text “ABCD” are prohibited, and should not be worn to work. If you do so, you will be asked to change or conceal the offending content.

Bill Lumbergh
Senior Project Lead
ABCD


ME: Isolating PHP Web Sites

If you have multiple PHP web sites on a server in the default configuration they will all be able to read each other’s files. If those sites have stored data or database passwords in configuration files then there are significant problems if they aren’t all trusted. Even if the sites are all trusted (i.e. the same person configures them all), if there is a security problem in one site it’s ideal to prevent that being used to immediately attack all sites.

mpm_itk

The first thing I tried was mpm_itk [1]. This is a version of the traditional “prefork” module for Apache that has one process for each HTTP connection. When it’s installed you just put the directive “AssignUserID USER GROUP” in your VirtualHost section and that virtual host runs as the user:group in question. It will work with any Apache module that works with mpm_prefork. In my experiment with mpm_itk I first tried running with a different UID for each site, but that conflicted with the pagespeed module [2]. The pagespeed module optimises HTML and CSS files to improve performance and it has a directory tree where it stores cached versions of some of the files. It doesn’t like working with copies of itself under different UIDs writing to that tree. This isn’t a real problem; setting up the different PHP files with database passwords to be read by the desired group is easy enough. So I just ran each site with a different GID but used the same UID for all of them.
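As a sketch of that per-site setup (the host, user, and group names here are purely illustrative):

```apache
# mpm_itk: run everything for this virtual host under a per-site identity.
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
    # One UID shared across sites, one GID per site (see above)
    AssignUserID sharedwww example.com
</VirtualHost>
```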

The first problem with mpm_itk is that the mpm_prefork code it’s based on is the slowest MPM available and is also incompatible with HTTP/2. A minor issue of mpm_itk is that it makes Apache take ages to stop or restart; I don’t know why and can’t be certain it’s not a configuration error on my part. As an aside, here is a site for testing your server’s support for HTTP/2 [3]. To enable HTTP/2 you have to be running mpm_event and enable the “http2” module. Then for every virtual host that is to support it (generally all https virtual hosts) put the line “Protocols h2 h2c http/1.1” in the virtual host configuration.
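A sketch of that HTTP/2 setup on Debian (the host name is illustrative):

```apache
# Debian: HTTP/2 needs mpm_event plus the http2 module:
#   a2dismod mpm_prefork && a2enmod mpm_event http2
<VirtualHost *:443>
    ServerName example.com
    # Offer HTTP/2 over TLS (h2) and cleartext (h2c), fall back to HTTP/1.1
    Protocols h2 h2c http/1.1
</VirtualHost>
```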

A good feature of mpm_itk is that it has everything for the site running under the same UID, all Apache modules and Apache itself. So there’s no issue of one thing getting access to a file and another not getting access.

After a trial I decided not to keep using mpm_itk because I want HTTP/2 support.

php-fpm Pools

The Apache PHP module depends on mpm_prefork, so it also has the issues of not working with HTTP/2 and of making the web server slow. The solution is php-fpm, a separate server for running PHP code that uses the FastCGI protocol to talk to Apache. Here’s a link to the upstream documentation for php-fpm [4]. In Debian this is in the php7.3-fpm package.

In Debian the directory /etc/php/7.3/fpm/pool.d has the configuration for “pools”. Below is an example of a configuration file for a pool:

# cat /etc/php/7.3/fpm/pool.d/example.com.conf
[example.com]
user = example.com
group = example.com
listen = /run/php/php7.3-example.com.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Here is the upstream documentation for fpm configuration [5].

Then for the Apache configuration for the site in question you could have something like the following:

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-example.com.sock|fcgi://localhost/usr/share/wordpress/"

The “|fcgi://localhost” part is just part of the syntax for specifying a Unix domain socket. From the Apache Wiki it appears that the method for configuring TCP connections is more obvious [6]. I chose Unix domain sockets because they allow putting the domain name in the socket address. Matching web server domains to port numbers is likely to be error prone, while matching based on domain names is easier to check and also easier to use in Apache configuration macros.
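Putting the pieces together, a hypothetical complete virtual host using the example.com pool might look like the following (the DocumentRoot and the list of required modules are assumptions, not from the original configuration):

```apache
# Requires: a2enmod proxy proxy_fcgi http2
<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /usr/share/wordpress
    Protocols h2 http/1.1
    # Hand every .php request to the per-site php-fpm pool via its socket
    ProxyPassMatch "^/(.*\.php(/.*)?)$" \
        "unix:/run/php/php7.3-example.com.sock|fcgi://localhost/usr/share/wordpress/"
</VirtualHost>
```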

There was some additional hassle with getting Apache to read the files created by PHP processes (the options include running PHP scripts with the www-data group, having SETGID directories for storing files, and having world-readable files). But this got things basically working.
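Of those options, the SETGID-directory approach can be sketched as follows. This is illustrative only: a temp directory and the caller's own primary group stand in for the real storage directory and the www-data group.

```shell
# Create a shared directory whose new files inherit the group (SETGID bit).
SHARED=$(mktemp -d)
chgrp "$(id -gn)" "$SHARED"   # in production: chgrp www-data
chmod 2775 "$SHARED"          # the leading 2 is the SETGID bit
touch "$SHARED/cache.dat"     # new file picks up the directory's group
stat -c '%a %G' "$SHARED"
```

Files written there by the php-fpm pool user end up group-owned so that Apache (in that group) can read them without making them world-readable.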

Nginx

My Google searches for running multiple PHP sites under different UIDs didn’t turn up any good hits. It was only after I found the DigitalOcean page on doing this with Nginx [7] that I knew what to search for to find the way of doing it in Apache.

Krebs on Security: Ransomware Gangs Don’t Need PR Help

We’ve seen an ugly trend recently of tech news stories and cybersecurity firms trumpeting claims of ransomware attacks on companies large and small, apparently based on little more than the say-so of the ransomware gangs themselves. Such coverage is potentially quite harmful and plays deftly into the hands of organized crime.

Often the rationale behind couching these events as newsworthy is that the attacks involve publicly traded companies or recognizable brands, and that investors and the public have a right to know. But absent any additional information from the victim company or their partners who may be affected by the attack, these kinds of stories and blog posts look a great deal like ambulance chasing and sensationalism.

Currently, more than a dozen ransomware crime gangs have erected their own blogs to publish sensitive data from victims. A few of these blogs routinely issue self-serving press releases, some of which gallingly refer to victims as “clients” and cast themselves in a beneficent light. Usually, the blog posts that appear on ransom sites are little more than a teaser — screenshots of claimed access to computers, or a handful of documents that expose proprietary or financial information.

The goal behind the publication of these teasers is clear, and the ransomware gangs make no bones about it: To publicly pressure the victim company into paying up. Those that refuse to be extorted are told to expect that huge amounts of sensitive company data will be published online or sold on the dark web (or both).

Emboldened by their successes, several ransomware gangs recently have started demanding two ransoms: One payment to secure a digital key that can unlock files, folders and directories encrypted by their malware, and a second to avoid having any stolen information published or shared with others.

KrebsOnSecurity has sought to highlight ransomware incidents at companies whose core business involves providing technical services to others — particularly managed service providers that have done an exceptionally poor job communicating about the attack with their customers.

Overall, I’ve tried to use each story to call attention to key failures that frequently give rise to ransomware infections, and to offer information about how other companies can avoid a similar fate.

But simply parroting what professional extortionists have posted on their blog about victims of cybercrime smacks of providing aid and comfort to an enemy that needs and deserves neither.

Maybe you disagree, dear readers? Feel free to sound off in the comments below.

,

Rondam Ramblings: I Will Remember Ricky Ray Rector

I've always been very proud of the fact that I came out in support of gay marriage before it was cool.   I have been correspondingly chagrined at my failure to speak out sooner and more vociferously about the shameful and systemic mistreatment of people of color, and black people in particular, in the U.S.  For what it's worth, I hereby confess my sins, acknowledge my white privilege, and

Cryptogram: Securing the International IoT Supply Chain

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that's selling the stuff to consumers. There's a lot of detail between here and there, though, and it's all in the paper.

We also wrote a Lawfare post:

...we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

Worse Than Failure: CodeSOD: locurlicenseucesss

The past few weeks, I’ve been writing software for a recording device. This is good, because when I’m frustrated by the bugs I put in the code and I start cursing at it, it’s not venting, it’s testing.

There are all sorts of other little things we can do to vent. Imagine, if you will, you find yourself writing an if with an empty body, but an else clause that does work. You’d probably be upset at yourself. You might be stunned. You might be so tired it feels like a good idea at the time. You might be deep in the throes of “just. work. goddammit”. Regardless of the source of that strain, you need to let it out somewhere.

Emmanuelle found this is a legacy PHP codebase:

if(mynum($Q)){
    // Congratulations, you has locurlicenseucesss asdfghjk
} else {
    header("Location: feed.php");
}

I think being diagnosed with locurlicenseucesss should not be a cause for congratulations, but maybe I’m the one that’s confused.

Emmanuelle adds: “Honestly, I have no idea how this happened.”


,

Cryptogram: Android Apps Stealing Facebook Credentials

Google has removed 25 Android apps from its store because they steal Facebook credentials:

Before being taken down, the 25 apps were collectively downloaded more than 2.34 million times.

The malicious apps were developed by the same threat group and despite offering different features, under the hood, all the apps worked the same.

According to a report from French cyber-security firm Evina shared with ZDNet today, the apps posed as step counters, image editors, video editors, wallpaper apps, flashlight applications, file managers, and mobile games.

The apps offered a legitimate functionality, but they also contained malicious code. Evina researchers say the apps contained code that detected what app a user recently opened and had in the phone's foreground.


Krebs on Security: COVID-19 ‘Breach Bubble’ Waiting to Pop?

The COVID-19 pandemic has made it harder for banks to trace the source of payment card data stolen from smaller, hacked online merchants. On the plus side, months of quarantine have massively decreased demand for account information that thieves buy and use to create physical counterfeit credit cards. But fraud experts say recent developments suggest both trends are about to change — and likely for the worse.

The economic laws of supply and demand hold just as true in the business world as they do in the cybercrime space. Global lockdowns from COVID-19 have resulted in far fewer fraudsters willing or able to visit retail stores to use their counterfeit cards, and the decreased demand has severely depressed prices in the underground for purloined card data.

An ad for a site selling stolen payment card data, circa March 2020.

That’s according to Gemini Advisory, a New York-based cyber intelligence firm that closely tracks the inventories of dark web stores trafficking in stolen payment card data.

Stas Alforov, Gemini’s director of research and development, said that since the beginning of 2020 the company has seen a steep drop in demand for compromised “card present” data — digits stolen from hacked brick-and-mortar merchants with the help of malicious software surreptitiously installed on point-of-sale (POS) devices.

Alforov said the median price for card-present data has dropped precipitously over the past few months.

“Gemini Advisory has seen over 50 percent decrease in demand for compromised card present data since the mandated COVID-19 quarantines in the United States as well as the majority of the world,” he told KrebsOnSecurity.

Meanwhile, the supply of card-present data has remained relatively steady. Gemini’s latest find — a 10-month-long card breach at dozens of Chicken Express locations throughout Texas and other southern states that the fast-food chain first publicly acknowledged today after being contacted by this author — saw an estimated 165,000 cards stolen from eatery locations recently go on sale at one of the dark web’s largest cybercrime bazaars.

“Card present data supply hasn’t wavered much during the COVID-19 period,” Alforov said. “This is likely due to the fact that most of the sold data is still coming from breaches that occurred in 2019 and early 2020.”

A lack of demand for and steady supply of stolen card-present data in the underground has severely depressed prices since the beginning of the COVID-19 pandemic. Image: Gemini Advisory

Naturally, crooks who ply their trade in credit card thievery also have been working from home more throughout the COVID-19 pandemic. That means demand for stolen “card-not-present” data — customer payment information extracted from hacked online merchants and typically used to defraud other e-commerce vendors — remains high. And so have prices for card-not-present data: Gemini found prices for this commodity actually increased slightly over the past few months.

Andrew Barratt is an investigator with Coalfire, the cyber forensics firm hired by Chicken Express to remediate the breach and help the company improve security going forward. Barratt said there’s another curious COVID-19 dynamic going on with e-commerce fraud recently that is making it more difficult for banks and card issuers to trace patterns in stolen card-not-present data back to hacked web merchants — particularly smaller e-commerce shops.

“One of the concerns that has been expressed to me is that we’re getting [fewer] overlapping hotspots,” Barratt said. “For a lot of the smaller, more frequently compromised merchants there has been a large drop off in transactions. Whilst big e-commerce has generally done okay during the COVID-19 pandemic, a number of more modest sized or specialty online retailers have not had the same access to their supply chain and so have had to close or drastically reduce the lines they’re selling.”

Banks routinely take groups of customer cards that have experienced fraudulent activity and try to see if some or all of them were used at the same merchant during a similar timeframe, a basic anti-fraud process known as “common point of purchase” or CPP analysis. But ironically, this analysis can become more challenging when there are fewer overall transactions going through a compromised merchant’s site, Barratt said.
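The CPP idea is simple enough to sketch. With entirely made-up data (the card and merchant names below are fabricated for illustration), the merchant shared by the most fraud-flagged cards falls out of a few standard tools:

```shell
# Toy common-point-of-purchase (CPP) analysis over made-up data.
# Each line: fraud-flagged card, merchant where it was used.
TX=$(mktemp)
cat > "$TX" <<'EOF'
card1,merchantA
card1,merchantB
card2,merchantB
card2,merchantC
card3,merchantB
EOF
# Count distinct cards per merchant; the top hit is the CPP candidate.
sort -u "$TX" | cut -d, -f2 | sort | uniq -c | sort -rn | head -1
```

With fewer transactions flowing through a compromised merchant, the counts that would make one merchant stand out simply never accumulate, which is Barratt's point.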

“With a smaller transactional footprint means less Common Point of Purchase alerts and less data to work on to trigger a forensic investigation or fraud alert,” Barratt said. “It does also mean less fraud right now – which is a positive. But one of the big concerns that has been raised to us as investigators — literally asking if we have capacity for what’s coming — has been that merchants are getting compromised by ‘lie in wait’ type intruders.”

Barratt says there’s a suspicion that hackers may have established beachheads [breachheads?] in a number of these smaller online merchants and are simply biding their time. If and when transaction volumes for these merchants do pick up, the concern is then hackers may be in a better position to mix the sale of cards stolen from many hacked merchants and further confound CPP analysis efforts.

“These intruders may have a beachhead in a number of small and/or middle market e-commerce entities and they’re just waiting for the transaction volumes to go back up again and they’ve suddenly got the capability to have skimmers capturing lots of card data in the event of a sudden uptick in consumer spending,” he said. “They’d also have a diverse portfolio of compromise so could possibly even evade common point of purchase detection for a while too. Couple all of that with major shopping cart platforms going out of support (like Magento 1 this month) and furloughed IT and security staff, and there’s a potentially large COVID-19 breach bubble waiting to pop.”

With a majority of payment cards issued in the United States now equipped with a chip that makes the cards difficult and expensive for thieves to clone, cybercriminals have continued to focus on hacking smaller merchants that have not yet installed chip card readers and are still swiping the cards’ magnetic stripe at the register.

Barratt said his company has tied the source of the breach to malware known as “PwnPOS,” an ancient strain of point-of-sale malware that first surfaced more than seven years ago, if not earlier.

Chicken Express CEO Ricky Stuart told KrebsOnSecurity that apart from “a handful” of locations his family owns directly, most of his 250 stores are franchisees that decide on their own how to secure their payment operations. Nevertheless, the company is now forced to examine each store’s POS systems to remediate the breach.

Stuart blamed the major point-of-sale vendors for taking their time in supporting and validating chip-capable payment systems. But when asked how many of the company’s 250 stores had chip-capable readers installed, Stuart said he didn’t know. Ditto for the handful of stores he owns directly.

“I don’t know how many,” he said. “I would think it would be a majority. If not, I know they’re coming.”

Worse Than Failure: CodeSOD: The Data Class

There has been a glut of date-related code in the inbox lately, so it’s always a treat where TRWTF isn’t how they fail to handle dates, and instead, something else. For example, imagine you’re browsing a PHP codebase and see something like:

$fmtedDate = data::now();

You’d instantly know that something was up, just by seeing a class named data. That’s what got Vania’s attention. She dug in, and found a few things.

First, clearly, data is a terrible name for a class. It’d be a terrible name if it was a data access layer, but it has a method now, which tells us that it’s not just handling data.

But it’s not handling data at all. data is more precisely a utility class- the dumping ground for methods that the developer couldn’t come up with a way to organize. It contains 58 methods, 38 of which are 100% static methods, 7 of which should have been declared static but weren’t, and the remainder are actually interacting with $this. All in all, this class must be incredibly “fun”.

Let’s look at the now implementation:

class data
{
    // ...

    public static function now()
    {
        return date('Y', time())."-".date('m', time())."-".date('d')." ".date('H').":".	date('i').":".	date('s');
    }
}

Finally, we get to your traditional bad date handling code. Instead of just using a date format string to get the desired output, we manually construct the string by invoking date a bunch of times. There are some “interesting” choices here: you’ll note that the PHP date function accepts a timestamp parameter (so you can format an arbitrary time), and sometimes they pass in the result of calling time() and sometimes they don’t. This is mostly not a problem, since date will use the current time if you don’t hand it one, so that’s just unnecessary.

But Vania adds some detail:

Because of the multiple calls to time() this code contains a subtle race condition. If it is called at, say, 2019-12-31 23:59:59.999, the date('Y', time()) part will evaluate to “2019”. If the time now ticks over to 2020-01-01 00:00:00.000, the next date() call will return a month value of “01” (and so on for the rest of the expression). The result is a timestamp of “2019-01-01 00:00:00”, which is off by a year. A similar issue happens at the end of every month, day, hour, and minute; i.e. every minute there is an opportunity for the result to be off by a minute.
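The hazard isn’t PHP-specific: any code that assembles one timestamp from several separate clock reads can straddle a boundary. A quick shell illustration of the difference (purely illustrative):

```shell
# Racy: three separate clock reads can straddle a midnight or
# new-year boundary, producing a date that never existed.
racy="$(date +%Y)-$(date +%m)-$(date +%d)"
# Safe: one clock read, one formatting operation.
safe="$(date +%Y-%m-%d)"
echo "$safe"
```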

It’s easy to fix, of course: you could just return date('Y-m-d H:i:s');, which does exactly the same thing, but correctly. However, Vania has this to add:

Unfortunately there is no budget for making this kind of change to the application. Also, its original authors seem to have been fans of “code reuse” by copy/paste: there are four separate instances of this now() function in the codebase, all containing exactly the same code.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 08)

Here’s part eight of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Worse Than FailureAnother Immovable Spreadsheet

OrderStatistics.gif

Steve had been working as a web developer, but his background was in mathematics. Therefore, when a job opened up for internal transfer to the "Statistics" team, he jumped on it and was given the job without too much contest. Once there, he was able to meet the other "statisticians": a group of well-meaning businessfolk with very little mathematical background who used The Spreadsheet to get their work done.

The Spreadsheet was Excel, of course. To enter data, you had to cut and paste columns from various tools into one of the many sheets, letting the complex array of formulas calculate the numbers needed for the quarterly report. Shortly before Steve's transfer, there had apparently been a push to automate some of the processes with SAS, a tool much more suited to this sort of work than a behemoth of an Excel spreadsheet.

A colleague named Stu showed Steve the ropes. Stu admitted there was indeed a SAS process that claimed to do the same functions as The Spreadsheet, but nobody was using it because nobody trusted the numbers that came out of it.

Never the biggest fan of Excel, Steve decided to throw his weight behind the SAS process. He ran the SAS algorithms multiple times, giving the outputs to Stu to compare against the Excel spreadsheet output. For the first three iterations, everything seemed to match exactly. On the fourth, however, Stu told him that one of the outputs was off by 0.2.

To some, this was vindication of The Spreadsheet; after all, why would they need some fancy-schmancy SAS process when Excel worked just fine? Steve wasn't so sure. An error in the code might lead to a big discrepancy, but this sounded more like a rounding error than anything else.

Steve tracked down the relevant documentation for Excel and SAS, and found that both used 64-bit floating point numbers on the 32-bit Windows machines that the calculations were run on. Given that all the calculations were addition and multiplication with no exponents, identical inputs should have produced identical results in both tools; the mistake had to be in either the Excel formulas or the SAS code.

Steve stepped through the SAS process, ensuring that the intermediate outputs in SAS matched the accompanying cells in the Excel sheet. When he'd just about given up hope, he found the issue: a ROUND command, right at the end of the chain where it didn't belong.
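
Steve’s reasoning can be demonstrated with a small sketch (hypothetical Python, standing in for the Excel and SAS engines, with made-up figures): two implementations doing the same additions on the same IEEE 754 doubles agree exactly, and only a trailing round, like the stray ROUND command, introduces a discrepancy.

```python
# Hypothetical intermediate figures; both "tools" see the same inputs.
values = [10.37, 12.41, 9.87, 14.58]

excel_like = sum(values)  # plain double-precision addition
sas_like = sum(values)    # identical operations on identical doubles

# Same additions on the same doubles give bit-identical results, so any
# discrepancy between the tools must come from the code, not the arithmetic.
assert excel_like == sas_like

# A stray ROUND at the end of the chain, like the one Steve found:
rounded = round(sas_like, 1)

print(rounded == excel_like)      # False: the outputs no longer match
print(abs(excel_like - rounded))  # a small rounding discrepancy appears
```

This is why a rounding error was the right hypothesis: the floating-point arithmetic itself is deterministic, and only an extra rounding step can make otherwise identical pipelines disagree by a fraction.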

All of the SAS code in the building had been written by a guy named Brian. Even after Steve had taken over writing SAS, people still sought out Brian for updates and queries, despite his having other work to do.

Steve had no choice but to do the same. He stopped by Brian's cube, knocking perfunctorily before asking, "Why is there a ROUND command at the end of the SAS?"

"There isn't. What?" replied Brian, clearly startled out of his thinking trance.

"No, look, there is," replied Steve, waving a printout. "Why is it there?"

"Oh. That." Brian shrugged. "Excel was displaying only one decimal place for some godforsaken reason, and they wanted the SAS output to be exactly the same."

"I should've known," said Steve, disgustedly. "Stu told me it matched, but it can't have been matching exactly this whole time, not with rounding in there."

"Sure, man. Whatever."

Sadly, Steve was transferred again before the next quarterly run—this time to a company doing proper statistical analysis, not just calculating a few figures for the quarterly presentation. He instructed Stu how to check to fifteen decimal places, but didn't hold out much hope that SAS would come to replace the Excel sheet.

Steve later ran into Stu at a coffee hour. He asked about how the replacement was going.

"I haven't had time to check the figures from SAS," Stu replied. "I'm too busy with The Spreadsheet as-is."

,

CryptogramCOVID-19 Risks of Flying

I fly a lot. Over the past five years, my average speed has been 32 miles an hour. That all changed mid-March. It's been 105 days since I've been on an airplane -- longer than any other time in my adult life -- and I have no future flights scheduled. This is all a prelude to saying that I have been paying a lot of attention to the COVID-related risks of flying.

We know a lot more about how COVID-19 spreads than we did in March. The "less than six feet, more than ten minutes" model has given way to a much more sophisticated model involving airflow, the level of virus in the room, and the viral load in the person who might be infected.

Regarding airplanes specifically: on the whole, they seem safer than many other group activities. Of all the research about contact tracing results I have read, I have seen no stories of a sick person on an airplane infecting other passengers. There are no superspreader events involving airplanes. (That did happen with SARS.) It seems that the airflow inside the cabin really helps.

Airlines are trying to make things better: blocking middle seats, serving less food and drink, trying to get people to wear masks. (This video is worth watching.) I've started to see airlines requiring masks, and banning those who won't wear them, rather than just strongly encouraging them. (If mask wearing is treated the same as seat belt wearing, it will make a huge difference.) Finally, there are a lot of dumb things that airlines are doing.

This article surveyed 511 epidemiologists, and the general consensus was that flying is riskier than getting a haircut but less risky than eating in a restaurant. I think that most of the risk is pre-flight, in the airport: crowds at the security checkpoints, gates, and so on. Those risks are manageable with mask wearing and situational awareness. So while I am not flying yet, I might be willing to soon. (It doesn't help that I get a -1 on my COVID saving throw for type A blood, and another -1 for male pattern baldness. On the other hand, I think I get a +3 Constitution bonus. Maybe, instead of sky marshals, we can have high-level clerics on the planes.)

And everyone: wear a mask, and wash your hands.

EDITED TO ADD (6/27): Airlines are starting to crowd their flights again.

,

Krebs on SecurityRussian Cybercrime Boss Burkov Gets 9 Years

A well-connected Russian hacker once described as “an asset of supreme importance” to Moscow was sentenced on Friday to nine years in a U.S. prison after pleading guilty to running a site that sold stolen payment card data, and to administering a highly secretive crime forum that counted among its members some of the most elite Russian cybercrooks.

Alexei Burkov, seated second from right, attends a hearing in Jerusalem in 2015. Photo: Andrei Shirokov / Tass via Getty Images.

Aleksei Burkov of St. Petersburg, Russia admitted to running CardPlanet, a site that sold more than 150,000 stolen credit card accounts, and to being a founder of DirectConnection — a closely guarded underground community that attracted some of the world’s most-wanted Russian hackers.

As KrebsOnSecurity noted in a November 2019 profile of Burkov’s hacker nickname ‘k0pa,’ “a deep dive into the various pseudonyms allegedly used by Burkov suggests this individual may be one of the most connected and skilled malicious hackers ever apprehended by U.S. authorities, and that the Russian government is probably concerned that he simply knows too much.”

Burkov was arrested in 2015 on an international warrant while visiting Israel, and over the ensuing four years the Russian government aggressively sought to keep him from being extradited to the United States.

When Israeli authorities turned down requests to send him back to Russia — supposedly to face separate hacking charges there — the Russians then imprisoned Israeli citizen Naama Issachar on trumped-up drug charges in a bid to trade prisoners. Nevertheless, Burkov was extradited to the United States in November 2019. Russian President Vladimir Putin pardoned Issachar in January 2020, just hours after Burkov pleaded guilty.

Arkady Bukh is a New York attorney who has represented a number of accused and convicted cybercriminals from Eastern Europe and Russia. Bukh said he suspects Burkov did not cooperate with Justice Department investigators apart from agreeing not to take the case to trial.

“Nine years is a huge sentence, and the government doesn’t give nine years to defendants who cooperate,” Bukh said. “Also, the time span [between Burkov’s guilty plea and sentencing] was very short.”

DirectConnection was something of a Who’s Who of major cybercriminals, and many of its most well-known members have likewise been extradited to and prosecuted by the United States. Those include Sergey “Fly” Vovnenko, who was sentenced to 41 months in prison for operating a botnet and stealing login and payment card data. Vovnenko also served as administrator of his own cybercrime forum, which he used in 2013 to carry out a plan to have Yours Truly framed for heroin possession.

As noted in last year’s profile of Burkov, an early and important member of DirectConnection was a hacker who went by the moniker “aqua” and ran the banking sub-forum on Burkov’s site. In December 2019, the FBI offered a $5 million bounty leading to the arrest and conviction of aqua, who’s been identified as Maksim Viktorovich Yakubets. The Justice Department says Yakubets/aqua ran a transnational cybercrime organization called “Evil Corp.” that stole roughly $100 million from victims.

In this 2011 screenshot of DirectConnection, we can see the nickname of “aqua,” who ran the “banking” sub-forum on DirectConnection. Aqua, a.k.a. Maksim V. Yakubets of Russia, now has a $5 million bounty on his head from the FBI.

According to a statement of facts in Burkov’s case, the author of the infamous SpyEye banking trojan, Aleksandr “Gribodemon” Panin, was personally vouched for by Burkov. Panin was sentenced in 2016 to more than nine years in prison.

Other top DirectConnection members include convicted credit card fraudsters Vladislav “Badb” Horohorin and Sergey “zo0mer” Kozerev, as well as the infamous spammer and botnet master Peter “Severa” Levashov.

Also on Friday, the Justice Department said it obtained a guilty plea from another top cybercrime forum boss — Sergey “Stells” Medvedev — who admitted to administering the Infraud forum. The government says Infraud, whose slogan was “In Fraud We Trust,” attracted more than 10,000 members and inflicted more than $568 million in actual losses from the sale of stolen identity information, payment card data and malware.

A copy of the 108-month judgment entered against Burkov is available here (PDF).

MELinks June 2020

Bruce Schneier wrote an informative post about Zoom security problems [1]. He recommends Jitsi, which is free software and has a Debian package.

Axel Beckert wrote an interesting post about keyboards with small numbers of keys, as few as 28 [2]. It’s not something I’d ever want to use, but interesting to read from a computer science and design perspective.

The Guardian has a disturbing article explaining why we might never get a good Covid-19 vaccine [3]. If that happens it will change our society for years if not decades to come.

Matt Palmer wrote an informative blog post about private key redaction [4]. I learned a lot from that. Probably the simplest summary is that you should never publish sensitive data unless you are certain that everything you are publishing is suitable; if you don’t understand it, then you don’t know whether it’s suitable to be published!

This article by Umair Haque on eand.co has some interesting points about how Freedom is interpreted in the US [5].

This article by Umair Haque on eand.co has some good points about how messed up the US is economically [6]. I think that his analysis is seriously let down by omitting the savings that could be made by amending the US healthcare system without serious changes (e.g. by controlling drug prices) and by reducing the scale of the US military (there will never be another war like WW2, because any large-scale war will be nuclear). If the US government could significantly cut spending in a couple of major areas, it could then put the money towards fixing some of the structural problems and bootstrapping a first-world economic system.

The American Conservative has an insightful article, “Seven Reasons Police Brutality is Systemic, Not Anecdotal” [7].

Scientific American has an informative article about how genetic engineering could be used to make a Covid-19 vaccine [8].

Rike wrote an insightful post about How Language Changes Our Concepts [9]. They cover the differences between the French, German, and English languages in their treatment of gender, and how language can limit thought. They conclude with the need to remove terms like master/slave and blacklist/whitelist from our software, with a focus on Debian, though it’s applicable to all software.

Gunnar Wolf also wrote an insightful post, On Masters and Slaves, Whitelists and Blacklists [10]. They start with why some people might not understand the importance of the issue, then explain some ways of addressing it. The list of suggested terms includes primary/secondary and leader/follower, as well as other terms with slightly different meanings that allow more precision in describing the computer science concepts involved. We can be more precise when describing computer science while also not using terms that marginalise some groups of people; it’s a win-win!

Both Rike and Gunnar were responding to a LWN article about the plans to move away from master/slave and blacklist/whitelist in the Linux kernel [11]. One of the noteworthy points in the LWN article is that there are about 70,000 instances of words that need to be changed in the Linux kernel, so this isn’t going to happen immediately. But it will happen eventually, which is a good thing.

,

CryptogramFriday Squid Blogging: Fishing for Jumbo Squid

Interesting article on the rise of the jumbo squid industry as a result of climate change.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 07)

Here’s part seven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 06)

Here’s part six of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

LongNowRacial Injustice & Long-term Thinking

Long Now Community,

Since 01996, Long Now has endeavored to foster long-term thinking in the world by sharing perspectives on our deep past and future. We have too often failed, however, to include and listen to Black perspectives.

Racism is a long-term civilizational problem with deep roots in the past, profound effects in the present, and so many uncertain futures. Solving the multigenerational challenge of racial inequality requires many things, but long-term thinking is undoubtedly one of them. As an institution dedicated to the long view, we have not addressed this issue enough. We can and will do better.

We are committed to surfacing these perspectives on both the long history of racial inequality and possible futures of racial justice going forward, both through our speaker series and in the resources we share online. And if you have any suggestions for future resources or speakers, we are actively looking.

Alexander Rose

Executive Director, Long Now

Learn More

  • A recent episode of This American Life explored Afrofuturism: “It’s more than sci-fi. It’s a way of looking at black culture that’s fantastic, creative, and oddly hopeful—which feels especially urgent during a time without a lot of optimism.”
  • The 1619 Project from The New York Times, winner of the 02020 Pulitzer Prize for Commentary, re-examines the 400-year legacy of slavery.
  • A paper from political scientists at Stanford and Harvard analyzes the long-term effects of slavery on Southern attitudes toward race and politics.
  • Ava DuVernay’s 13th is a Netflix documentary about the links between slavery and the US penal system. It is available to watch on YouTube for free here
  • “The Case for Reparations” is a landmark essay by Ta-Nehisi Coates about the institutional racism of housing discrimination. 
  • I Am Not Your Negro is a documentary exploring the history of racism in the United States through the lens of the life of writer and activist James Baldwin. The documentary is viewable on PBS for free here.
  • Science Fiction author N.K. Jemisin defies convention and creates worlds informed by the structural forces that cause inequality.