Planet Russell


Sam Varghese: One feels sorry for Emma Alberici, but that does not mask the fact that she was incompetent

Last month, the Australian Broadcasting Corporation, a taxpayer-funded entity, made several people redundant due to a cut in funding by the Federal Government. Among them was Emma Alberici, a presenter who has been lionised a great deal as someone with great talents, but who is actually a mediocre hack lacking in ability.

What marked Alberici out was the fact that she held the glorious title of chief economics correspondent at the ABC, yet was never seen on any TV show giving her opinion about anything to do with economics. Over the last two years, China and the US have been engaged in an almighty stoush; Australia, as a country that considers the US its main ally and has China as its major trading partner, has naturally been of interest too.

But the ABC always put forward people like Peter Ryan, a senior business correspondent, or Ian Verrender, the business editor, when there was a need for someone to appear during a news bulletin and provide a little insight into these matters. Alberici, it seemed, was persona non grata.

The reason she was kept behind the curtains, it turns out, was that she did not really know much about economics. This was made plain in April 2018 when she wrote what columnist Joe Aston of the Australian Financial Review described as “an undergraduate yarn touting ‘exclusive analysis’ of one publicly available document from which she derived that ‘one in five of Australia’s top companies has paid zero tax for the past three years’ and that ‘Australia’s largest companies haven’t paid corporate tax in 10 years’.”

This article drew much criticism from many people, including politicians, who complained to the ABC. Her magnum opus has now disappeared from the web – an archived version is here. The fourth paragraph read: “Exclusive analysis released by ABC today reveals one in five of Australia’s top companies has paid zero tax for the past three years.” A Twitter teaser said: “Australia’s largest companies haven’t paid corporate tax in 10 years.”

As Aston pointed out in withering tones: “Both premises fatally expose their author’s innumeracy. The first is demonstrably false. Freely available data produced by the Australian Taxation Office show that 32 of Australia’s 50 largest companies paid $19.33 billion in company tax in FY16 (FY17 figures are not yet available). The other 18 paid nothing. Why? They lost money, or were carrying over previous losses.

“Company tax is paid on profits, so when companies make losses instead of profits, they don’t pay it. Amazing, huh? And since 1989, the tax system has allowed losses in previous years to be carried forward – thus companies pay tax on the rolling average of their profits and losses. This is stuff you learn in high school. Except, obviously, if your dream by then was to join the socialist collective at Ultimo, to be a superstar in the cafes of Haberfield.”

As expected, Alberici protested, and even called in her lawyer when the ABC took down the article. It was put up again with several corrections. But the damage had been done and the great economics wizard had been unmasked. She did not take it well, protesting that 17 years earlier she had been nominated for a Walkley Award for her work on tax minimisation.

Underlining her lack of knowledge of economics, Alberici, who was paid a handsome $189,000 per annum by the ABC, embarrassed herself again in May the same year. In an interview with Labor finance spokesman Jim Chalmers, she kept pushing him on what he would do with the higher surpluses he proposed to run were a Labor government to be elected.

Aston has his own inimitable style, so let me use his words: “This time, Alberici’s utter non-comprehension of public sector accounting is laid bare in three unwitting confessions in her studio interview with Labor’s finance spokesman Jim Chalmers after Tuesday’s Budget.

“[Shadow Treasurer] Chris Bowen on the weekend told my colleague Barrie Cassidy that you want to run higher surpluses than the Government. How much higher than the Government and what would you do with that money?”

“Wearing that unmistakable WTF expression on his dial, Chalmers was careful to evade her illogical query.

“Undeterred, Alberici pressed again. ‘And what will you do with those surpluses?’ A second time, Chalmers dissembled.

“A third time, the cock crowed (this really was as inevitable as the betrayal of Jesus by St Peter). ‘Sorry, no, you said you would run higher surpluses, so what happens to that money?’

“Hanged [sic] by her own persistence. Chalmers, at this point, put her out of her blissful ignorance – or at least tried! ‘Surpluses go towards paying down debt’.

“Bingo. C’est magnifique! Hey, we majored in French. After 25 years in business and finance reporting, Alberici apparently thinks a budget surplus is an account with actual money in it, not a figure reached by deducting the Commonwealth’s expenditure from its revenue.”

Alberici has continued to insist that her knowledge of economics is adequate, and has been extremely annoyed whenever such criticism comes from a Murdoch publication. But the fact is that she is largely ignorant of the basics of economics.

Her incompetence is not limited to this one field. As I have pointed out, there have been occasions in the past when she has shown that her knowledge of foreign affairs is as good as her knowledge of economics.

After she was made redundant, Alberici published a series of tweets which she later removed. An archive of that is here.

Alberici was the host of a program called Business Breakfast some years ago. It failed and had to be taken off the air. Then she was made the main host of the ABC’s flagship late news program, Lateline. That program was taken off the air last year due to budget cuts, though Alberici’s performance on it, to put it mildly, did not set the Yarra on fire.

Now she has joined an insurance comparison website, Compare The Market. That company has a bit of a dodgy reputation, as the Australian news website The New Daily pointed out in 2014. As reporter George Lekakis wrote: “The website is owned by a leading global insurer that markets many of its own products on the site. While comparethemarket.com.au offers a broad range of products for life insurance and travel cover, most of its general insurance offerings are underwritten by an insurer known as Auto & General.

“Auto & General is a subsidiary of global financial services group Budget Holdings Limited and is the ultimate owner of comparethemarket.com.au. An investigation by The New Daily of the brands marketed by the website for auto and home insurance reveals a disproportionate weighting to A&G products.”

Once again, the AFR, this time through columnist Myriam Robin, was not exactly complimentary about Alberici’s new billet. But perhaps it is a better fit for Alberici than the ABC, where she was really a square peg in a round hole. She is writing a book in which, no doubt, she will again try to put her case forward and prove she was a brilliant journalist who lost her job due to other people politicking against her. But the facts say otherwise.

Worse Than Failure: CodeSOD: A Poor Facsimile

For every leftpad debacle, there are a thousand “utility belt” libraries for JavaScript. Whether it’s the “venerable” jQuery, or lodash, or maybe Google’s Closure library, there’s a pile of things that usually end up in a 50,000-line util.js file, available for use and packaged up a little more cleanly.

Dan B had a co-worker who really wanted to start using Closure. But they also wanted to be “abstract”, and “loosely coupled”, so that they could swap out implementations of these utility functions in the future.

This meant that they wrote a lot of code like:

initech.util.clamp = function(value, min, max) {
  return goog.math.clamp(value, min, max);
}

But it wasn’t enough to just write loads of functions like that. This co-worker “knew” they’d need to provide their own implementations for some methods, reflecting how the utility library couldn’t always meet their specific business needs.

Unfortunately, Dan noticed some of those “reimplementations” didn’t always behave correctly, for example:

initech.util.isDefAndNotNull(null); //returns true

Let’s look at the Google implementations of some methods, annotated by Dan:

goog.isDef = function(val) {
  return val !== void 0; // tests for undefined
}
goog.isDefAndNotNull = function(val) {
  return val != null; // Also catches undefined since ==/!= instead of ===/!== allows for implicit type conversion to satisfy the condition.
}
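
For reference, the loose comparison really does cover both cases; this is standard JavaScript semantics, shown here as a quick console check:

console.log(null == undefined);  // true  -- so `val != null` rejects both
console.log(null === undefined); // false -- strict comparison does not coerce
console.log(0 == null);          // false -- no other falsy value is loosely equal to null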

Setting aside the problems of coercing vs. non-coercing comparison operators even being a thing, these both do the job they’re supposed to do. But Dan’s co-worker needed to reinvent this particular wheel.

initech.util.isDef = function (val) {
  return val !== void 0; 
}

This code is useless, since we already have it in our library, but whatever. It’s fine and correct.

initech.util.isDefAndNotNull = initech.util.isDef; 

This developer is the kid who copied someone else’s homework and still somehow managed to get a failing grade. They had a working implementation they could have referenced to see what they needed to do. The names of the methods themselves should have probably provided a hint that they needed to be mildly different. There was no reason to write any of this code, and they still couldn’t get that right.
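
For what it’s worth, the fix would have been a one-liner mirroring the Google implementation quoted above:

initech.util.isDefAndNotNull = function (val) {
  return val != null; // the coercing != catches undefined as well as null
};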

Dan adds:

Thankfully a lot of this [functionality] was recreated in C# for another project…. I’m looking to bring all that stuff back over… and just destroy this monstrosity we created. Goodbye JavaScript!



Krebs on Security: Who’s Behind Monday’s 14-State 911 Outage?

Emergency 911 systems were down for more than an hour on Monday in towns and cities across 14 U.S. states. The outages led many news outlets to speculate the problem was related to Microsoft‘s Azure web services platform, which also was struggling with a widespread outage at the time. However, multiple sources tell KrebsOnSecurity the 911 issues stemmed from some kind of technical snafu involving Intrado and Lumen, two companies that together handle 911 calls for a broad swath of the United States.

Image: West.com

On the afternoon of Monday, Sept. 28, several states including Arizona, California, Colorado, Delaware, Florida, Illinois, Indiana, Minnesota, Nevada, North Carolina, North Dakota, Ohio, Pennsylvania and Washington reported 911 outages in various cities and localities.

Multiple news reports suggested the outages might have been related to an ongoing service disruption at Microsoft. But a spokesperson for the software giant told KrebsOnSecurity, “we’ve seen no indication that the multi-state 911 outage was a result of yesterday’s Azure service disruption.”

Inquiries made with emergency dispatch centers at several of the towns and cities hit by the 911 outage pointed to a different source: Omaha, Neb.-based Intrado — until last year known as West Safety Communications — a provider of 911 and emergency communications infrastructure, systems and services to telecommunications companies and public safety agencies throughout the country.

Intrado did not respond to multiple requests for comment. But according to officials in Henderson County, NC, which experienced its own 911 failures yesterday, Intrado said the outage was the result of a problem with an unspecified service provider.

“On September 28, 2020, at 4:30pm MT, our 911 Service Provider observed conditions internal to their network that resulted in impacts to 911 call delivery,” reads a statement Intrado provided to county officials. “The impact was mitigated, and service was restored and confirmed to be functional by 5:47PM MT.  Our service provider is currently working to determine root cause.”

The service provider referenced in Intrado’s statement appears to be Lumen, a communications firm and 911 provider that until very recently was known as CenturyLink Inc. A look at the company’s status page indicates multiple Lumen systems experienced total or partial service disruptions on Monday, including its private and internal cloud networks and its control systems network.

Lumen’s status page indicates the company’s private and internal cloud and control system networks had outages or service disruptions on Monday.

In a statement provided to KrebsOnSecurity, Lumen blamed the issue on Intrado.

“At approximately 4:30 p.m. MT, some Lumen customers were affected by a vendor partner event that impacted 911 services in AZ, CO, NC, ND, MN, SD, and UT,” the statement reads. “Service was restored in less than an hour and all 911 traffic is routing properly at this time. The vendor partner is in the process of investigating the event.”

It may be no accident that both of these companies are now operating under new names, as this would hardly be the first time a problem between the two of them has disrupted 911 access for a large number of Americans.

In 2019, Intrado/West and CenturyLink agreed to pay $575,000 to settle an investigation by the Federal Communications Commission (FCC) into an Aug. 2018 outage that lasted 65 minutes. The FCC found that incident was the result of a West Safety technician bungling a configuration change to the company’s 911 routing network.

On April 6, 2014, some 11 million people across the United States were disconnected from 911 services for eight hours thanks to an “entirely preventable” software error tied to Intrado’s systems. The incident affected 81 call dispatch centers, rendering emergency services inoperable in all of Washington and parts of North Carolina, South Carolina, Pennsylvania, California, Minnesota and Florida.

According to a 2014 Washington Post story about a subsequent investigation and report released by the FCC, that issue involved a problem with the way Intrado’s automated system assigns a unique identifying code to each incoming call before passing it on to the appropriate “public safety answering point,” or PSAP.

“On April 9, the software responsible for assigning the codes maxed out at a pre-set limit,” The Post explained. “The counter literally stopped counting at 40 million calls. As a result, the routing system stopped accepting new calls, leading to a bottleneck and a series of cascading failures elsewhere in the 911 infrastructure.”

Compounding the length of the 2014 outage, the FCC found, was that the Intrado server responsible for categorizing and keeping track of service interruptions classified them as “low level” incidents that were never flagged for manual review by human beings.

The FCC ultimately fined Intrado and CenturyLink $17.4 million for the multi-state 2014 outage. An FCC spokesperson declined to comment on Monday’s outage, but said the agency was investigating the incident.

Planet Debian: Mike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 03)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. On 27th Feb 2020, Unity8 was renamed to Lomiri.

Recent Uploads to Debian related to Lomiri

Over the past 4 months I worked on the following bits and pieces regarding Lomiri in Debian:

  • Work on lomiri-app-launch (Debian packaging, upstream work, upload to Debian)
  • Fork lomiri-url-dispatcher from url-dispatcher (upstream work)
  • Upload lomiri-url-dispatcher to Debian
  • Fork out suru-icon-theme and make it its own upstream project
  • Package and upload suru-icon-theme to Debian
  • First glance at lomiri-ui-toolkit (currently FTBFS, needs to be revisited)
  • Update of Mir (1.7.0 -> 1.8.0) in Debian
  • Fix net-cpp FTBFS in Debian
  • Fix FTBFS in gsettings-qt.
  • Fix FTBFS in mir (support of binary-only and arch-indep-only builds)
  • Coordinate with Marius Gripsgard and Robert Tari on shift over from Ubuntu Indicator to Ayatana Indicators
  • Upload ayatana-indicator-* (and libraries) to Debian (new upstream releases)
  • Package and upload to Debian: qmenumodel (still in Debian's NEW queue)
  • Package and upload to Debian: ayatana-indicator-sound
  • Symbol-Updates (various packages) for non-standard architectures
  • Fix FTBFS of qtpim-opensource-src in Debian since Qt5.14 had landed in unstable
  • Fix FTBFS on non-standard architectures of qtsystems, qtpim and qtfeedback
  • Fix wlcs in Debian (for non-standard architectures), more Symbol-Updates (esp. for the mir DEB package)
  • Symbol-Updates (mir, fix upstream tinkering with debian/libmiral3.symbols)
  • Fix FTBFS in lomiri-url-dispatcher against Debian unstable, file merge request upstream
  • Upstream release of qtmir 0.6.1 (via merge request)
  • Improve check_whitespace.py script as used in lomiri-api to ignore debian/ subfolder
  • Upstream release of lomiri-api 0.1.1 and upload to Debian unstable.

The next two big projects / packages ahead are lomiri-ui-toolkit and qtmir.

Credits

Many big thanks go to Marius and Dalton for their work on the UBports project and being always available for questions, feedback, etc.

Thanks to Ratchanan Srirattanamet for providing some of his time for debugging some non-thread-safe unit tests (currently unsure what package we actually looked at...).

Thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Previous Posts about my Debian UBports Team Efforts

Planet Debian: Vincent Bernat: Speeding up bgpq4 with IRRd in a container

When building route filters with bgpq4 or bgpq3, the speed of rr.ntt.net or whois.radb.net can be a bottleneck. Updating many filters may take several tens of minutes, depending on the load:

$ time bgpq4 -h whois.radb.net AS-HURRICANE | wc -l
909869
1.96s user 0.15s system 2% cpu 1:17.64 total
$ time bgpq4 -h rr.ntt.net AS-HURRICANE | wc -l
927865
1.86s user 0.08s system 12% cpu 14.098 total

A possible solution is to have your own IRRd instance in your network, mirroring the main routing registries. A close alternative is to bundle IRRd with all the data in a ready-to-use Docker image. This also has the advantage of easy integration into a Docker-based CI/CD pipeline.

$ git clone https://github.com/vincentbernat/irrd-legacy.git -b blade/master
$ cd irrd-legacy
$ docker build . -t irrd-snapshot:latest
[…]
Successfully built 58c3e83a1d18
Successfully tagged irrd-snapshot:latest
$ docker container run --rm --detach --publish=43:43 irrd-snapshot
4879cfe7413075a0c217089dcac91ed356424c6b88808d8fcb01dc00eafcc8c7
$ time bgpq4 -h localhost AS-HURRICANE | wc -l
904137
1.72s user 0.11s system 96% cpu 1.881 total

The Dockerfile contains three stages:

  1. building IRRd,1
  2. retrieving various IRR databases, and
  3. assembling the final container with the result of the two previous stages.

The second stage fetches the databases used by rr.ntt.net: NTTCOM, RADB, RIPE, ALTDB, BELL, LEVEL3, RGNET, APNIC, JPIRR, ARIN, BBOI, TC, AFRINIC, ARIN-WHOIS, and REGISTROBR. However, it misses RPKI.2 Feel free to adapt!

The image can be scheduled to be rebuilt daily or weekly, depending on your needs. The repository includes a .gitlab-ci.yml file automating the build and triggering the compilation of all filters by your CI/CD upon success.
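
If you are not using GitLab, a plain cron entry can handle the periodic rebuild just as well. A minimal sketch in /etc/cron.d style, assuming the repository was cloned to /srv/irrd-legacy (the path and schedule are placeholders):

# Rebuild the IRRd snapshot image nightly, pulling fresh base layers
30 3 * * * root cd /srv/irrd-legacy && docker build --pull -t irrd-snapshot:latest .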


  1. Instead of using the latest version of IRRd, the image relies on an older version that does not require a PostgreSQL instance and uses flat files instead. ↩︎

  2. Unlike the others, the RPKI database is built from the published RPKI ROAs↩︎

Worse Than Failure: CodeSOD: Taking Your Chances

A few months ago, Sean had some tasks around building a front-end for some dashboards. Someone on the team picked a UI library for managing their widgets. It had lovely features like handling flexible grid layouts, collapsing/expanding components, making components full screen and even drag-and-drop widget rearrangement.

It was great, until one day it wasn't. As more and more people started using the front end, they kept getting more and more reports about broken dashboards, misrendering, duplicated widgets, and all sorts of other difficult-to-explain bugs. These were bugs which Sean and the other developers had a hard time replicating, and even on the rare occasions that they did replicate one, they couldn't do it twice in a row.

Stumped, Sean took a peek at the code. Each on-screen widget was assigned a unique ID. Except at those times when the screen broke- the IDs weren't unique. So clearly, something went wrong with the ID generation, which seemed unlikely. All the IDs were random-junk, like 7902699be4, so there shouldn't be a collision, right?

generateID() {
  return sha256(Math.floor(Math.random() * 100)).substring(1, 10);
}

Good idea: feeding random values into a hash function to generate unique IDs. Bad idea: constraining your range of random values to integers between 0 and 99.

SHA256 will take inputs like 0, and output nice long, complex strings like "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9". And a mildly different input will generate a wildly different output. This would be good for IDs, if you had a reasonable expectation that the values you're seeding are themselves unique. For this use case, Math.random() is probably good enough. Even after trimming to the first ten characters of the hash, I only hit two collisions for every million IDs in some quick tests. But Math.floor(Math.random() * 100) is going to give you a collision a lot more frequently. Even if you have a small number of widgets on the screen, a collision is probable. Just rare enough to be hard to replicate, but common enough to be a serious problem.
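
A less collision-prone sketch, keeping roughly the same ID length but drawing entropy directly from the Web Crypto API instead of hashing one of a hundred integers (the function name is kept from the original; this assumes a browser or modern Node environment where crypto.getRandomValues is available):

function generateID() {
  // 5 random bytes = 40 bits of entropy = 10 hex characters.
  // Birthday math: ~50% collision odds only after about a million IDs,
  // versus ~50% at just a dozen IDs with a 0-99 input space.
  const bytes = new Uint8Array(5);
  crypto.getRandomValues(bytes);
  return Array.from(bytes, b => b.toString(16).padStart(2, '0')).join('');
}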

Sean did not share what they did with this library. Based on my experience, it was probably "nothing, we've already adopted it" and someone ginned up an ugly hack that patches the library at runtime to replace the method. Despite the library being open source, nobody in the organization wants to maintain a fork, and legal needs to get involved if you want to contribute back upstream, so just runtime patch the object to replace the method with one that works.

At least, that's a thing I've had to do in the past. No, I'm not bitter.


Planet Debian: Norbert Preining: Performance with Intel i218/i219 NIC

I always had the feeling that my server, hosted by Hetzner, somehow had a slow internet connection. I put it down to the distance between Finland and Japan, and didn’t care too much. Then yesterday my server stopped reacting to pings/ssh and needed a hard reset. It turned out that the server was running fine; only the ethernet card had hung. Hetzner support answered promptly and directed me to this web page, which described a change in the kernel concerning fragmentation offloading, and suggested the following configuration to regain connection speed:

ethtool -K <interface> tso off gso off

And to my surprise, this simple thing did wonders: the connection speed improved dramatically, even from Japan (something like a factor of 10 on large rsync transfers). I have added this incantation to the system crontab to run every hour, just to be sure it is reapplied even after a reboot.
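
Such an entry might look like this in /etc/cron.d style (the interface name enp0s31f6 is a placeholder; use whatever ip link reports for your NIC):

# Re-apply offload settings hourly in case of a reboot
0 * * * * root /sbin/ethtool -K enp0s31f6 tso off gso off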

If you have bad connection speed with this kind of ethernet card, give it a try.


Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 16)

Here’s part sixteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Cryptogram: Negotiating with Ransomware Gangs

Really interesting conversation with someone who negotiates with ransomware gangs:

For now, it seems that paying ransomware, while obviously risky and empowering/encouraging ransomware attackers, can perhaps be comported so as not to break any laws (like anti-terrorist laws, FCPA, conspiracy and others) — and even if payment is arguably unlawful, seems unlikely to be prosecuted. Thus, the decision whether to pay or ignore a ransomware demand seems less of a legal, and more of a practical, determination — almost like a cost-benefit analysis.

The arguments for rendering a ransomware payment include:

  • Payment is the least costly option;
  • Payment is in the best interest of stakeholders (e.g. a hospital patient in desperate need of an immediate operation whose records are locked up);
  • Payment can avoid being fined for losing important data;
  • Payment means not losing highly confidential information; and
  • Payment may mean not going public with the data breach.

The arguments against rendering a ransomware payment include:

  • Payment does not guarantee that the right encryption keys with the proper decryption algorithms will be provided;
  • Payment further funds additional criminal pursuits of the attacker, enabling a cycle of ransomware crime;
  • Payment can do damage to a corporate brand;
  • Payment may not stop the ransomware attacker from returning;
  • If victims stopped making ransomware payments, the ransomware revenue stream would stop and ransomware attackers would have to move on to perpetrating another scheme; and
  • Using Bitcoin to pay a ransomware attacker can put organizations at risk. Most victims must buy Bitcoin on entirely unregulated and free-wheeling exchanges that can also be hacked, leaving buyers’ bank account information stored on these exchanges vulnerable.

When confronted with a ransomware attack, the options all seem bleak. Pay the hackers — and the victim may not only prompt future attacks, but there is also no guarantee that the hackers will restore a victim’s dataset. Ignore the hackers — and the victim may incur significant financial damage or even find themselves out of business. The only guarantees during a ransomware attack are the fear, uncertainty and dread inevitably experienced by the victim.

Cryptogram: Hacking a Coffee Maker

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — and possibly other appliances made by Smarter — to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

Kevin Rudd: BBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge

E&OE TRANSCRIPT
TV INTERVIEW
BBC World
26 SEPTEMBER 2020

 

Mike Embley
Let’s speak to a man who has a lot of experience with China. Kevin Rudd, former Prime Minister of Australia, now president of the Asia Society Policy Institute. Mr. Rudd, that is a headline-grabbing thing to say at the United Nations General Assembly, isn’t it? Do you think, in a very directed economy with a population that’s given very little individual choice, it is possible? And if so, how?

Kevin Rudd
I think it is possible. And the reason I say so is because China does have a state planning system. And on top of that, China has concluded, I think that it’s in its own national interests, to bring about a radical reduction in carbon over time. So the two big announcements coming out of Xi Jinping at the UN General Assembly have been A, the one you’ve just referred to, which is to achieve carbon neutrality before 2060. And also for China to reach what’s called peak greenhouse gas emissions before 2030. The international community have been pressuring them to bring that forward as much as possible, respectively, we hope to close to 2050. And we hope close to 2025, as far as peak emissions are concerned, but we’ll have to wait and see until China produces its 14th five year plan.

Mike Embley
I guess we can probably assume that President Trump wasn’t impressed, he’s made his position pretty clear on these matters. How do you think other countries will respond? Or will they just say, well, that’s China, China can do that we really can’t?

Kevin Rudd
Well, I think you’re right to characterize President Trump’s reaction as dismissive, because ultimately, he does not seem to be persuaded at all by the climate change science. And the United States under his presidency has been completely missing in action, in terms of global work on climate change, mitigation activity. But when you look at the other big player on global climate change actions, the European Union, and I think positively speaking, both the European Union and working as they did, through their virtual discussions with the Chinese leadership last week, had been encouraging directly the Chinese to take these sorts of actions. I think the Europeans will be pleased by what they have heard. But there’s a caveat to this as well. One of the other things that Europeans asked is for China, not just to make concrete its commitments in terms of peak emissions and carbon neutrality, but also not to offshore China’s responsibilities by building a lot of carbon based or fossil fuel based power generation plants in the Belt and Road Initiative countries beyond China’s borders. That’s where we have to see further action by China as well.

Mike Embley
Just briefly, if you can, you know, there will be people yelling at the screen, given how fast climate change is becoming a climate emergency, that 2050-2060 is nowhere near soon enough that we have to inflict really massive changes on ourselves.

Kevin Rudd
Well, the bottom line is that if you look at the science, it requires us to be carbon neutral by 2050 as a planet. Given that China is the world’s largest emitter by a country mile, I’d say to those throwing shoes at the screen: getting China to that position, given where they were frankly 10 years ago when I first began negotiating with the Chinese leadership on climate change matters, is a big step forward. And secondly, what I find most encouraging is that China’s national political interests, I think, have been engaged. They don’t want to suffocate their own national economic future, any more than they want to suffocate the world’s.


Kevin Rudd: CNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites

E&OE TRANSCRIPT
TV INTERVIEW
CNN
25 SEPTEMBER 2020


Long Now: Charting Earth’s (Many) Mass Extinctions

How many mass extinctions has the Earth had, really? Most people talk today as if it’s five, but where one draws the line determines everything, and some say over twenty.

However many it might be, new mass extinctions seem to reveal themselves with shocking frequency. Just last year researchers argued for another giant die-off just preceding the Earth’s worst, the brutal end-Permian.

Now, one more has come into focus as stratigraphy improves. With that, four of the most awful things that life has ever suffered all came down (in many cases, literally, as volcanic ash) within 60 million years. Not a great span of time with which to spin the wheel, in case your time machine is imprecise…

Planet Debian: Kentaro Hayashi: dnsZoneEntry: field should be removed when DD is retired

It is known that a Debian Developer can set up *.debian.net subdomains.

wiki.debian.org

When a Debian Developer retires, the actual DNS entry is removed, but the dnsZoneEntry: field is kept in LDAP (db.debian.org).

So you cannot reuse a *.debian.net subdomain if a retired Debian Developer already owns your preferred one.

I've posted a question about this currently undocumented specification.

lists.debian.org

Planet Debian: Vincent Bernat: Syncing RIPE, ARIN and APNIC objects with a custom Ansible module

The Internet is split into five regional Internet registries: AFRINIC, ARIN, APNIC, LACNIC and RIPE. Each RIR maintains an Internet Routing Registry. An IRR allows one to publish information about the routing of Internet number resources.1 Operators use this to determine the owner of an IP address and to construct and maintain routing filters. To ensure your routes are widely accepted, it is important to keep the prefixes you announce up-to-date in an IRR.

There are two common tools to query this database: whois and bgpq4. The first one allows you to do a query with the WHOIS protocol:

$ whois -BrG 2a0a:e805:400::/40
[…]
inet6num:       2a0a:e805:400::/40
netname:        FR-BLADE-CUSTOMERS-DE
country:        DE
geoloc:         50.1109 8.6821
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:58Z
last-modified:  2020-05-19T08:04:58Z
source:         RIPE

route6:         2a0a:e805:400::/40
descr:          Blade IPv6 - AMS1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

The second one allows you to build route filters using the information contained in the IRR database:

$ bgpq4 -6 -S RIPE -b AS64476
NN = [
    2a0a:e805::/40,
    2a0a:e805:100::/40,
    2a0a:e805:300::/40,
    2a0a:e805:400::/40,
    2a0a:e805:500::/40
];

There is no module available on Ansible Galaxy to manage these objects. Each IRR has different ways of being updated. Some RIRs propose an API but some don’t. If we restrict ourselves to RIPE, ARIN and APNIC, the only common method to update objects is email updates, authenticated with a password or a GPG signature.2 Let’s write a custom Ansible module for this purpose!

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a more instructive example.

Code

The module takes a list of RPSL objects to synchronize and returns the body of an email update if a change is needed:

- name: prepare RIPE objects
  irr_sync:
    irr: RIPE
    mntner: fr-blade-1-mnt
    source: whois-ripe.txt
  register: irr

Prerequisites

The source file should be a set of objects to sync using the RPSL language. This would be the same content you would send manually by email. All objects should be managed by the same maintainer, which is also provided as a parameter.

Signing and sending the result is not the responsibility of this module. You need two additional tasks for this purpose:

- name: sign RIPE objects
  shell:
    cmd: gpg --batch --user noc@example.com --clearsign
    stdin: "{{ irr.objects }}"
  register: signed
  check_mode: false
  changed_when: false

- name: update RIPE objects by email
  mail:
    subject: "NEW: update for RIPE"
    from: noc@example.com
    to: "auto-dbm@ripe.net"
    cc: noc@example.com
    host: smtp.example.com
    port: 25
    charset: us-ascii
    body: "{{ signed.stdout }}"

You also need to authorize the PGP keys used to sign the updates by creating a key-cert object and adding it as a valid authentication method for the corresponding mntner object:

key-cert:  PGPKEY-A791AAAB
certif:    -----BEGIN PGP PUBLIC KEY BLOCK-----
certif:    
certif:    mQGNBF8TLY8BDADEwP3a6/vRhEERBIaPUAFnr23zKCNt5YhWRZyt50mKq1RmQBBY
[…]
certif:    -----END PGP PUBLIC KEY BLOCK-----
mnt-by:    fr-blade-1-mnt
source:    RIPE

mntner:    fr-blade-1-mnt
[…]
auth:      PGPKEY-A791AAAB
mnt-by:    fr-blade-1-mnt
source:    RIPE

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    irr=dict(type='str', required=True),
    mntner=dict(type='str', required=True),
    source=dict(type='path', required=True),
)

result = dict(
    changed=False,
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

Getting existing objects

To grab existing objects, we use the whois command to retrieve all the objects from the provided maintainer.

# Per-IRR variations:
# - whois server
whois = {
    'ARIN': 'rr.arin.net',
    'RIPE': 'whois.ripe.net',
    'APNIC': 'whois.apnic.net'
}
# - whois options
options = {
    'ARIN': ['-r'],
    'RIPE': ['-BrG'],
    'APNIC': ['-BrG']
}
# - objects excluded from synchronization
excluded = ["domain"]
if irr == "ARIN":
    # ARIN does not return these objects
    excluded.extend([
        "key-cert",
        "mntner",
    ])

# Grab existing objects
args = ["-h", whois[irr],
        "-s", irr,
        *options[irr],
        "-i", "mnt-by",
        module.params['mntner']]
proc = subprocess.run(["whois", *args], capture_output=True)
if proc.returncode != 0:
    raise AnsibleError(
        f"unable to query whois: {args}")
output = proc.stdout.decode('ascii')
got = extract(output, excluded)

The first part of the code setup some IRR-specific constants: the server to query, the options to provide to the whois command and the objects to exclude from synchronization. The second part invokes the whois command, requesting all objects whose mnt-by field is the provided maintainer. Here is an example of output:

$ whois -h whois.ripe.net -s RIPE -BrG -i mnt-by fr-blade-1-mnt
[…]

inet6num:       2a0a:e805:300::/40
netname:        FR-BLADE-CUSTOMERS-FR
country:        FR
geoloc:         48.8566 2.3522
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:59Z
last-modified:  2020-05-19T08:04:59Z
source:         RIPE

[…]

route6:         2a0a:e805:300::/40
descr:          Blade IPv6 - PA1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

[…]

The result is passed to the extract() function. It parses and normalizes the results into a dictionary mapping object names to objects. We store the result in the got variable.

def extract(raw, excluded):
    """Extract objects."""
    # First step, remove comments and unwanted lines
    objects = "\n".join([obj
                         for obj in raw.split("\n")
                         if not obj.startswith((
                                 "#",
                                 "%",
                         ))])
    # Second step, split objects
    objects = [RPSLObject(obj.strip())
               for obj in re.split(r"\n\n+", objects)
               if obj.strip()
               and not obj.startswith(
                   tuple(f"{x}:" for x in excluded))]
    # Last step, put objects in a dict
    objects = {repr(obj): obj
               for obj in objects}
    return objects

RPSLObject() is a class enabling normalization and comparison of objects. Look at the module code for more details.

>>> output="""
... inet6num:       2a0a:e805:300::/40
... […]
... """
>>> pprint({k: str(v) for k,v in extract(output, excluded=[])})
{'<Object:inet6num:2a0a:e805:300::/40>':
   'inet6num:       2a0a:e805:300::/40\n'
   'netname:        FR-BLADE-CUSTOMERS-FR\n'
   'country:        FR\n'
   'geoloc:         48.8566 2.3522\n'
   'admin-c:        BN2763-RIPE\n'
   'tech-c:         BN2763-RIPE\n'
   'status:         ASSIGNED\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE',
 '<Object:route6:2a0a:e805:300::/40>':
   'route6:         2a0a:e805:300::/40\n'
   'descr:          Blade IPv6 - PA1\n'
   'origin:         AS64476\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE'}

Comparing with wanted objects

Let’s build the wanted dictionary using the same structure, thanks to the extract() function we can use verbatim:

with open(module.params['source']) as f:
    source = f.read()
wanted = extract(source, excluded)

The next step is to compare got and wanted to build the diff object:

if got != wanted:
    result['changed'] = True
    if module._diff:
        result['diff'] = [
            dict(before_header=k,
                 after_header=k,
                 before=str(got.get(k, "")),
                 after=str(wanted.get(k, "")))
            for k in set((*wanted.keys(), *got.keys()))
            if k not in wanted or k not in got or wanted[k] != got[k]]

Returning updates

The module does not have a side effect. If there is a difference, we return the updates to send by email. We choose to include all wanted objects in the updates (contained in the source variable) and let the IRR ignore unmodified objects. We also append the objects to be deleted, adding a delete: attribute to each of them.

# We send all source objects and deleted objects.
deleted_mark = f"{'delete:':16}deleted by CMDB"
deleted = "\n\n".join([f"{got[k].raw}\n{deleted_mark}"
                       for k in got
                       if k not in wanted])
result['objects'] = f"{source}\n\n{deleted}"

module.exit_json(**result)

The complete code is available on GitHub. The module supports both --diff and --check flags. It does not return anything if no change is detected. It can work with APNIC, RIPE and ARIN. It is not perfect: it may not detect some changes,3 it is not able to modify objects not owned by the provided maintainer4 and some attributes cannot be modified, requiring you to manually delete and recreate the updated object.5 However, this module should automate 95% of your needs.
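
For instance, a dry run that only reports the pending changes could be invoked like this (the playbook name is a placeholder):

$ ansible-playbook sync-irr.yaml --check --diff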


  1. Other IRRs exist without being attached to a RIR. The most notable one is RADb↩︎

  2. ARIN is phasing out this method in favor of IRR-online. RIPE has an API available, but email updates are still supported and not planned to be deprecated. APNIC plans to expose an API↩︎

  3. For ARIN, we cannot query key-cert and mntner objects and therefore we cannot detect changes in them. It is also not possible to detect changes to the auth mechanisms of a mntner object. ↩︎

  4. APNIC do not assign top-level objects to the maintainer associated with the owner. ↩︎

  5. Changing the status of an inetnum object requires deleting and recreating the object. ↩︎

Worse Than Failure: The Watchdog Hydra

Ammar A uses Node to consume an HTTP API. The API uses cookies to track session information, and ensures those cookies expire, so clients need to periodically re-authenticate. How frequently?

That's an excellent question, and one that neither Ammar nor the docs for this API can answer. The documentation provides all the clarity and consistency of a religious document, which is to say you need to take it on faith that these seemingly random expirations happen for a good reason.

Ammar, not feeling particularly faithful, wrote a little "watchdog" function. Once you log in, every thirty seconds it tries to fetch a URL, hopefully keeping the session alive, or, if it times out, starts a new session.

That's what it was supposed to do, but Ammar made a mistake.

function login(cb) {
  request.post({url: '/login', form:{username: config.username, password: config.password},
    function(err, response) {
      if (err) return cb(err)
      if (response.statusCode != 200) return cb(response.statusCode);
      console.log("[+] Login succeeded");
      setInterval(watchdog, 30000);
      return cb();
  })
}

function watchdog() {
  // Random request to keep session alive
  request.get({url: '/getObj', form:{id: 1}, (err, response)=>{
    if (!err && response.statusCode == 200) return console.log("[+] Watchdog ping succeeded");
    console.error("[-] Watchdog encountered error, session may be dead");
    login((err)=>{
      if (err) console.error("[-] Failed restarting session:",err);
    })
  })
}

login sends the credentials, and if we get a 200 status back, we setInterval to schedule a call to the watchdog every 30,000 milliseconds.

In the watchdog, we fetch a URL, and if it fails, we call login. Which attempts to login, and if it succeeds, schedules the watchdog every 30,000 milliseconds.

Late one night, Ammar got a call from the API vendor's security team. "You are attacking our servers, and not only have you already cut off access to most of our customers, you've caused database corruption! There will be consequences for this!"

Ammar checked, and sure enough, his code was sending hundreds of thousands of requests per second. It didn't take him long to figure out why: requests from the watchdog were failing with a 500 error, so it called the login method. The login method had been succeeding, so another watchdog got scheduled. Thirty seconds later, that failed, as did all the previously scheduled watchdogs, which all called login again. Which, on success, scheduled a fresh round of watchdogs. Every thirty seconds, the number of scheduled calls doubled. Before long, Ammar's code was DoSing the API.
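
A minimal repair sketch, keeping the shape of the original code: hold on to a single timer handle so that re-login never stacks additional intervals (names are carried over from the snippet above; the callback is also moved out of the options object, where the original had misplaced it):

let watchdogTimer = null;

function login(cb) {
  request.post({url: '/login', form: {username: config.username, password: config.password}},
    (err, response) => {
      if (err) return cb(err);
      if (response.statusCode != 200) return cb(response.statusCode);
      console.log("[+] Login succeeded");
      if (watchdogTimer === null) {
        // Schedule the watchdog exactly once, however many times we re-login.
        watchdogTimer = setInterval(watchdog, 30000);
      }
      return cb();
    });
}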

In the aftermath, it turned out that Ammar hadn't caused any database corruption, to the contrary, it was an error in the database which caused all the watchdog calls to fail. The vendor corrupted their own database, without Ammar's help. Coupled with Ammar's careless setInterval management, that error became even more punishing.

Which raises the question: why wasn't the vendor better prepared for this? Ammar's code was bad, sure, a lurking timebomb, just waiting to go off. But any customer could have produced something like that. A hostile entity could easily have done something like that. And somehow, this vendor had nothing in place to deal with what appeared to be a denial-of-service attack, short of calling the customer responsible?

I don't know what was going on with the vendor's operations team, but I suspect that the real WTF is somewhere in there.



Planet Debian: Norbert Preining: Cinnamon for Debian – imminent possible removal from testing

Update 2020-09-30: The post has created considerable movement, and a PR by Fedora developers to rebase cjs onto the more current gjs and libmozjs78 is being tested. I have uploaded packages of cinnamon and cjs to experimental based on these patches (400 files, about 50000 lines of code touched, not what I normally like to have in a Debian patch) and would appreciate testing and feedback.

I have been more or less maintaining Cinnamon for quite some time now, but using it only sporadically due to my switch to KDE/Plasma. Currently, Cinnamon's cjs package depends on mozjs52, which is also probably going to be orphaned soon. This will precipitate a lot of changes, not least Cinnamon being removed from Debian/testing.

I have pinged upstream several times, without much success. So for now the future looks bleak for Cinnamon in Debian. If there are interested developers (Debian or not), please get in touch with me, or directly try to update cjs to mozjs78.

Planet Debian: Steinar H. Gunderson: Introducing plocate

In continued annoyance over locate's slowness, I made my own locate using posting lists (thus the name plocate) and compression, and it turns out that you hardly need any tuning at all to make it fast. Example search on a system with 26M files:

cassarossa:~/nmu/plocate> ls -lh /var/lib/mlocate  
total 1,5G                
-rw-r----- 1 root mlocate 1,1G Sep 27 06:33 mlocate.db
-rw-r----- 1 root mlocate 470M Sep 28 00:34 plocate.db

cassarossa:~/nmu/plocate> time mlocate info/mlocate
/var/lib/dpkg/info/mlocate.conffiles
/var/lib/dpkg/info/mlocate.list
/var/lib/dpkg/info/mlocate.md5sums
/var/lib/dpkg/info/mlocate.postinst
/var/lib/dpkg/info/mlocate.postrm
/var/lib/dpkg/info/mlocate.prerm
mlocate info/mlocate  20.75s user 0.14s system 99% cpu 20.915 total

cassarossa:~/nmu/plocate> time plocate info/mlocate
/var/lib/dpkg/info/mlocate.conffiles
/var/lib/dpkg/info/mlocate.list
/var/lib/dpkg/info/mlocate.md5sums
/var/lib/dpkg/info/mlocate.postinst
/var/lib/dpkg/info/mlocate.postrm
/var/lib/dpkg/info/mlocate.prerm
plocate info/mlocate  0.01s user 0.00s system 83% cpu 0.008 total

It will be slower if files are on rotating rust and not cached, but still much faster than mlocate.

It's a prototype, and freerides off of updatedb from mlocate (mlocate.db is converted to plocate.db). Case-sensitive matches only, no regexes or other funny business. Get it from https://git.sesse.net/?p=plocate (clone with --recursive so that you get the TurboPFOR submodule). GPLv2+.

Planet Debian: Enrico Zini: Coup d'état in recent Italian history

Italy during the cold war has always been in too strategic a position, and with too strong a left wing movement, not to get the CIA involved.

Here are a few stories of coup d'état and other kinds of efforts to manipulate Italian politics:

Planet Debian: Iain R. Learmonth: Multicast IPTV

For almost a decade, I’ve been very slowly making progress on a multicast IPTV system. Recently I’ve made a significant leap forward in this project, and I wanted to write a little on the topic so I’ll have something to look at when I pick this up next. I was aspiring to have a useable system by the end of today, but for a couple of reasons, it wasn’t possible.

When I started thinking about this project, it was still common to watch broadcast television. Over time the design of this system has been changing as new technologies have become available. Multicast IP is probably the only constant, although I’m now looking at IPv6 rather than IPv4.

Initially, I’d been looking at DVB-T PCI cards. USB devices have become common and are available cheaply. There are also DVB-T hats available for the Raspberry Pi. I’m now looking at a combination of Raspberry Pi hats and USB devices with one of each on a couple of Pis.

Two Raspberry Pis with DVB hats installed, TV antenna sockets showing

The Raspberry Pi devices will run DVBlast, an open-source DVB demultiplexer and streaming server. Each of the tuners will be tuned to a different transponder giving me the ability to stream any combination of available channels simultaneously. This is everything that would be needed to watch TV on PCs on the home network with VLC.

I’ve not yet worked out if Kodi will accept multicast streams as a TV source, but I do know that Tvheadend will. Tvheadend can also act as a PVR to record programmes for later playback so is useful even if the multicast streams can be viewed directly.

So how far did I get? I have built two Raspberry Pis in cases with the DVB-T hats on. They need to sit in the lounge as that’s where the antenna comes down from the roof. There’s no wired network connection in the lounge. I planned to use an OpenBSD box as a gateway, bridging the wireless network to a wired network.

Two problems quickly emerged. The first was that the wireless card I had purchased only supported 2.4GHz, not 5GHz, and I have enough noise from neighbours that the throughput rate and packet loss are unacceptable.

The second problem is that I had forgotten the problems with bridging wireless networks. To create a bridge, you need to be able to spoof the MAC addresses of wired devices on the wireless interface, but this can only be done when the wireless interface is in access point mode.

So when I come back to this, I will have to look at routing rather than bridging to work around the MAC address issue, and I’ll also be on the lookout for a cheap OpenBSD supported mini-PCIe wireless card that can do 5GHz.

Planet Debian: Joachim Breitner: Learn Haskell on CodeWorld writing Sokoban

Two years ago, I held the CIS194 minicourse on Haskell at the University of Pennsylvania. In that installment of the course, I changed the first four weeks to teach the basics of Haskell using the online Haskell environment CodeWorld, and lead the students towards implementing the game Sokoban.

As is customary for CIS194, I put my lecture notes and exercises online, and these have been used as learning resources by people from all over the world. But since I have left the University of Pennsylvania, I lost the ability to update the text, and as the CodeWorld API has evolved, some of the examples and exercises no longer work.

Some recent complaints about that, in bug reports against CodeWorld and in unrealistically flattering tweets (“Shame, this was the best Haskell course ever!!!”), motivated me to extract that material and turn it into an updated stand-alone tutorial that I can host myself.

So if you feel like learning Haskell without worrying about local installation, and while creating a reasonably fun game, head over to https://haskell-via-sokoban.nomeata.de/ and get started! Improvements can now also be contributed at https://github.com/nomeata/haskell-via-sokoban.

Credits go to Brent Yorgey, Richard Eisenberg and Noam Zilberstein, who held the previous installments of the course, and Chris Smith for creating the CodeWorld environment.

Planet Debian: Dirk Eddelbuettel: pkgKitten 0.2.0: Now with tinytest and new docs


A new release 0.2.0 of pkgKitten just hit CRAN today, about eleven months after the previous release.

This release brings support for tinytest by having pkgKitten::kitten() automagically call tinytest::puppy() if the latter package is installed (and the user did not opt out of calling it). So your newly created minimal package now also uses a wonderful yet tiny testing framework. We also added a new documentation site using the previously tweeted-about wrapper for Material for MkDocs I really dig. And last but not least we switched to BSPM-based Continuous Integration (which I wrote about yesterday in R4 #30) and fixed one bug regarding the default NAMESPACE file.

Changes in version 0.2.0 (2020-09-27)

  • Continuous Integration uses the updated BSPM-based script on Travis and with GitHub Actions (Dirk in #11 plus earlier commits).

  • A new default NAMESPACE file is now installed (Dirk in #12).

  • A package documentation website was added (Dirk in #13).

  • Call tinytest::puppy if installed and not opted out (Dirk in #14).

More details about the package are at the pkgKitten webpage, the (new) pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Debian: Andrew Cater: Final post from media team for the day - most of the ordinary images and live images have been tested

Winding down slightly – we've worked our way through most of the images and testing. Schweer tested all of the Debian Edu/Skolelinux images, for which many thanks.

Sledge, RattusRattus, Isy and I have been working pretty much solidly for 10 3/4 hours. There are still some images to build – mips, mipsel and s390x – but these are all images for which we don't really have hardware to test on.

Another good and useful day - bits and pieces done throughout. NOTE: There appear to have been some security updates since the main release this morning so, as ever, it's worth updating machines on a regular basis.

Waiting for the final images to finish building so that we can check the archive for completeness and then publish to the media mirrors. All the best until next time: thanks as ever to Sledge for his invaluable help. See you again in a couple of months in all likelihood. 

A much smaller release: some time in the next month we hope to be able to build and test an Alpha release for Bullseye. Bullseye is likely to be released somewhere round the middle of next year so we'll have additional Buster stable point releases in the meantime.

Planet DebianAndrew Cater: There's a Debian point release for Debian stable happening this weekend - 10.6

Nothing particularly new or unexpected: there's a point release happening at some point this weekend for Debian stable. Usual rules apply: if you've already got a system current and up to date, there's not much to do, but the base files version will change at some point to reflect 10.6 when you next update.

If you have media from 10.5, you may not _have_ to go and get media this weekend but it's always useful to get new media in due course. There's an updated kernel and an ABI bump. You _will_ need to reboot at some time to use the new kernel image.

This point release will contain security fixes, consequent changes etc. as usual - it is always good and useful to keep machines up to date.

Working with the CD team to eventually test, build and release CD / DVD images and media as and when files gradually become available. As ever, this may take 12-16 hours. As ever, I'll post some blog entries as we go.

Currently "sitting in Cambridge" via video link with Sledge, RattusRattus and Isy who are all involved in the testing and we'll have a great day, as ever.


Planet DebianFrançois Marier: Repairing a corrupt ext4 root partition

I ran into filesystem corruption (ext4) on the root partition of my backup server which caused it to go into read-only mode. Since it's the root partition, it's not possible to unmount it and repair it while it's running. Normally I would boot from an Ubuntu live CD / USB stick, but in this case the machine is using the mipsel architecture and so that's not an option.

Repair using a USB enclosure

I had to shut down the server and then pull the SSD drive out. I then moved it to an external USB enclosure and connected it to my laptop.

I started with an automatic filesystem repair:

fsck.ext4 -pf /dev/sde2

which failed for some reason and so I moved to an interactive repair:

fsck.ext4 -f /dev/sde2

Once all of the errors were fixed, I ran a full surface scan to update the list of bad blocks:

fsck.ext4 -c /dev/sde2

Finally, I forced another check to make sure that everything was fixed at the filesystem level:

fsck.ext4 -f /dev/sde2

Fix invalid alternate GPT

The other thing I noticed is this message in my dmesg log:

scsi 8:0:0:0: Direct-Access     KINGSTON  SA400S37120     SBFK PQ: 0 ANSI: 6
sd 8:0:0:0: Attached scsi generic sg4 type 0
sd 8:0:0:0: [sde] 234441644 512-byte logical blocks: (120 GB/112 GiB)
sd 8:0:0:0: [sde] Write Protect is off
sd 8:0:0:0: [sde] Mode Sense: 31 00 00 00
sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 8:0:0:0: [sde] Optimal transfer size 33553920 bytes
Alternate GPT is invalid, using primary GPT.
 sde: sde1 sde2

I therefore checked to see if the partition table looked fine and got the following:

$ fdisk -l /dev/sde
GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 799CD830-526B-42CE-8EE7-8C94EF098D46

Device       Start       End   Sectors   Size Type
/dev/sde1     2048   8390655   8388608     4G Linux swap
/dev/sde2  8390656 234441614 226050959 107.8G Linux filesystem

It turns out that all I had to do, since only the backup / alternate GPT partition table was corrupt and the primary one was fine, was to re-write the partition table:

$ fdisk /dev/sde

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.

Command (m for help): w

The partition table has been altered.
Syncing disks.

Run SMART checks

Since I still didn't know what caused the filesystem corruption in the first place, I decided to do one last check: SMART errors.

I couldn't do this via the USB enclosure since the SMART commands aren't forwarded to the drive and so I popped the drive back into the backup server and booted it up.

First, I checked whether any SMART errors had been reported using smartmontools:

smartctl -a /dev/sda

That didn't show any errors and so I kicked off an extended test:

smartctl -t long /dev/sda

which ran for 30 minutes and then passed without any errors.

The mystery remains unsolved.

Planet DebianDirk Eddelbuettel: #30: Easy, Reliable, Fast and Portable Linux and macOS Continuous Integration

Welcome to the 30th post in the rarified R recommendation resources series, or R4 for short. The last post introduced BSPM. In the four weeks since, we have worked some more on BSPM to bring it to the point where it is ready for use with continuous integration. Building on this, it is now used inside the run.sh script that has driven our CI use for many years (via the r-travis repo).

Which we actually use right now on three different platforms:

All three use the exact same script facilitating this, and run a ‘matrix’ over Linux and macOS. You read this right: one CI setup that is portable and which you can take to your CI provider of choice. No lock-in or tie-in. Use what works, change at will. Or run on all three if you like burning extra cycles.

This is already used by a handful of my repos as well as by at least two repos of friends also deploying r-travis. How does it work? In a nutshell, we are

  • downloading run.sh via curl and changing its mode;
  • running run.sh bootstrap which sets up the operating system defaults:
    • on Linux we use Ubuntu,
      • add two PPA repos for R itself and over 4600 r-cran-* binaries,
      • and enable BSPM to use these from install.packages()
    • on macOS we use the standard setup also used on Travis, GitHub Actions and elsewhere;
    • this provides us with fast, reliable, easy, and portable access to binaries on two OSs under dependency resolution;
  • running run.sh install_deps to install just the required Depends:, Imports: and LinkingTo: packages;
  • running run.sh tests to build the tarball and test it via R CMD check --as-cran.

There are several customizations that are possible via environment variables:

  • additional PPAs or drat repos can be added to offer even more package choice;
  • alternatively one could run run.sh install_all to also install Suggests:;
  • optionally one could run run.sh install_r pkgA pkgB ... to install packages explicitly listed;
  • optionally one could also run run.sh install_aptget r-cran-pkga r-cran-pkgb otherpackage to add more Ubuntu binaries.

We find this setup compelling. The scheme is simple: there really is just one shell script behind it which can also be downloaded and altered. The scheme is also portable as we can (as shown) rotate between CI providers. The scheme is also more flexible: in case of debugging needs one can simply run the script on a local Docker or VM instance. Lastly, the scheme moves away from single points of failure or breakage.

Currently the script uses only BSPM as I had the hunch that it would a) work and b) be compelling. Adding support for RSPM would be equally easy, but I have no immediate need to do so. Adding BioConductor installation may be next. That is easy when BioConductor uses r-release; it may be a little more challenging under r-devel, but it should work too. Stay tuned.

In the meantime, if the above sounds compelling, give run.sh from r-travis a go!

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianAndrew Cater: Chunking through the tests for various media images ...

We're working our way through some of the CD/DVD/Blu-Ray media images, doing test installs, noting failures and so on. It's repetitive work but vital if we're going to provide some assurance that folk can install from the images we make. 

There are always a few things that catch us out, and there's always something to note for next time. Schweer has joined us and is busy chasing down debian-edu/Skolelinux installs from Germany. We're getting there, one way and another, and significantly ahead of where we were last time around when the gremlins got in and delayed us. All good :)


Planet DebianAndrew Cater: There are things that money can't buy - and sensible Debian colleagues are worth gold and diamonds :)

Participating in the Debian media testing on debian-cd. One of my colleagues has just spent time sorting out an email issue, having spent a couple of hours with me the other night. I now have good, working email for the first time in years - I can't value that highly enough.

Sledge, RattusRattus, Isy and myself are all engaged in testing various CD images. At the same time, they're debugging a new application to save us from wiki problems when we do this - and we're also able to use a video link which is really handy to chat backwards and forwards and means I can sit virtually in Cambridge :)

Lots of backchat and messages flying backwards and forwards - couldn't wish for a better way to spend an afternoon with friends.



LongNowThe Language Keepers Podcast

A six-part podcast from Emergence Magazine explores the plight of four Indigenous languages spoken in California—Tolowa Dee-ni’, Karuk, Wukchumni, and Kawaiisu—among the most vulnerable in the world:

“Two centuries ago, as many as ninety languages and three hundred dialects were spoken in California; today, only half of these languages remain. In Episode One, we are introduced to the language revitalization efforts of these four Indigenous communities. Through their experiences, we examine the colonizing histories that brought Indigenous languages to the brink of disappearance and the struggle for Indigenous cultural survival in America today.”

Cryptogram Friday Squid Blogging: COVID-19 Found on Chinese Squid Packaging

I thought the virus doesn’t survive well on food packaging:

Authorities in China’s northeastern Jilin province have found the novel coronavirus on the packaging of imported squid, health authorities in the city of Fuyu said on Sunday, urging anyone who may have bought it to get themselves tested.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

,

Planet DebianRussell Coker: Bandwidth for Video Conferencing

For the Linux Users of Victoria (LUV) I’ve run video conferences on Jitsi and BBB (see my previous post about BBB vs Jitsi [1]). One issue with video conferences is the bandwidth requirements.

The place I’m hosting my video conference server has an NBN link with allegedly 40Mb/s transmission speed and 100Mb/s reception speed. My tests show that it can transmit at about 37Mb/s and receive at speeds significantly higher than that but also quite a bit lower than 100Mb/s (around 60 or 70Mb/s). For a video conference server you have a small number of sources of video and audio and a larger number of targets, as usually most people will have their microphones muted and video cameras turned off. This means that the transmission speed is the bottleneck. In every test the reception speed was well below half the transmission speed, so the tests confirmed my expectation that transmission was the only bottleneck, but the reception speed was higher than I had expected.

When we tested bandwidth use the maximum upload speed we saw was about 4MB/s (32Mb/s) with 8+ video cameras and maybe 20 people seeing some of the video (with a bit of lag). We used 3.5MB/s (28Mb/s) when we only had 6 cameras which seemed to be the maximum for good performance.

In another test run we had 4 people all sending video and the transmission speed was about 260KB/s.

I don’t know how BBB manages the small versions of video streams. It might reduce the bandwidth when the display window is smaller.

I don’t know the resolutions of the cameras. When you start sending video in BBB you are prompted for the “quality” with “medium” being default. I don’t know how different camera hardware and different choices about “quality” affect bandwidth.

These tests showed that, for the cameras we had available, on a 100/40 NBN link (the fastest Internet link in Australia that’s not really expensive) a small group of people can all send video, or a medium-sized group of people can watch video streams from a small group.

For meetings of the typical size of LUV meetings we won’t have a bandwidth problem.

There is one common case that I haven’t yet tested, where there is a single video stream that many people are watching. If 4 people all sending video use about 260KB/s of transmission bandwidth, then a single video stream should take about 65KB/s. Doing some simple calculations on those numbers implies that we could have 1 person sending video to 240 people without running out of bandwidth. I really doubt that would work, but further testing is needed.
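To make that arithmetic explicit, here is a small Python sketch of my own (not code from BBB or Jitsi); the 37Mb/s and 260KB/s figures are the measurements described above, and the per-viewer bitrate of a widely watched stream is exactly the unknown that further testing would pin down:

# Back-of-the-envelope numbers from the LUV tests above (illustrative only).
upload_kbps = 37_000 / 8       # ~37Mb/s measured transmission speed, in KB/s
four_senders_kbps = 260        # total transmission with 4 people sending video

per_stream_kbps = four_senders_kbps / 4
print(f"Per-stream estimate: {per_stream_kbps:.0f} KB/s")  # about 65 KB/s

def viewers_supportable(per_viewer_kbps: float) -> int:
    """How many viewers the link can feed if each gets an independent copy."""
    return int(upload_kbps // per_viewer_kbps)

print(viewers_supportable(per_stream_kbps))  # ~71 viewers at the full 65 KB/s
print(f"{upload_kbps / 240:.1f} KB/s")       # ~19 KB/s per viewer implied by 240

On these numbers, serving 240 viewers only works if the server can send each viewer a much smaller version of the stream, which is why I say it needs testing.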

Krebs on SecurityWho is Tech Investor John Bernard?

John Bernard, the subject of a story here last week about a self-proclaimed millionaire investor who has bilked countless tech startups, appears to be a pseudonym for John Clifton Davies, a U.K. man who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his wife on their honeymoon in India.

The Private Office of John Bernard, which advertises itself as a capital investment firm based in Switzerland, has for years been listed on multiple investment sites as the home of a millionaire who made his fortunes in the dot-com boom 20 years ago and who has oodles of cash to invest in tech startups.

But as last week’s story noted, Bernard’s investment company is a bit like a bad slot machine that never pays out. KrebsOnSecurity interviewed multiple investment brokers who all told the same story: After promising to invest millions after one or two phone calls and with little or no pushback, Bernard would insist that companies pay tens of thousands of dollars worth of due diligence fees up front.

However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money.

Neither Mr. Bernard nor anyone from his various companies responded to multiple requests for comment over the past few weeks. What’s more, virtually all of the employee profiles tied to Bernard’s office have since last week removed those firms from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether.

Sometime on Thursday John Bernard’s main website — the-private-office.ch — replaced the content on its homepage with a note saying it was closing up shop.

“We are pleased to announce that we are currently closing The Private Office fund as we have reached our intended investment level and that we now plan to focus on helping those companies we have invested into to grow and succeed,” the message reads.

As noted in last week’s story, the beauty of a scam like the one multiple investment brokers said was being run by Mr. Bernard is that companies bilked by small-time investment schemes rarely pursue legal action, mainly because the legal fees involved can quickly surpass the losses. What’s more, most victims will likely be too ashamed to come forward.

Also, John Bernard’s office typically did not reach out to investment brokers directly. Rather, he had his firm included on a list of angel investors focused on technology companies, so those seeking investments usually came to him.

Finally, multiple sources interviewed for this story said Bernard’s office offered a finder’s fee for any investment leads that brokers brought his way. While such commissions are not unusual, the amount promised — five percent of the total investment in a given firm that signed an agreement — is extremely generous. However, none of the investment brokers who spoke to KrebsOnSecurity were able to collect those fees, because Bernard’s office never actually consummated any of the deals they referred to him.

PAY NO ATTENTION TO THE EMPTY BOOKSHELVES

After last week’s story ran, KrebsOnSecurity heard from a number of other investment brokers who had near identical experiences with Bernard. Several said they at one point spoke with him via phone or Zoom conference calls, and that he had a distinctive British accent.

When questioned about why his staff was virtually all based in Ukraine when his companies were supposedly in Switzerland, Bernard replied that his wife was Ukrainian and that they were living there to be closer to her family.

One investment broker who recently got into a deal with Bernard shared a screen shot from a recent Zoom call with him. That screen shot shows Bernard bears a striking resemblance to one John Clifton Davies, a 59-year-old from Milton Keynes, a large town in Buckinghamshire, England about 50 miles (80 km) northwest of London.

John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

In 2015, Mr. Davies was convicted of stealing more than GBP 750,000 from struggling companies looking to restructure their debt. For at least seven years, Davies ran multiple scam businesses that claimed to provide insolvency consulting to distressed companies, even though he was not licensed to do so.

“After gaining the firm’s trust, he took control of their assets and would later pocket the cash intended for creditors,” according to a U.K. news report from 2015. “After snatching the cash, Davies proceeded to spend the stolen money on a life of luxury, purchasing a new upmarket home fitted with a high-tech cinema system and new kitchen.”

Davies disappeared before he was convicted of fraud in 2015. Two years before that, Davies was released from prison after being held in custody for 16 months on suspicion of murdering his new bride in 2004 on their honeymoon in India.

Davies’ former wife Colette Davies, 39, died after falling 80 feet from a viewing point at a steep gorge in the Himachal Pradesh region of India. Mr. Davies was charged with murder and fraud after he attempted to collect GBP 132,000 in her life insurance payout, but British prosecutors ultimately conceded they did not have enough evidence to convict him.

THE SWISS AND UKRAINE CONNECTIONS

While the photos above are similar, there are other clues that suggest the two identities may be the same person. A review of business records tied to Davies’ phony insolvency consulting businesses between 2007 and 2013 provides some additional pointers.

John Clifton Davies’ former listing at the official U.K. business registrar Companies House shows his company was registered at the address 26 Dean Forest Way, Broughton, Milton Keynes.

A search on that street address at 4iq.com turns up several interesting results, including a listing for senecaequities.com registered to a John Davies at the email address john888@myswissmail.ch.

A Companies House official record for Seneca Equities puts it at John Davies’ old U.K. address at 26 Dean Forest Way and lists 46-year-old Iryna Davies as a director. “Iryna” is a uniquely Ukrainian spelling of the name Irene (the Russian equivalent is typically “Irina”).

A search on John Clifton Davies and Iryna turned up this 2013 story from The Daily Mirror which says Iryna is John C. Davies’ fourth wife, and that the two were married in 2010.

A review of the Swiss company registrar for The Inside Knowledge GmbH shows an Ihor Hubskyi was named as president of the company. This name is phonetically the same as Igor Gubskyi, a Ukrainian man who was listed in the U.K.’s Companies House records as one of five officers for Seneca Equities along with Iryna Davies.

KrebsOnSecurity sought comment from both the U.K. police district that prosecuted Davies’ case and the U.K.’s National Crime Agency (NCA). Neither wished to comment on the findings. “We can neither confirm nor deny the existence of an investigation or subjects of interest,” a spokesperson for the NCA said.

Planet DebianColin Watson: Porting Launchpad to Python 3: progress report

Launchpad still requires Python 2, which in 2020 is a bit of a problem. Unlike a lot of the rest of 2020, though, there’s good reason to be optimistic about progress.

I’ve been porting Python 2 code to Python 3 on and off for a long time, from back when I was on the Ubuntu Foundations team and maintaining things like the Ubiquity installer. When I moved to Launchpad in 2015 it was certainly on my mind that this was a large body of code still stuck on Python 2. One option would have been to just accept that and leave it as it is, maybe doing more backporting work over time as support for Python 2 fades away. I’ve long been of the opinion that this would doom Launchpad to being unmaintainable in the long run, and since I genuinely love working on Launchpad - I find it an incredibly rewarding project - this wasn’t something I was willing to accept. We’re already seeing some of our important dependencies dropping support for Python 2, which is perfectly reasonable on their terms but which is starting to become a genuine obstacle to delivering important features when we need new features from newer versions of those dependencies. It also looks as though it may be difficult for us to run on Ubuntu 20.04 LTS (we’re currently on 16.04, with an upgrade to 18.04 in progress) as long as we still require Python 2, since we have some system dependencies that 20.04 no longer provides. And then there are exciting new features like type hints and async/await that we’d like to be able to use.

However, until last year there were so many blockers that even considering a port was barely conceivable. What changed in 2019 was sorting out a trifecta of core dependencies. We ported our database layer, Storm. We upgraded to modern versions of our Zope Toolkit dependencies (after contributing various fixes upstream, including some substantial changes to Zope’s test runner that we’d carried as local patches for some years). And we ported our Bazaar code hosting infrastructure to Breezy. With all that in place, a port seemed more of a realistic possibility.

Still, even with this, it was never going to be a matter of just following some standard porting advice and calling it good. Launchpad has almost a million lines of Python code in its main git tree, and around 250 dependencies of which a number are quite Launchpad-specific. In a project that size, not only is following standard porting advice an extremely time-consuming task in its own right, but just about every strange corner case is going to show up somewhere. (Did you know that StringIO.StringIO(None) and io.StringIO(None) do different things even after you account for the native string vs. Unicode text difference? How about the behaviour of .union() on a subclass of frozenset?) Launchpad’s test suite is fortunately extremely thorough, but even just starting up the test suite involves importing most of the data model code, so before you can start taking advantage of it you have to make a large fraction of the codebase be at least syntactically-correct Python 3 code and use only modules that exist in Python 3 while still working in Python 2; in a project this size that turns out to be a large effort on its own, and can be quite risky in places.
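To make those two parenthetical corner cases concrete, here is a tiny sketch of my own (not Launchpad code); the Python 2 behaviour is described in comments, since the StringIO module no longer exists on Python 3:

import io

# StringIO and None: Python 2's StringIO.StringIO coerced non-string
# arguments with str(), so StringIO.StringIO(None).getvalue() was the
# four-character string 'None'. With io.StringIO, None just means
# "start with an empty buffer".
print(repr(io.StringIO(None).getvalue()))    # ''

# union() on a frozenset subclass: the result type is easy to get wrong
# when porting, because it is not necessarily the subclass.
class TaggedSet(frozenset):
    pass

u = TaggedSet({1, 2}).union({3})
print(type(u))    # plain frozenset on Python 3, not TaggedSet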

Canonical’s product engineering teams work on a six-month cycle, but it just isn’t possible to cram this sort of thing into six months unless you do literally nothing else, and “please can we put all feature development on hold while we run to stand still” is a pretty tough sell to even the most understanding management. Fortunately, we’ve been able to grow the Launchpad team in the last year or so, and so it’s been possible to put “Python 3” on our roadmap on the understanding that we aren’t going to get all the way there in one cycle, while still being able to do other substantial feature development work as well.

So, with all that preamble, what have we done this cycle? We’ve taken a two-pronged approach. From one end, we identified 147 classes that needed to be ported away from some compatibility code in our database layer that was substantially less friendly to Python 3: we’ve ported 38 of those, so there’s clearly a fair bit more to do, but we were able to distribute this work out among the team quite effectively. From the other end, it was clear that it would be very inefficient to do general porting work when any attempt to even run the test suite would run straight into the same crashes in the same order, so I set myself a target of getting the test suite to start up, and started hacking on an enormous git branch that I never expected to try to land directly: instead, I felt free to commit just about anything that looked reasonable and moved things forward even if it was very rough, and every so often went back to tidy things up and cherry-pick individual commits into a form that included some kind of explanation and passed existing tests so that I could propose them for review.

This strategy has been dramatically more successful than anything I’ve tried before at this scale. So far this cycle, considering only Launchpad’s main git tree, we’ve landed 137 Python-3-relevant merge proposals for a total of 39552 lines of git diff output, keeping our existing tests passing along the way and deploying incrementally to production. We have about 27000 more lines of patch at varying degrees of quality to tidy up and merge. Our main development branch is only perhaps 10 or 20 more patches away from the test suite being able to start up, at which point we’ll be able to get a buildbot running so that multiple developers can work on this much more easily and see the effect of their work. With the full unlanded patch stack, about 75% of the test suite passes on Python 3! This still leaves a long tail of several thousand tests to figure out and fix, but it’s a much more incrementally-tractable kind of problem than where we started.

Finally: the funniest (to me) bug I’ve encountered in this effort was the one I encountered in the test runner and fixed in zopefoundation/zope.testrunner#106: IDs of failing tests were written to a pipe, so if you have a test suite that’s large enough and broken enough then eventually that pipe would reach its capacity and your test runner would just give up and hang. Pretty annoying when it meant an overnight test run didn’t give useful results, but also eloquent commentary of sorts.

Worse Than FailureError'd: Where to go, Next?

"In this screenshot, 'Lyckades' means 'Succeeded' and the buttons say 'Try again' and 'Cancel'. There is no 'Next' button," wrote Martin W.

 

"I have been meaning to send a card, but I just wasn't planning on using PHP's eval() to send it," Andrew wrote.

 

Martyn R. writes, "I was trying to connect to PA VPN and it seems they think that downgrading my client software will help."

 

"What the heck, Doordash? I was expecting a little more variety from you guys...but, to be honest, I gotta wonder what 'null' flavored cheesecake is like," Joshua M. writes.

 

Nicolas wrote, "NaN exclusive posts? I'm already on the inside loop. After all, I love my grandma!"

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianReproducible Builds: ARDC sponsors the Reproducible Builds project

The Reproducible Builds project is pleased to announce a donation from Amateur Radio Digital Communications (ARDC) in support of its goals. ARDC’s contribution will propel the Reproducible Builds project’s efforts in ensuring the future health, security and sustainability of our increasingly digital society.

About Amateur Radio Digital Communications (ARDC)

Amateur Radio Digital Communications (ARDC) is a non-profit that was formed to further research and experimentation with digital communications using radio, with a goal of advancing the state of the art of amateur radio and to educate radio operators in these techniques.

It does this by managing the allocation of network resources, encouraging research and experimentation with networking protocols and equipment, publishing technical articles and a number of other activities to promote the public good of amateur radio and other related fields. ARDC has recently begun to contribute funding to organisations, groups, individuals and projects towards these and related goals, and their grant to the Reproducible Builds project is part of this new initiative.

Amateur radio is an entirely volunteer activity performed by knowledgeable hobbyists who have proven their ability by passing the appropriate government examinations. No remuneration is permitted. “Ham radio,” as it is also known, has proven its value in advancements of the state of the communications arts, as well as in public service during disasters and in times of emergency.

For more information about ARDC, please see their website at ampr.org.

About the Reproducible Builds project

One of the original promises of open source software was that peer review would result in greater end-user security and stability of our digital ecosystem. However, although it is theoretically possible to inspect and build the original source code in order to avoid maliciously-inserted flaws, almost all software today is distributed in prepackaged form.

This disconnect allows third-parties to compromise systems by injecting code into seemingly secure software during the build process, as well as by manipulating copies distributed from ‘app stores’ and other package repositories.

In order to address this, ‘Reproducible builds’ are a set of software development practices, ideas and tools that create an independently-verifiable path from the original source code, all the way to what is actually running on our machines. Reproducible builds can reveal the injection of backdoors introduced by the hacking of developers’ own computers, build servers and package repositories, but can also expose where volunteers or companies have been coerced into making changes via blackmail, government order, and so on.
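The verification step itself is conceptually tiny. As a minimal sketch (with made-up file paths), assuming you already have the artifact as distributed and a rebuild you made yourself from the published source, the whole test is a bit-for-bit comparison:

import hashlib
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the package as distributed, and your own rebuild.
official = sha256sum(Path("downloads/hello_2.10-2_amd64.deb"))
rebuilt = sha256sum(Path("rebuild/hello_2.10-2_amd64.deb"))

print("reproducible" if official == rebuilt else "MISMATCH - investigate")

The hard part, of course, is everything before this comparison: making the build process deterministic enough that honest rebuilds actually do match.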

A world without reproducible builds is a world where our digital infrastructure cannot be trusted and where online communities are slower to grow, collaborate less and are increasingly fragile. Without reproducible builds, we leave space for greater encroachments on our liberties both by individuals as well as powerful, unaccountable actors such as governments, large corporations and autocratic regimes.

The Reproducible Builds project began as a project within the Debian community, but is now working with many crucial and well-known free software projects such as Coreboot, openSUSE, OpenWrt, Tails, GNU Guix, Arch Linux, Tor, and many others. It is now an entirely Linux distribution independent effort and serves as the central ‘clearing house’ for all issues related to securing build systems and software supply chains of all kinds.

For more about the Reproducible Builds project, please see their website at reproducible-builds.org.


If you are interested in ensuring the ongoing security of the software that underpins our civilisation, and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

,

Krebs on SecurityMicrosoft: Attackers Exploiting ‘ZeroLogon’ Windows Flaw

Microsoft warned on Wednesday that malicious hackers are exploiting a particularly dangerous flaw in Windows Server systems that could be used to give attackers the keys to the kingdom inside a vulnerable corporate network. Microsoft’s warning comes just days after the U.S. Department of Homeland Security issued an emergency directive instructing all federal agencies to patch the vulnerability by Sept. 21 at the latest.

DHS’s Cybersecurity and Infrastructure Security Agency (CISA) said in the directive that it expected imminent exploitation of the flaw — CVE-2020-1472 and dubbed “ZeroLogon” — because exploit code which can be used to take advantage of it was circulating online.

Last night, Microsoft’s Security Intelligence unit tweeted that the company is “tracking threat actor activity using exploits for the CVE-2020-1472 Netlogon vulnerability.”

“We have observed attacks where public exploits have been incorporated into attacker playbooks,” Microsoft said. “We strongly recommend customers to immediately apply security updates.”

Microsoft released a patch for the vulnerability in August, but it is not uncommon for businesses to delay deploying updates for days or weeks while testing to ensure the fixes do not interfere with or disrupt specific applications and software.

CVE-2020-1472 earned Microsoft’s most-dire “critical” severity rating, meaning attackers can exploit it with little or no help from users. The flaw is present in most supported versions of Windows Server, from Server 2008 through Server 2019.

The vulnerability could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

Scott Caveza, research engineering manager at security firm Tenable, said several samples of malicious .NET executables with the filename ‘SharpZeroLogon.exe’ have been uploaded to VirusTotal, a service owned by Google that scans suspicious files against dozens of antivirus products.

“Given the flaw is easily exploitable and would allow an attacker to completely take over a Windows domain, it should come as no surprise that we’re seeing attacks in the wild,” Caveza said. “Administrators should prioritize patching this flaw as soon as possible. Based on the rapid speed of exploitation already, we anticipate this flaw will be a popular choice amongst attackers and integrated into malicious campaigns.”

Kevin RuddWall Street Journal: China Backslides on Economic Reform

Published in the Wall Street Journal on 24 September 2020

 

China is the only major world economy reporting any economic growth today. It went first into Covid-19 and was first out, grinding out 3.2% growth in the most recent quarter while the U.S. shrank 9.5% and other advanced economies endured double-digit declines. High-tech monitoring, comprehensive testing and aggressive top-down containment measures enabled China to get the virus under control while others struggled. The Middle Kingdom may even deliver a modest year-over-year economic expansion in 2020.

This rebound is real, but behind the short-term numbers the economic restart is dubious. China’s growth spurt isn’t the beginning of a robust recovery but an uneven bounce fueled by infrastructure construction. Second-quarter data showed the same imbalance other nations are wrestling with: Investment contributed 5 percentage points to growth, while consumption fell, subtracting 2.3 points.

Since 2017, the China Dashboard, a joint project of Rhodium Group and the Asia Society Policy Institute, has tracked economic policy in China closely for signs of progress. Despite repeated commitments from Chinese authorities to open up and address the country’s overreliance on debt, the China Dashboard has observed delayed attempts and even backtracking on reforms. The Covid-19 outbreak offered an opportunity for Beijing to shift course and deliver on market reforms. Signals from leaders this spring hinted at fixing defunct market mechanisms. But notably, the long list of reforms promised in May was almost the same as previous lists—such as separating capital management from business management at state firms and opening up to foreign investment while increasing the “quality” of outbound investment—which were adopted by the Third Plenum in 2013. In other words, promised recent reforms didn’t happen, and nothing in the new pronouncements explains why or how this time will be different.

An honest look at the forces behind China’s growth this year shows a doubling down on state-managed solutions, not real reform. State-owned entities, or SOEs, drove China’s investment-led recovery. In the first half of 2020, according to China’s National Bureau of Statistics, fixed-asset investment grew by 2.1% among SOEs and decreased by 7.3% in the private sector. Finished product inventory for domestic private firms rose sharply in the same period—a sign of sales difficulty—while SOE inventory decreased slightly, showing the uneven nature of China’s growth.

Perhaps the most significant demonstration of mistrust in markets is the “internal circulation” program first floated by President Xi Jinping in May. On the surface, this new initiative is supposed to expand domestic demand to complement, but not replace, external demand. But Beijing is trying to boost home consumption by making it a political priority. With household demand still shrinking, expect more subsidies to producers and other government interventions, rather than measures that empower buyers. Dictating to markets and decreeing that consumption will rise aren’t the hallmarks of an advanced economy.

The pandemic has forced every country to put short-term stability above future concerns, but no other country has looming burdens like China’s. In June the State Council ordered banks to “give up” 1.5 trillion yuan (around $220 billion) in profit, the only lever of control authorities have to reduce debt costs and support growth. In August China’s economic planning agency ordered six banks to set aside a quota to fund infrastructure construction with long-term loans at below-market rates. These temporary solutions threaten the financial stability of a system that is already overleveraged, pointing to more bank defaults and restructurings ahead.

China also faces mounting costs abroad. By pumping up production over the past six months, as domestic demand stagnated, Beijing has ballooned its trade surplus, fueling an international backlash against state-driven capitalism that goes far beyond Washington. The U.S. has pulled the plug on some channels for immigration, financial flows and technology engagement. Without naming China, a European Commission policy paper in June took aim at “subsidies granted by non-EU governments to companies in the EU.” Governments and industry groups in Germany, the Netherlands, France and Italy are also pushing China to change.

Misgivings about Beijing’s direction are evident even among foreign firms that have invested trillions of dollars in China in recent decades. A July 2020 UBS Evidence Lab survey of more than 1,000 chief financial officers across sectors in the U.S., China and North Asia found that 75% of respondents are considering moving some production out of China or have already done so. Nearly half the American executives with plans to leave said they would relocate more than 60% of their firms’ China-based production. Earlier this year Beijing floated economic reforms intended to forestall an exodus of foreign companies, but nothing has come of it.

For years, the world has watched and waited for China to become more like a free-market economy, thereby reducing American security concerns. At a time of profound stress world-wide, the multiple gauges of reform we have been monitoring through the China Dashboard point in the opposite direction. China’s economic norms are diverging from, rather than converging with, the West’s. Long-promised changes detailed at the beginning of the Xi era haven’t materialized.

Though Beijing talks about “market allocation” efficiency, it isn’t guided by what mainstream economists would call market principles. The Chinese economy is instead a system of state capitalism in which the arbiter is an uncontestable political authority. That may or may not work for China, but it isn’t what liberal democracies thought they would get when they invited China to take a leading role in the world economy.

The post Wall Street Journal: China Backslides on Economic Reform appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Generic Comment

To my mind, code comments are important to explain why the code does what it does, not so much what it does. Ideally, the what is clear enough from the code that you don’t have to explain it. Today, we have no code, but we have some comments.

Chris recently was reviewing some C# code from 2016, and found a little conversation in the comments, which may or may not explain the whats or the whys. Line numbers included for, ahem, context.

4550: //A bit funky, but entirely understandable: Something that is a C# generic on the storage side gets
4551: //represented on the client side as an array. Why? A C# generic is rather specific, i.e., Java
4552: //doesn't have, for example, a Generic List class. So we're messing with arrays. No biggie.

Now, honestly, I’m more confused than I probably would have been just from the code. Presumably as we’re sending things to a client, we’re going to serialize it to an intermediate representation, so like, sure, arrays. The comment probably tells me why, but it’s hardly a necessary explanation here. And what does Java have to do with anything? And also, Java absolutely does support generics, so even if the Java trivia were relevant, it’s not accurate.

I’m not the only one who had some… questions. The comment continues:

4553: //
4554: //WTF does that have to do with anything? Who wrote this inane, stupid comment, 
4555: //but decided not to put comments on anything useful?

Not to sound judgmental, but if you’re having flamewars in your code comments, you may not be on a healthy, well-functioning team.

Then again, if this is someplace in the middle of your file, and you’re on line 4550, you probably have some other problems going on.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Krebs on SecurityGovt. Services Firm Tyler Technologies Hit in Apparent Ransomware Attack

Tyler Technologies, a Texas-based company that bills itself as the largest provider of software and technology services to the United States public sector, is battling a network intrusion that has disrupted its operations. The company declined to discuss the exact cause of the disruption, but their response so far is straight out of the playbook for responding to ransomware incidents.

Plano, Texas-based Tyler Technologies [NYSE:TYL] has some 5,300 employees and brought in revenues of more than $1 billion in 2019. It sells a broad range of services to state and local governments, including appraisal and tax software, integrated software for courts and justice agencies, enterprise financial software systems, public safety software, records/document management software solutions and transportation software solutions for schools.

Earlier today, the normal content on tylertech.com was replaced with a notice saying the site was offline. In a statement provided to KrebsOnSecurity after the markets closed central time, Tyler Tech said early this morning the company became aware that an unauthorized intruder had gained access to its phone and information technology systems.

“Upon discovery and out of an abundance of caution, we shut down points of access to external systems and immediately began investigating and remediating the problem,” Tyler’s Chief Information Officer Matt Bieri said. “We have since engaged outside IT security and forensics experts to conduct a detailed review and help us securely restore affected equipment. We are implementing enhanced monitoring systems, and we have notified law enforcement.”

“At this time and based on the evidence available to us to-date, all indications are that the impact of this incident is limited to our internal network and phone systems,” their statement continues. “We currently have no reason to believe that any client data, client servers, or hosted systems were affected.”

While it may be comforting to hear that last bit, the reality is that it is still early in the company’s investigation. Also, ransomware has moved well past just holding a victim firm’s IT systems hostage in exchange for an extortion payment: These days, ransomware purveyors will offload as much personal and financial data that they can before unleashing their malware, and then often demand a second ransom payment in exchange for a promise to delete the stolen information or to refrain from publishing it online.

Tyler Technologies declined to say how the intrusion is affecting its customers. But several readers who work in IT roles at local government systems that rely on Tyler Tech said the outage had disrupted the ability of people to pay their water bills or court payments.

“Tyler has access to a lot of these servers in cities and counties for remote support, so it was very thoughtful of them to keep everyone in the dark and possibly exposed if the attackers made off with remote support credentials while waiting for the stock market to close,” said one reader who asked to remain anonymous.

Depending on how long it takes for Tyler to recover from this incident, it could have a broad impact on the ability of many states and localities to process payments for services or provide various government resources online.

Tyler Tech has made the threat of ransomware a selling point for many of its services, using its presence on social media to promote ransomware survival guides and incident response checklists. With any luck, the company was following some of its own advice and will weather this storm quickly.

Update, Sept. 24, 6:00 p.m. ET: Tyler said in an updated statement on its website that a review of its logs, monitoring, traffic reports and cases related to utility and court payments revealed no outages with those systems. However, several sources interviewed for this story who work in tech roles at local governments which rely on Tyler Tech said they proactively severed their connections to Tyler Tech systems after learning about the intrusion. This is a fairly typical response for companies that outsource payment transactions to a third party when that third party ends up experiencing a ransomware attack, but the end result (citizens unable to make payments) is the same.

Update, 11:49 p.m. ET: Tyler is now encouraging all customers to change any passwords for any remote network access for Tyler staff, after receiving reports from some customers about suspicious logins. From a statement Tyler sent to customers:

“We apologize for the late-night communications, but we wanted to pass along important information as soon as possible. We recently learned that two clients have report suspicious logins to their systems using Tyler credentials. Although we are not aware of any malicious activity on client systems and we have not been able to investigate or determine the details regarding these logins, we wanted to let you know immediately so that you can take action to protect your systems.”

Cory DoctorowAnnouncing the Attack Surface tour

It’s been 12 years since I went on my first book tour and in the years since, I’ve met and spoken with tens of thousands of readers in hundreds of cities on five continents in support of more than a dozen books.

Now I’ve got another major book coming out: ATTACK SURFACE.

How do you tour a book during a pandemic? I think we’re still figuring that out. I’ll tell you one thing, I won’t be leaving Los Angeles this time around. Instead, my US publisher, Tor Books, has set up eight remote “Attack Surface Lectures.”

https://read.macmillan.com/torforge/cory-doctorow-virtual-lecture-series/

Each event has a different theme and different guest-hosts/co-discussants, chosen both for their expertise and their ability to discuss their subjects in ways that are fascinating and engaging.

ATTACK SURFACE is the third Little Brother book, a standalone book for adults.

It stars Masha, a young woman who is finally reckoning with the moral character of the work she’s done, developing surveillance tools to fight Iraqi insurgents and ex-Soviet democracy activists.

Masha has struggled with her work for years, compartmentalizing her qualms, rationalizing her way into worse situations.

She goes home to San Francisco and discovers her best friend, a BLM activist, is being targeted by the surveillance weapons Masha herself invented.

What follows is a Little Brother-style technothriller, full of rigorous description and extrapolation on cybersecurity, surveillance and resistance, that illuminates the tale of a tech worker grappling with their own life’s work.

Obviously, this covers a lot of ground, as is reflected in the eight nights of talks we’re announcing today:

I. Politics & Protest, Oct 13, with Eva Galperin and Ron Deibert, hosted by The Strand Bookstore

II. Cross-Medium SciFi, Oct 14, with Amber Benson and John Rogers, hosted by Brookline Booksmith

III. Intersectionality: Race, Surveillance, and Tech and Its History, Oct 15, with Malkia Cyril and Meredith Whittaker, hosted by Booksmith

IV. SciFi Genre, Oct 16, with Sarah Gailey and Chuck Wendig, hosted by Fountain Books

V. Cyberpunk and Post-Cyberpunk, Oct 19, with Bruce Sterling and Christopher Brown, hosted by Anderson’s Bookshop

VI. Tech in SciFi, Oct 20, with Ken Liu and Annalee Newitz, hosted by Interabang

VII. Little Revolutions, Oct 21, with Tochi Onyebuchi and Bethany C Morrow, hosted by Skylight Books

VIII. OpSec & Personal Cyber-Security: How Can You Be Safe?, Oct 22, with Runa Sandvik and Window Snyder, hosted by Third Place Books

Some of the events come with either a hardcover and a signed bookplate, or, with some stores, actual signed books.

(those stores’ stock is being shipped to my house, and I’m signing after each event and mailing out from here)

(yes, really)

I’ve never done anything like this and I’d be lying if I said I wasn’t nervous about it. Book tours are crazy marathons – I did 35 cities in 3 countries in 45 days for Walkaway a couple years ago – and this is an entirely different kind of thing.

But I’m also (very) excited. Revisiting Little Brother after seven years is quite an experience. ATTACK SURFACE – a book about uprisings, police-state tactics, and the digital tools as oppressors and liberators – is (unfortunately) very timely.

Having an excuse to talk about this book and its themes with you all – and with so many distinguished and brilliant guests – is going to keep me sane next month. I really hope you can make it.

Cory DoctorowPoesy the Monster Slayer

POESY THE MONSTER SLAYER is my first-ever picture book, illustrated by Matt Rockefeller and published by Firstsecond. It’s an epic tale of toy-hacking, bedtime-avoidance and monster-slaying.

https://us.macmillan.com/books/9781626723627

The book’s publication was attended by a superb and glowing review from Kirkus:

“The lights are out, and the battle begins. She knows the monsters are coming, and she has a plan. First, the werewolf appears. No problem. Poesy knows the tools to get rid of him: silver (her tiara) and light. She wins, of course, but the ruckus causes a ‘cross’ Daddy to appear at her door, telling her to stop playing with toys and go back to bed. She dutifully lets him tuck her back in. But on the next page, her eyes are open. ‘Daddy was scared of monsters. Let DADDY stay in bed.’ Poesy keeps fighting the monsters that keep appearing out of the shadows, fearlessly and with all the right tools, to the growing consternation of her parents, a Black-appearing woman and a White-appearing man, who are oblivious to the monsters and clearly fed up and exhausted but used to this routine. Poesy is irresistible with her brave, two-sided personality. Her foes don’t stand a chance (and neither do her parents). Rockefeller’s gently colored cartoon art enhances her bravery with creepily drawn night creatures and lively, expressive faces.

“This nighttime mischief is not for the faint of heart. (Picture book. 5-8)”

Kirkus joins Publishers Weekly in its praise: “Strikes a gently edgy tone, and his blow-by-blow account races to its closing spread: of two tired parents who resemble yet another monster.”

Sociological ImagesIs Knowing Half the Battle?

We seem to have been struggling with science for the past few…well…decades. The CDC just updated what we know about COVID-19 in the air, misinformation about trendy “wellness products” abounds, and then there’s the whole climate crisis.

This is an interesting pattern because many public science advocates put a lot of work into convincing us that knowing more science is the route to a more fulfilling life. Icons like Carl Sagan and Neil deGrasse Tyson, as well as modern secular movements, talk about the sense of profound wonder that comes along with learning about the world. Even GI Joe PSAs told us that knowing was half the battle.

The problem is that we can be too quick to think that knowing more will automatically make us better at addressing social problems. That claim is based on two assumptions: one, that learning things feels good and motivates us to action, and two, that knowing more about a topic makes people more likely to appreciate and respect that topic. Both can be true, but they are not always true.

The first is a little hasty. Sure, learning can feel good, but research on teaching and learning shows that it doesn’t always feel good, and I think we often risk losing students’ interest because they assume that if a topic is a struggle, they are not meant to be studying it.

The second is often wrong, because having more information does not always empower us to make better choices. Research shows us that knowing more about a topic can fuel all kinds of other biases, and partisan identification is increasingly linked with attitudes toward science.

To see this in action, I took a look at some survey data collected by the Pew Research Center in 2014. The survey had seven questions checking attitudes about science – like whether people kept up with science or felt positively about it – and six questions checking basic knowledge about things like lasers and red blood cells. I totaled up these items into separate scales so that each person has a score for how much they knew and how positively or negatively they thought about science in general. These scales are standardized, so people with average scores are closer to zero. Plotting out these scores shows us a really interesting null finding documented by other research – there isn’t a strong relationship between knowing more and feeling better about science.

The purple lines mark average scores in each scale, and the relationship between science knowledge and science attitudes is fairly flat.

Here, both people who are a full standard deviation above the mean and multiple standard deviations below the mean on their knowledge score still hold pretty average attitudes about science. We might expect an upward sloping line, where more knowledge associates with more positive attitudes, but we don’t see that. Instead, attitudes about science, whether positive or negative, get more diffuse among people who get fewer answers correct. The higher the knowledge, the more tightly attitudes cluster around average.
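If you would like to replicate this kind of check on your own survey data, the scale construction is simple; here is a sketch that assumes the items are coded as numeric columns (the file and column names are stand-ins, not Pew's actual variable names):

import pandas as pd

# Stand-in names for the seven attitude items and six knowledge items.
ATTITUDE_ITEMS = [f"att_q{i}" for i in range(1, 8)]
KNOWLEDGE_ITEMS = [f"know_q{i}" for i in range(1, 7)]

df = pd.read_csv("pew_science_2014.csv")    # hypothetical extract

# Sum each set of items into a raw scale, then standardize it so that
# the mean is 0 and one unit is one standard deviation.
for scale, items in [("attitude", ATTITUDE_ITEMS), ("knowledge", KNOWLEDGE_ITEMS)]:
    raw = df[items].sum(axis=1)
    df[scale] = (raw - raw.mean()) / raw.std()

# A correlation near zero is the flat pattern in the plot above.
print(df[["knowledge", "attitude"]].corr().round(2))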

This is an important point that bears repeating for people who want to change public policy or national debate on any science-based issue. It is helpful to inform people about these serious issues, but shifting their attitudes is not simply a matter of adding more information. To really change minds, we have to do the work to put that information into conversation with other meaning systems, emotions, and moral assumptions.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianVincent Fourmond: Tutorial: analyze Km data of CODHs

This is the first post of a series in which we will provide the readers with simple tutorial approaches to reproduce the data analysis of some of our published papers. All our data analysis is performed using QSoas. Today, we will show you how to analyze the experiments we used to characterize the behaviour of an enzyme, the Nickel-Iron CO dehydrogenase IV from Carboxydothermus hydrogenoformans. The experiments we analyze here are described in much more detail in the original publication, Domnik et al, Angewandte Chemie, 2017. The only things you need to know for now are the following:
  • the data is the evolution of the current over time given by the adsorbed enzyme;
  • at a given point, a certain amount of CO (50 µM) is injected, and its concentration decreases exponentially over time;
  • the enzyme has a Michaelis-Menten behaviour with respect to CO.
This means that we expect a response of the type: $$i(t) = \frac{i_m}{1 + \frac{K_m}{[\mathrm{CO}](t)}}$$ in which $$[\mathrm{CO}](t) = \begin{cases} 0, & \text{for } t < t_0 \\ C_0 \exp \frac{t_0 - t}{\tau}, & \text{for } t \geq t_0 \end{cases}$$
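Before firing up QSoas, it may help to see the model as plain code. Here is a minimal Ruby sketch of the equations above (illustrative only; the fitting itself happens inside QSoas below, and all the parameter values except the 50 µM injection are made-up placeholders):

# [CO](t): zero before the injection at t0, then an exponential decay
def co(t, c0, t0, tau)
  t < t0 ? 0.0 : c0 * Math.exp((t0 - t) / tau)
end

# Michaelis-Menten response to the instantaneous CO concentration
def current(t, im, km, c0, t0, tau)
  s = co(t, c0, t0, tau)
  s > 0 ? im / (1 + km / s) : 0.0
end

# e.g. im = 120 (nA), km = 0.009 (µM), 50 µM CO injected at t0 = 100 s, tau = 60 s
puts current(200.0, 120.0, 0.009, 50.0, 100.0, 60.0)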

To begin this tutorial, first download the files from the github repository (direct links: data, parameter file and ruby script). Start QSoas, go to the directory where you saved the files, load the data file, and remove spikes in the data using the following commands:

QSoas> cd
QSoas> l Km-CODH-IV.dat
QSoas> R
First fit
Then, to fit the above equation to the data, the simplest is to take advantage of the time-dependent parameters features of QSoas. Run simply:
QSoas> fit-arb im/(1+km/s) /with=s:1,exp
This simply launches the fit interface to fit the exact equations above. The im/(1+km/s) is simply the translation of the Michaelis-Menten equation above, and the /with=s:1,exp specifies that s is the sum of 1 exponential, as in the definition of \([\mathrm{CO}](t)\) above. Then, load the Km-CODH-IV.params parameter file (using the “Parameters.../Load from file” action at the bottom, or the Ctrl+L keyboard shortcut). Your window should now look like this:
To fit the data, just hit the "Fit" button ! (or Ctrl+F).
Including an offset
The fit is not bad, but not perfect. In particular, it is easy to see why: the current predicted by the fit goes to 0 at large times, but the actual current is below 0. We need therefore to include an offset to take this into consideration. Close the fit window, and re-run a fit, but now with this command:
QSoas> fit-arb im/(1+km/s)+io /with=s:1,exp
Notice the +io bit that corresponds to the addition of an offset current. Load the base parameters again, run the fit again... Your fit window should now look like:
See how the offset current is now much better taken into account. Let's talk a bit more about the parameters:
  • im is \(i_m\), the maximal current, around 120 nA (which matches the magnitude of the original current).
  • io is the offset current, around -3nA.
  • km is the \(K_m\), expressed in the same units as s_1, the first "injected" value of s (we used 50 because the injection is 50 µM CO). That means the value of \(K_m\) determined by this fit is around 9 nM ! (but see below).
  • s_t_1 is the time of the injection of CO (it was in the parameter files you loaded).
  • Finally, s_tau_1 is the time constant of departure of CO, noted \(\tau\) in the equations above.
Taking into account mass-transport limitations
However, the fit is still unsatisfactory: the predicted curve fails to reproduce the curvature at the beginning and at the end of the decrease. This is due to issues linked to mass-transport limitations, which are discussed in detail in Merrouch et al, Electrochimica Acta, 2017. In short, what you need to do is to close the fit window again, load the transport.rb Ruby file that contains the definition of the itrprt function, and re-launch the fit window using:
QSoas> ruby-run transport.rb
QSoas> fit-arb itrprt(s,km,nFAm,nFAmu)+io /with=s:1,exp
Load the parameter file again... but this time you'll have to play a bit more with the starting parameters for QSoas to find the right values when you fit. Here are some tips:
  • the curve predicted with the current parameters (use "Update" to update the display) should "look like" the data;
  • apart from io, no parameter should be negative;
  • there may be hints about the correct values in the papers...
A successful fit should look like this:
Here you are ! I hope you enjoyed analyzing our data, and that it will help you analyze yours ! Feel free to comment and ask for clarifications.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Kevin RuddABC Radio Melbourne: On the Coalition’s changes to the NBN

E&OE TRANSCRIPT
RADIO INTERVIEW
ABC RADIO MELBOURNE DRIVE
23 SEPTEMBER 2020

Topics: NBN, Energy, Gas, Dan Andrews and Victoria

Rafael Epstein
Welcome, if you’re watching us on Facebook Live. Today the Coalition government came forward and said they will fund NBN to your home and not just to your street: 6 million homes, and at least some businesses as well, at a cost of somewhere north of $5 billion. Kevin Rudd is claiming vindication. He is, of course, the former prime minister whose government came up with the NBN idea. And he’s president of the Asia Society Policy Institute in New York. At the moment, he is in Queensland. Kevin Rudd, thanks for joining us.

Kevin Rudd
Good to be with you.

Rafael Epstein
They’re doing the same thing, but better and cheaper. They say.

Kevin Rudd
Well, I think in your intro, you said ‘claiming vindication.’ All I’m interested in is the facts here. We laid out a plan for a National Broadband Network back in 2009 to take fiber optic cable to every single premises, house, small business and school in the country, other than the most remote areas where we rely on a combination of satellite and wireless services; so that was 93% of the country. And it was to be 100 megabits per second, capable of further upgrade. So that’s what we were rolling out. Whistle on to the 2013 election. What did they then do? Partly in response to pressure from Murdoch, who didn’t want new competition for his failing Fox Entertainment Network on cable…

Rafael Epstein
Disputed by Malcolm Turnbull, but continue? Yes.

Kevin Rudd
Well, that might be disputed; I’m just referring to basically unassailable facts. Murdoch did not want Netflix streamed live to people’s homes, because that would provide direct competition far earlier than he wanted for his Fox entertainment cable network, at that stage his only really profitable business in Australia. Anyway, we get to 2013. What did they do? They killed the original model, and instead of fiber optic to the premises we got fiber optic to this thing called the node, wherever that was; we still haven’t found ours in our suburb up in Sunshine Beach. And that’s where it’s languished ever since. And as a result, when COVID hit, people across the country have been screaming out because of the inadequacy of their broadband services. That’s the history, however they may choose to rewrite it.

Rafael Epstein
Well, they do. The essential argument from the government, without getting into the technical detail, is that you went too hard, too fast. You hadn’t connected nearly enough people by the time of that election, and they needed to go back and do it more slowly, and with the best technology at the time. How do you respond to that?

Kevin Rudd
What a load of codswallop. We launched the thing in 2009; I was actually participating in the initial cable rollout ceremonies. I remember doing one in northwestern Tasmania, late ’09, early 2010. And we had rolled out cable in a number of suburbs and a number of cities and regional towns right across Australia, and we deliberately started in regional areas. We did not start in the inner city areas, because we wanted to make sure that every Australian household and small business had a go.

Rafael Epstein
Is that what led to the higher cost, Kevin Rudd? Because the criticism from them is that what you were doing at the time was too expensive.

Kevin Rudd
Well, I make no apology for what it costs to lay out high speed broadband in Darwin, or in Burnie in northwestern Tasmania, or in these parts of the world where, yes, residences are further apart. The alternative was a digital divide in Australia, separating rich from poor, inner city from outer suburbs, capital cities from the country and regions. I grew up in the regions; I happen to believe that people out there are deserving of high speed broadband just as much as anybody else. And they killed this plan in 2013 and spent tens of billions of dollars on a botch job, which resulted in broadband speeds for Australia among the slowest in the OECD. How they defend that as a policy achievement beggars belief.

Rafael Epstein
Doesn’t it make sense, though, to have a demand-driven program? The minister was saying: look what we’re gonna do, we’ll roll the fiber down the street, and then the fiber will come into your home only when the customer wants it. Isn’t that a sensible way to build a network?

Kevin Rudd
So why have I, and others around the country, and members of parliament, government and opposition, received screaming petitions from people over the last, you know, four to five years, demanding their broadband services? In fact, it’s been virtually impossible for many people to get connected. The truth is this has been an absolute botch-up in the conceptual design, that is, substituting fiber optic cable with copper for that last link. Which means that however fast the data has traveled along the fiber optic to the node, once it’s constricted and strangled at the copper intersection point it then slows down to a snail’s pace, which is why everyone listening to your program has been pulling their hair out if they haven’t been able to be connected to fiber optic themselves.

Rafael Epstein
We don’t know for sure which program would have been more expensive though, do we? I mean, they’re massive projects. You only really know the costs when you reach that point in the project. I mean, how do we know yours would have been any cheaper or any better?

Kevin Rudd
We planned ours thoroughly. The first conceptual design, contrary to Liberal Party propaganda, was put together over a six month period involving technical committees, presided over by the Secretary of the Commonwealth Treasury. The Liberal Party propaganda was that the NBN was invented on the back of an envelope or the back of a restaurant menu. That’s utter bullshit. The bottom line is it was well planned, well developed by a specialist committee of experts, both technical and financial, presided over by the Secretary of the Treasury. It was then rolled out, and we intended to have it rolled out across the nation by 2018-2019. That was how it was going. We had a battle plan with which to do that. But when it gets cut off amidships by the conservatives with an entirely different delivery model, the amount of waste of taxpayers’ dollars involved in going for plan Zed rather than Plan A…

Rafael Epstein
So can I ask you to expand on that, Kevin Rudd, because you’re accusing them of waste. Why is it a waste? What have they done that you say is a waste?

Kevin Rudd
Let me just give you one example of waste. One example of waste is when you’ve got a massive rollout program underway, with multiple contractors already engaged across the country, preparing trenches and laying cable, and then a central fiat from the then Abbott-Turnbull-Morrison government says: stop. What happens is all these projects and all these contractual arrangements get thrown up into the air, many of which I imagine involve penalty payments for loss of work. Then you’ve got a whole problem of reconvening a massive rollout program, building up capacity and signing new contracts, again at loss to the taxpayer. I would invite the Auditor-General to investigate what has actually been lost in quantitative dollar terms through the mismanagement of this project.

Rafael Epstein
You’re listening to Kevin Rudd, of course, former Prime Minister of Australia and original conceiver of the NBN with his then Communications Minister Stephen Conroy. The phone number is 1300 222 774. Kevin Rudd, the government had a few announcements over the last week or two around energy, especially the announcement yesterday. There’s quite a lot of sense in there, in spending on every low-emissions technology that is available; that might include something like gas as a transition fuel. That’s a decent step forward, isn’t it, even if they don’t have the emissions tag you’d like them to have?

Kevin Rudd
Let me put it to you in these terms. The strategy we had was to give renewable energy an initial boost. We did that through the Mandatory Renewable Energy Target, which we legislated. And it worked. Remember, when we were elected to office, 4% of Australia’s electricity and energy supply came from renewables. We legislated for 20% by 2020. And guess what it is now? It’s 21%. The whole idea was to give that whole slew of renewable energy industries a starting chance, you know, a critical mass, to participate in the national energy market. What they should be doing now, if they’re going to throw taxpayer dollars at a next tranche of energy, is throw it open to competitive renewables and see who comes up with the best bid.

Rafael Epstein
They can do that. I mean, they’ve announced funding for ARENA, which a Labor government came up with, and renewables people can bid for that money. There’s nothing stopping them bidding for that money.

Kevin Rudd
Yeah, but the whole point of investing taxpayer dollars in projects of this nature is to advance the renewables cause, or to advance, shall we say, zero emissions. You see, the reason the Renewable Energy Agency was established, again by the Labor government, was to provide a financing mechanism for renewables, that is, 100% clean energy. Gas projects exist on the basis of their own financial economics out there in the marketplace, and together with coal will ultimately feel the impact of international investment houses incrementally withdrawing financial support from any carbon-based fuel; that’s happening over time. Coal’s the product leader in terms of people exiting those investments; others will follow.

Rafael Epstein
What do you make of the way the media has handled the Premier here in Victoria, Dan Andrews?

Kevin Rudd
I think the media has comprehensively failed to understand what it’s like to be in the hot seat. You see, I began my career in public life as a state official; I was Director-General of the Queensland Cabinet Office for five years. I’ve got some familiarity with how state governments go, and how they work and how they operate. And against all the measures of what’s possible and not possible, I think Dan Andrews has done a really good job. And what people who criticize him fail to answer…

Rafael Epstein
Even with the hotels problem?

Kevin Rudd
Yeah, well, look, you know, it reminds me very much of when I was Prime Minister, and the Liberals then attacked me and held me personally accountable and responsible for the tragic deaths of four young men who were killed in the rollout of the home insulation program, as a consequence of a failure of contractors to adhere to state-based occupational health and safety standards. So all I would say is, when people start waving the finger at a head of government and saying: you, Dan Andrews, are responsible for the second shift at two o’clock in the morning at hotel facility F, frankly, it doesn’t pass the common sense test, I think…

Rafael Epstein
Shouldn’t his health department be in charge of the…

Kevin Rudd
The voters of Victoria and the voters of the nation, I think are common sense people, and they would ask themselves, what would I do in the same situation? Tough call. But I think in overall terms, Andrews has handled it well.

Rafael Epstein
I know you’re a big critic of the Murdoch media. I just wonder if you see it: is there more criticism that comes from a newspaper like the Herald Sun or the Australian towards Dan Andrews, do you think?

Kevin Rudd
Well, I look at the Murdoch rags every day. And they are rags. If you don’t believe they are, go to the ABC and look on iview at the BBC series on the Murdoch dynasty and the manipulation of truth for editorial purposes. Something which I notice the ABC in Australia is always coy to probe, for fear there’ll be too much Murdoch retaliation against the ABC.

Rafael Epstein
I’m happy to probe it but there are some good journalists at the Australian and the Herald Sun aren’t there?

Kevin Rudd
Well, actually, very few of your colleagues are. That’s the truth of it. Tell me this: when was the last time Four Corners did an exposé into the abuse of media power in Australia by the Murdochs? Do you know what the answer to that one is?

Rafael Epstein
I don’t answer for Four Corners.

Kevin Rudd
No, hang on. You’re the ABC…

Rafael Epstein
Yes, but you can’t ask me about Four Corners, Kevin Rudd. I have nothing to do with it.

Kevin Rudd
The answer is 25 years. So what I’m saying is that your principal investigative program has left it as too hot to handle for a quarter of a century. So, to answer your question about Murdoch’s handling of state Labor premiers: if I was to look each day at the coverage in the Queensland Courier-Mail and the Melbourne Herald Sun, versus the Daily Telegraph in Sydney, where there’s a Liberal premier, and the Adelaide Advertiser, with a Liberal premier, against any measure it is chalk and cheese. It is 95% attack in Victoria and Queensland, and it’s 95% defense in NSW.

Rafael Epstein
Don’t you think there are some significant criticisms produced by the journalists at those newspapers that are worthy of coverage in Victoria?

Kevin Rudd
When I speak to journalists who have left, for example, the Australian newspaper, they tell me that the reason they have left is because they are sick and tired of the number of times not only has their copy been rewritten, but in fact rebalanced, often with a contrary headline put across the top of it. Frankly, we all know the joke in the Murdoch newspaper empire, and that is: the publisher has a view. It’s a deeply conservative worldview. He doesn’t like Labor governments, federal or state, that don’t doff their caps in honor of his political and financial interests. And he seeks therefore to delegitimize and remove them from office. There’s a pattern of this over 10 years.

Rafael Epstein
I’m going to ask you to give a short answer to this one if I can, because it is a new topic that I know you could talk about for half an hour. But if I can: gut feel, who’s gonna win the American election?

Kevin Rudd
I think, given I’ve lived in the US for the last five years, that there is sufficient turn-off factor in relation to Trump and his handling of coronavirus that will cause, of itself, Biden to win, in what will still be a contested election that will probably end up under appeal to the United States Senate.

Rafael Epstein
Thanks for your time.

Kevin Rudd
Good to be with you.

The post ABC Radio Melbourne: On the Coalition’s changes to the NBN appeared first on Kevin Rudd.

Planet DebianVincent Fourmond: Define a function with inline Ruby code in QSoas

QSoas can read and execute Ruby code directly, while reading command files, or even at the command prompt. For that, just write plain Ruby code inside a ruby...ruby end block. Probably the most useful possibility is to define elaborate functions directly from within QSoas, or, preferably, from within a script; this is an alternative to defining a function in a completely separate Ruby-only file using ruby-run. For instance, you can define a function for plain Michaelis-Menten kinetics with a file containing:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x)
end
ruby end

This defines the function my_func with three parameters, \(x\), \(v_m\) (vm) and \(k_m\) (km), with the formula: $$\frac{v_m}{1 + k_m/x}$$

You can then test that the function has been correctly defined running for instance:

QSoas> eval my_func(1.0,1.0,1.0)
 => 0.5
QSoas> eval my_func(1e4,1.0,1.0)
 => 0.999900009999

This yields the correct answer: the first command evaluates the function with x = 1.0, vm = 1.0 and km = 1.0. For \(x = k_m\), the result is \(v_m/2\) (here 0.5). For \(x \gg k_m\), the result is almost \(v_m\). You can use the newly defined my_func in any place you would use any Ruby code, such as in the optional argument to generate-buffer, or for arbitrary fits:

QSoas> generate-buffer 0 10 my_func(x,3.0,0.6)
QSoas> fit-arb my_func(x,vm,km)

To redefine my_func, just run the ruby code again with a new definition, such as:
ruby
def my_func(x, vm, km)
  return vm/(1 + km/x**2)
end
ruby end
The previous version is just erased, and all new uses of my_func will refer to your new definition.


See for yourself

The code for this example can be found there. Browse the qsoas-goodies github repository for more goodies !

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1. You can download its source code or buy precompiled versions for MacOS and Windows there.

Planet DebianVincent Fourmond: Release 2.2 of QSoas

The new release of QSoas is finally ready ! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets and even SVG output ! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands) ! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.

Freely available binary images for QSoas 1.0

In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Worse Than FailureCodeSOD: A Random While

A bit ago, Aurelia shared with us a backwards for loop. Code which wasn’t wrong, but was just… weird. Well, now we’ve got some code which is just plain wrong, in a number of ways.

The goal of the following Java code is to generate some number of random numbers between 1 and 9, and pass them off to a space-separated file.

StringBuffer buffer = new StringBuffer();
long count = 0;
long numResults = GetNumResults();

while (count < numResults)
{
	ArrayList<BigDecimal> numbers = new ArrayList<BigDecimal>();
	while (numbers.size() < 1)
	{
		int randInt = random.nextInt(10);
		long randLong = randInt & 0xffffffffL;
		if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0))
		{
			buffer.append(randLong);
			buffer.append(" ");
			numbers.add(new BigDecimal(randLong));
		}
		System.out.println("Random Integer: " + randInt + ", Long Integer: " + randLong);	
	}
	
	outFile.writeLine(buffer.toString()); 
	buffer = new StringBuffer();
	
	count++;
}

Pretty quickly, we get a sense that something is up with the while (count < numResults): this begs to be a for loop. It’s not wrong to while this, but it’s suspicious.

Then, right away, we create an ArrayList<BigDecimal>. There is no reasonable purpose to using a BigDecimal to hold a value between 1 and 9. But the rails don’t really start to come off until we get into the inner loop.

while (numbers.size() < 1)
	{
		int randInt = random.nextInt(10);
		long randLong = randInt & 0xffffffffL;
    if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0))
    …

This loop condition guarantees that we’ll only ever have one element in the list, which means our numbers.contains check doesn’t mean much, does it?

But honestly, that doesn’t hold a candle to the promotion of randInt to randLong, complete with an & 0xffffffffL, which guarantees… well, nothing. It’s completely unnecessary here. We might do that sort of thing when we’re bitshifting and need to mask out for certain bytes, but here it does nothing.

Also note the (randLong != 0) check. Because they use random.nextInt(10), that generates a number in the range 0–9, but we want 1 through 9, so if we draw a zero, we need to re-roll. A simple and common solution to this would be to do random.nextInt(9) + 1, but at least we now understand the purpose of the while (numbers.size() < 1) loop: we keep trying until we get a non-zero value.

And honestly, I should probably point out that they include a println to make sure that both the int and the long versions match, but how could they not?

Nothing here is necessary. None of this code has to be this way. You don’t need the StringBuffer. You don’t need nested while loops. You don’t need the ArrayList<BigDecimal>, you don’t need the conversion between integer types. You don’t need the debugging println.
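For contrast, here is one way the whole task could be written; a minimal sketch, assuming the goal really is one random digit from 1 to 9 per line (the file name and the count are stand-ins for the original’s outFile and GetNumResults()):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Random;

public class RandomDigits {
    public static void main(String[] args) throws IOException {
        long numResults = 100; // stand-in for GetNumResults()
        Random random = new Random();
        try (PrintWriter out = new PrintWriter("results.txt")) {
            for (long count = 0; count < numResults; count++) {
                // nextInt(9) yields 0-8, so adding 1 gives a uniform 1-9: no re-roll needed
                out.println((random.nextInt(9) + 1) + " ");
            }
        }
    }
}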

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianRussell Coker: Qemu (KVM) and 9P (Virtfs) Mounts

I’ve tried setting up the Qemu (in this case KVM as it uses the Qemu code in question) 9P/Virtfs filesystem for sharing files to a VM. Here is the Qemu documentation for it [1].

VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=mapped-xattr,id=zz,writeout=immediate,fmode=0600,dmode=0700,mount_tag=zz"
VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=passthrough,id=zz,writeout=immediate,mount_tag=zz"

Above are the 2 configuration snippets I tried on the server side. The first uses mapped xattrs (which means that all files will have the same UID/GID and on the host XATTRs will be used for storing the Unix permissions) and the second uses passthrough, which requires KVM to run as root and gives the same permissions on the host as in the VM. The advantages of passthrough are better performance through writing less metadata and having the same permissions in host and VM. The advantages of mapped XATTRs are running KVM/Qemu as non-root and not having a SUID file in the VM imply a SUID file on the host.
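On the guest side, a share is then mounted by its mount_tag; something like the following should work (a sketch; the msize value is just an example and, as noted below, may not do anything):

mount -t 9p -o trans=virtio,version=9p2000.L,msize=524288 zz /mnt/virtfs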

Here is the link to Bonnie++ output comparing Ext3 on a KVM block device (stored on a regular file in a BTRFS RAID-1 filesystem on 2 SSDs on the host), a NFS share from the host from the same BTRFS filesystem, and Virtfs shares of the same filesystem. The only tests that Ext3 doesn’t win are some of the latency tests; latency is based on the worst case, not the average. I expected Ext3 to win most tests, but didn’t expect it to lose any latency tests.

Here is a link to Bonnie++ output comparing just NFS and Virtfs. It’s obvious that Virtfs compares poorly, giving about half the performance on many tests. Surprisingly the only tests where Virtfs compared well to NFS were the file creation tests which I expected Virtfs with mapped XATTRs to do poorly due to the extra metadata.

Here is a link to Bonnie++ output comparing only Virtfs. The options are mapped XATTRs with default msize, mapped XATTRs with 512k msize (I don’t know if this made a difference, the results are within the range of random differences), and passthrough. There’s an obvious performance benefit in passthrough for the small file tests due to the less metadata overhead, but as creating small files isn’t a bottleneck on most systems a 20% to 30% improvement in that area probably doesn’t matter much. The result from the random seeks test in passthrough is unusual, I’ll have to do more testing on that.

SE Linux

On Virtfs the XATTR used for SE Linux labels is passed through to the host. So every label used in a VM has to be valid on the host and accessible to the context of the KVM/Qemu process. That’s not really an option so you have to use the context mount option. Having the mapped XATTR mode work for SE Linux labels is a necessary feature.

Conclusion

The msize mount option in the VM doesn’t appear to do anything and it doesn’t appear in /proc/mounts, I don’t know if it’s even supported in the kernel I’m using.

The passthrough and mapped XATTR modes give near enough performance that there doesn’t seem to be a benefit of one over the other.

NFS gives significant performance benefits over Virtfs while also using less CPU time in the VM. It has the issue of files named .nfs* hanging around if the VM crashes while programs were using deleted files. It’s also better known: ask for help with an NFS problem and you are more likely to get advice than when asking for help with a Virtfs problem.

Virtfs might be a better option for accessing databases than NFS due to its internal operation probably being a better map to Unix filesystem semantics, but running database servers on the host is probably a better choice anyway.

Virtfs generally doesn’t seem to be worth using. I had hoped for performance that was better than NFS but the only benefit I seemed to get was avoiding the .nfs* file issue.

The best options for storage for a KVM/Qemu VM seem to be Ext3 for files that are only used on one VM and for which the size won’t change suddenly or unexpectedly (particularly the root filesystem) and NFS for everything else.

,

Cryptogram Documented Death from a Ransomware Attack

A Düsseldorf woman died when a ransomware attack against a hospital forced her to be taken to a different hospital in another city.

I think this is the first documented case of a cyberattack causing a fatality. UK hospitals had to redirect patients during the 2017 WannaCry ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

Cryptogram On Executive Order 12333

Mark Jaycox has written a long article on the US Executive Order 12333: “No Oversight, No Limits, No Worries: A Primer on Presidential Spying and Executive Order 12,333“:

Abstract: Executive Order 12,333 (“EO 12333”) is a 1980s Executive Order signed by President Ronald Reagan that, among other things, establishes an overarching policy framework for the Executive Branch’s spying powers. Although electronic surveillance programs authorized by EO 12333 generally target foreign intelligence from foreign targets, its permissive targeting standards allow for the substantial collection of Americans’ communications containing little to no foreign intelligence value. This fact alone necessitates closer inspection.

This working draft conducts such an inspection by collecting and coalescing the various declassifications, disclosures, legislative investigations, and news reports concerning EO 12333 electronic surveillance programs in order to provide a better understanding of how the Executive Branch implements the order and the surveillance programs it authorizes. The Article pays particular attention to EO 12333’s designation of the National Security Agency as primarily responsible for conducting signals intelligence, which includes the installation of malware, the analysis of internet traffic traversing the telecommunications backbone, the hacking of U.S.-based companies like Yahoo and Google, and the analysis of Americans’ communications, contact lists, text messages, geolocation data, and other information.

After exploring the electronic surveillance programs authorized by EO 12333, this Article proposes reforms to the existing policy framework, including narrowing the aperture of authorized surveillance, increasing privacy standards for the retention of data, and requiring greater transparency and accountability.

Cryptogram Iranian Government Hacking Android

The New York Times wrote about a still-unreleased report from Check Point and the Miaan Group:

The reports, which were reviewed by The New York Times in advance of their release, say that the hackers have successfully infiltrated what were thought to be secure mobile phones and computers belonging to the targets, overcoming obstacles created by encrypted applications such as Telegram and, according to Miaan, even gaining access to information on WhatsApp. Both are popular messaging tools in Iran. The hackers also have created malware disguised as Android applications, the reports said.

It looks like the standard technique of getting the victim to open a document or application.

LongNowSleeping Beauties of Prehistory and the Present Day

Changmiania liaoningensis, buried while sleeping by a prehistoric volcano. Image Source.

Although the sensitive can feel it in all seasons, Autumn seems to thin the veil between the living and the dead. Writing from the dying cusp of summer and the longer bardo marking humankind’s uneasy passage into a new world age (a transit paradoxically defined by floating signifiers and eroded, fluid categories), it seems right to constellate a set of sleeping beauties, both extant and extinct, recently discovered and newly understood. Much like the “sleeping beauties” of forgotten scientific research, as described by Sidney Redner in his 02005 Physics Today paper and elaborated on by David Byrne’s Long Now Seminar, these finds provide an opportunity to contemplate time’s cycles — and the difference between the dead, and merely dormant.

We start 125 million years ago in the unbelievably fossiliferous Liaoning Province of China, one of the world’s finest lagerstätten (an area of unusually rich floral or faunal deposits — such as Canada’s famous Burgess Shale, which captured the transition into the first bloom of complex, hard-shelled, eye-bearing life; or the Solnhofen Limestone in Germany, from which flew the “Urvogel” feathered dinosaur Archaeopteryx, one of the most significant fossil finds in scientific history). Liaoning’s Lujiatun Beds just offered up a pair of perfectly-preserved herbivorous small dinosaurs, named Changmiania or “Sleeping Beauty” for how they were discovered buried in repose within their burrows by what was apparently volcanic ash, a kind of prehistoric Pompeii:

Changmiania in its eternal repose. Image Source.

There’s something especially poignant about flash-frozen remains that render ancient life in its sweet, quiet moments — a challenge to the reigning iconography of the Dawn Ages with their battling giants and their bloodied teeth. Like the Lovers of Valdaro, or the family of Pinacosaurus buried together huddling up against a sandstorm, Changmiania makes the alien past familiar and reminds us of the continuity of life through all its transmutations.

Similarly precious is the new discovery of a Titanosaurid (long-necked dinosaur) embryo from Late Cretaceous Patagonia. These creatures laid the largest eggs known in natural history — a requisite to house what would become some of the biggest animals to ever walk the land. Even so, their contents were so small and delicate it is a marvelous surprise to find the face of this pre-natal “Littlefoot” in such great shape, preserving what looks like an “egg tooth”-bearing structure that would have helped it break free:

The face of a baby dinosaur brings the ancient and brand new into stereoscopic focus. Image Source.

From ancient Argentina to the present day, we move from dinosaurs caught sleeping by fossilization to the “lizard popsicles” of modern reptiles who have managed to adapt to freezing night-time temperatures in their alpine environment. Legends tell of these outliers in the genus Liolaemus walking on the Perito Moreno glacier, a very un-lizard-like haunt; they’re regularly studied well above 13,000 feet, where they may be using an internal form of anti-freeze to live through blistering night-time temperatures. The Andes, young by montane standards, offer a heterogeneous environment that may function like a “species pump,” making Liolaemus one of the most diverse lizard genera; 272 distinct varieties have been described, some of which give live birth to help their young survive where eggs would just freeze solid.

Liolaemus in half-popsicle mode. Image Source.

Even further south and further back in time, mammal-like reptile Lystrosaurus hibernated in Antarctica 250 million years ago — a discovery made when examining the pulsing growth captured in the records of ringed bone in its extraordinary tusks, much like the growth rings of a redwood:

Lystrosaurus tusks looking remarkably like a redwood tree cross-section. Image Source.

This prehistoric beast, however, lived in a world predating woody trees. Toothless, beaked, and tusked, its positively foreign face nonetheless slept through winter just like modern bears and turtles…which might be why it managed to endure the unimaginable hardships of the Permo-Triassic extinction, which wiped out even more life than the meteor that killed the dinosaurs roughly 185 million years later. Slowing down enough to imitate the dead appears to be, poetically, a strategy for dodging the draft into their ranks. And likely living on cuisine like roots and tubers during long Antarctic nights, it may have thrived both in and on the underworld. Lystrosaurus seems to have weathered the Great Dying by playing dead and preying on the kinds of flora that could also play dead through a crisis on the surface.

Lystrosaurus in the woodless landscape of the Triassic. Image Source.

And while we’re on the subject of the blurry boundary between the worlds of life and death, Yale University researchers recently announced they managed to restore activity in some parts of a pig’s brain four hours after death. In the 18th Century when the first proto-CPR resuscitation methods were invented, humans started adding horns to coffins so the not-quite-dead could call for help upon awakening, if necessary; perhaps we’re due for more of this, now that scientists have managed to turn certain regions of a dead brain back on just by simulating blood flow with a special formula called “BrainEx.”

Revived cells in a dead pig’s brain. Image Source.

At no point did the team observe coordinated patterns of electrical activity like those now correlated with awareness, but the research may deliver new techniques for studying an intact mammal brain that lead to innovations in brain injury repair. Discoveries like these suggest that life and death, once thought a binary, is a continuum instead — a slope more shallow every year.

If life and death are ultimately separated only by the paces at which processes at the many layers of biology align, the future seems like it will be a twilight zone: a weird and wide slow Styx akin to Arthur C. Clarke’s 3001: The Final Odyssey, rife with de-extincted mammoths and revived celebrities of history, digital uploads and doubles and uncanny androids, cryogenic mummies in space ark sarcophagi — and more than a few hibernating Luddites waiting for a sign that it is safe to re-emerge.

Rondam RamblingsCan facts be racist?

Here's a fact:

“[D]ifferences in home and neighborhood quality do not fully explain the devaluation of homes in black neighborhoods. Homes of similar quality in neighborhoods with similar amenities are worth 23 percent less ($48,000 per home on average, amounting to $156 billion in cumulative losses) in majority black neighborhoods, compared to those with very few or no black residents.”

(And

Planet DebianSteve Kemp: Using a FORTH-like language for something useful

So my previous post was all about implementing a simple FORTH-like language. Of course the obvious question is then "What do you do with it"?

So I present one possible use - turtle-graphics:

\ Draw a square of the given length/width
: square
  dup dup dup dup
  4 0 do
    forward
    90 turn
  loop
;

\ pen down
1 pen

\ move to the given pixel
100 100 move

\ draw a square of width 50 pixels
50 square

\ save the result (png + gif)
save

Exciting times!

Worse Than FailureCodeSOD: A Cutt Above

We just discussed ViewState last week, and that may have inspired Russell F to share with us this little snippet.

private ConcurrentQueue<AppointmentCuttOff> lstAppointmentCuttOff
{
    get
    {
        object o = ViewState["lstAppointmentCuttOff"];
        if (o == null)
            return null;
        else
            return (ConcurrentQueue<AppointmentCuttOff>)o;
    }
    set { ViewState["lstAppointmentCuttOff"] = value; }
}

This pattern is used for pretty much all of the ViewState data that this code interacts with, and if you look at the null check, you can see that it's unnecessary. Our code checks for a null, and if we have one… returns null. The entire get block could just be: return (ConcurrentQueue<AppointmentCuttOff>)ViewState["lstAppointmentCuttOff"]

The bigger glitch here is the data type. While there is a queue of appointments, that queue is never accessed across threads, so there's no need for a threadsafe ConcurrentQueue.
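Put together, a minimal cleanup might look like the following sketch (keeping the stored type as-is so existing ViewState data still round-trips; renaming it and swapping out the ConcurrentQueue would be the next step):

private ConcurrentQueue<AppointmentCuttOff> lstAppointmentCuttOff
{
    // casting null just yields null in C#, so no explicit check is needed
    get { return (ConcurrentQueue<AppointmentCuttOff>)ViewState["lstAppointmentCuttOff"]; }
    set { ViewState["lstAppointmentCuttOff"] = value; }
}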

But I really love the name of the variable we store in ViewState. We have Hungarian notation, which calls it a lst, which isn't technically correct, though it is iterable, so maybe that's what they meant, but if the point of Hungarian notation is to make the code more clear, this isn't helping.

But what I really love is that these are CuttOffs, which just sounds like some retail brand attempting to sell uncomfortably short denim. It'll be next year's summer trend, mark my words!

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianRuss Allbery: Review: Unconquerable Sun

Review: Unconquerable Sun, by Kate Elliott

Series: Sun Chronicles #1
Publisher: Tor
Copyright: 2020
ISBN: 1-250-19725-2
Format: Kindle
Pages: 526

Sun is the daughter and heir of the mercurial Queen-Marshal Eirene, ruler of the Republic of Chaonia. Chaonia, thanks to Eirene and her ancestors, has carved out a fiercely independent position between the Yele League and the Phene Empire. Sun's father, Prince João, is one of Eirene's three consorts, all chosen for political alliances to shore up that fragile position. João is Gatoi, a civilization of feared fighters and supposed barbarians from outside Chaonia who normally ally with the Phene, which complicates Sun's position as heir. Sun attempts to compensate for that by winning battles for the Republic, following in the martial footsteps of her mother.

The publisher's summary of this book is not great (I'm a huge fan of Princess Leia, but that is... not the analogy that comes to mind), so let me try to help. This is gender-swapped Alexander the Great in space. However, it is gender-swapped Alexander the Great in space with her Companions, which means the DNA of this novel is half space opera and half heist story (without, to be clear, an actual heist, although there are some heist-like maneuvers). It's also worth mentioning that Sun, like Alexander, is not heterosexual.

The other critical thing to know before reading, mostly because it will get you through the rather painful start, is that the most interesting viewpoint character in this book is not Sun, the Alexander analogue. It's Persephone, who isn't introduced until chapter seven.

Significant disclaimer up front: I got a reasonably typical US grade school history of Alexander the Great, which means I was taught that he succeeded his father, conquered a whole swath of the middle of the Eurasian land mass at a very young age, and then died and left his empire to his four generals who promptly divided it into four uninteresting empires that no one's ever heard of, and that's why Rome is more important than Greece. (I put in that last bit to troll one specific person.)

I am therefore not the person to judge the parallels between this story and known history, or to notice any damage done to Greek pride, or to pick up on elements that might cause someone with a better grasp of that history to break out in hives. I did enough research to know that one scene in this book is lifted directly out of Alexander's life, but I'm not sure how closely the other parallels track. Yele is probably southern Greece and Phene is probably Persia, but I'm not certain even of that, and some of the details don't line up. If I had to hazard a guess, I'd say that Elliott has probably mangled history sufficiently to make it clear that this isn't intended to be a retelling, but if the historical parallels are likely to bother you, you may want to do more research before reading.

What I can say is that the space opera setup, while a bit stock, has all the necessary elements to make me happy. Unconquerable Sun is firmly in the "lost Earth" tradition: The Argosy fleet fled the now-mythical Celestial Empire and founded a new starfaring civilization without any contact with their original home. Eventually, they invented (or discovered; the characters don't know) the beacons, which allow for instantaneous travel between specific systems without the long (but still faster-than-light) journeys of the knnu drive. More recently, the beacon network has partly collapsed, cutting off the characters' known world from the civilization that was responsible for the beacons and guarded their secrets. It's a fun space opera history with lots of lost knowledge to reference and possibly discover, and with plot-enabling military choke points at the surviving beacons that link multiple worlds.

This is all background to the story, which is the ongoing war between Chaonia and the Phene Empire mixed with cutthroat political maneuvering between the great houses of the Chaonian Republic. This is where the heist aspects come in. Each house sends one representative to join the household of the Queen-Marshal and (more to the point for this story) another to join her heir. Sun has encouraged the individual and divergent talents of her Companions and their cee-cees (an unfortunate term that I suspect is short for Companion's Companion) and forged them into a good working team. A team that's about to be disrupted by the maneuverings of a rival house and the introduction of a new team member whom no one wants.

A problem with writing tactical geniuses is that they often aren't good viewpoint characters. Sun's tight third-person chapters, which make up a little less than half the book, advance the plot and provide analysis of the interpersonal dynamics of the characters, but aren't the strength of the story. That lies with the interwoven first-person sections that follow Persephone, an altogether more interesting character.

Persephone is the scion of the house that is Sun's chief rival, but she has no interest in being part of that house or its maneuverings. When the story opens, she's a cadet in a military academy for recruits from the commoners, having run away from home, hidden her identity, and won a position through the open entrance exams. She of course doesn't stay there; her past catches up with her and she gets assigned to Sun, to a great deal of mutual suspicion. She also is assigned an impeccably dressed and stunningly beautiful cee-cee, Tiana, who has her own secrets and who was my favorite character in the book.

Somewhat unusually for the space opera tradition, this is a book that knows that common people exist and have interesting lives. It's primarily focused on the ruling houses, but that focus is not exclusive and the rulers do not have a monopoly on competence. Elliott also avoids narrowing the political field too far; the Gatoi are separate from the three rival powers, and there are other groups with traditions older than the Chaonian Republic and their own agendas. Sun and her Companions are following a couple of political threads, but there is clearly more going on in this world than that single plot.

This is exactly the kind of story I think of when I think space opera. It's not doing anything that original or groundbreaking, and it's not going to make any of my lists of great literature, but it's a fun romp with satisfyingly layered bits of lore, a large-scale setting with lots of plot potential, and (once we get through the confusing and somewhat tedious process of introducing rather too many characters in short succession) some great interpersonal dynamics. It's the kind of book in which the characters are in the middle of decisive military action in an interstellar war and are also near-teenagers competing for ratings in an ad hoc reality TV show, primarily as an excuse to create tactical distractions for Sun's latest scheme. The writing is okay but not great, and the first few chapters have some serious infodumping problems, but I thoroughly enjoyed the whole book and will pre-order the sequel.

One Amazon review complained that Unconquerable Sun is not a space opera like Hyperion or Use of Weapons. That is entirely true, but if that's your standard for space opera, the world may be a disappointing place. This is a solid entry in a subgenre I love, with some great characters, sarcasm, competence porn, plenty of pages to keep turning, a few twists, and the promise of more to come. Recommended.

Followed by the not-yet-published Furious Heaven.

Rating: 7 out of 10

Cryptogram Interview with the Author of the 2000 Love Bug Virus

No real surprises, but we finally have the story.

The story he went on to tell is strikingly straightforward. De Guzman was poor, and internet access was expensive. He felt that getting online was almost akin to a human right (a view that was ahead of its time). Getting access required a password, so his solution was to steal the passwords from those who’d paid for them. Not that de Guzman regarded this as stealing: He argued that the password holder would get no less access as a result of having their password unknowingly “shared.” (Of course, his logic conveniently ignored the fact that the internet access provider would have to serve two people for the price of one.)

De Guzman came up with a solution: a password-stealing program. In hindsight, perhaps his guilt should have been obvious, because this was almost exactly the scheme he’d mapped out in a thesis proposal that had been rejected by his college the previous year.

,

Planet DebianKees Cook: security things in Linux v5.7

Previously: v5.6

Linux v5.7 was released at the end of May. Here’s my summary of various security things that caught my attention:

arm64 kernel pointer authentication
While the ARMv8.3 CPU “Pointer Authentication” (PAC) feature landed for userspace already, Kristina Martsenko has now landed PAC support in kernel mode. The current implementation uses PACIASP, which protects the saved return address, similar to the existing CONFIG_STACKPROTECTOR feature, only faster. This also paves the way to sign and check pointers stored in the heap, as a way to defeat function pointer overwrites in those memory regions too. Since the behavior is different from the traditional stack protector, Amit Daniel Kachhap added an LKDTM test for PAC as well.

BPF LSM
The kernel’s Linux Security Module (LSM) API provides a way to write security modules that have traditionally implemented various Mandatory Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are numerous and no one LSM uses them all, as some hooks are much more specialized (like those used by IMA, Yama, LoadPin, etc). There was not, however, any way to externally attach to these hooks (not even through a regular loadable kernel module) nor build fully dynamic security policy, until KP Singh landed the API for building LSM policy using BPF. With this, it is possible (for a privileged process) to write kernel LSM hooks in BPF, allowing for totally custom security policy (and reporting).

execve() deadlock refactoring
There have been a number of long-standing races in the kernel’s process launching code where ptrace could deadlock. Fixing these has been attempted several times over the last many years, but Eric W. Biederman and Bernd Edlinger decided to dive in, and successfully landed a series of refactorings, splitting up the problematic locking and refactoring its uses to remove the deadlocks. While he was at it, Eric also extended the exec_id counter to 64 bits to avoid the possibility of the counter wrapping and allowing an attacker to send arbitrary signals to processes they normally shouldn’t be able to.

slub freelist obfuscation improvements
After Silvio Cesare observed some weaknesses in the implementation of CONFIG_SLAB_FREELIST_HARDENED‘s freelist pointer content obfuscation, I improved their bit diffusion, which makes attacks require significantly more memory content exposures to defeat the obfuscation. As part of the conversation, Vitaly Nikolenko pointed out that the freelist pointer’s location made it relatively easy to target too (for either disclosures or overwrites), so I moved it away from the edge of the slab, making it harder to reach through small-sized overflows (which usually target the freelist pointer). As it turns out, there were a few assumptions in the kernel about the location of the freelist pointer, which had to also get cleaned up.

RISCV page table dumping
Following v5.6’s generic page table dumping work, Zong Li landed the RISCV page dumping code. This means it’s much easier to examine the kernel’s page table layout when running a debug kernel (built with PTDUMP_DEBUGFS), visible in /sys/kernel/debug/kernel_page_tables.

array index bounds checking
This is a pretty large area of work that touches a lot of overlapping elements (and history) in the Linux kernel. The short version is: C is bad at noticing when it uses an array index beyond the bounds of the declared array, and we need to fix that. For example, don’t do this:

int foo[5];
...
foo[8] = bar;

The long version gets complicated by the evolution of “flexible array” structure members, so we’ll pause for a moment and skim the surface of this topic. While things like CONFIG_FORTIFY_SOURCE try to catch these kinds of cases in the memcpy() and strcpy() family of functions, it doesn’t catch it in open-coded array indexing, as seen in the code above. GCC has a warning (-Warray-bounds) for these cases, but it was disabled by Linus because of all the false positives seen due to “fake” flexible array members. Before flexible arrays were standardized, GNU C supported “zero sized” array members. And before that, C code would use a 1-element array. These were all designed so that some structure could be the “header” in front of some data blob that could be addressable through the last structure member:

/* 1-element array */
struct foo {
    ...
    char contents[1];
};

/* GNU C extension: 0-element array */
struct foo {
    ...
    char contents[0];
};

/* C standard: flexible array */
struct foo {
    ...
    char contents[];
};

instance = kmalloc(sizeof(struct foo) + content_size);

Converting all the zero- and one-element array members to flexible arrays is one of Gustavo A. R. Silva’s goals, and hundreds of these changes started landing. Once fixed, -Warray-bounds can be re-enabled. Much more detail can be found in the kernel’s deprecation docs.

However, that will only catch the “visible at compile time” cases. For runtime checking, the Undefined Behavior Sanitizer has an option for adding runtime array bounds checking for catching things like this where the compiler cannot perform a static analysis of the index values:

int foo[5];
...
for (i = 0; i < some_argument; i++) {
    ...
    foo[i] = bar;
    ...
}

It was, however, not separate (via kernel Kconfig) until Elena Petrova and I split it out into CONFIG_UBSAN_BOUNDS, which is fast enough for production kernel use. With this enabled, it's now possible to instrument the kernel to catch these conditions, which seem to come up with some regularity in Wi-Fi and Bluetooth drivers for some reason. Since UBSAN (and the other Sanitizers) only WARN() by default, system owners need to set panic_on_warn=1 too if they want to defend against attacks targeting these kinds of flaws. Because of this, and to avoid bloating the kernel image with all the warning messages, I introduced CONFIG_UBSAN_TRAP which effectively turns these conditions into a BUG() without needing additional sysctl settings.
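For example, a .config fragment enabling this could look like the following (illustrative; the option names are the ones discussed above, as of v5.7):

# catch out-of-bounds array indexing at runtime
CONFIG_UBSAN=y
CONFIG_UBSAN_BOUNDS=y
# optional: trap (BUG) on detection instead of emitting a WARN
CONFIG_UBSAN_TRAP=y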

Fixing "additive" snprintf() usage
A common idiom in C for building up strings is to use sprintf()'s return value to increment a pointer into a string, and build a string with more sprintf() calls:

/* safe if strlen(foo) + 1 < sizeof(string) */
wrote  = sprintf(string, "Foo: %s\n", foo);
/* overflows if strlen(foo) + strlen(bar) > sizeof(string) */
wrote += sprintf(string + wrote, "Bar: %s\n", bar);
/* writing way beyond the end of "string" now ... */
wrote += sprintf(string + wrote, "Baz: %s\n", baz);

The risk is that if these calls eventually walk off the end of the string buffer, they will start writing into other memory and create some bad situations. Switching these to snprintf() does not, however, make anything safer, since snprintf() returns how much it would have written:

/* safe, assuming available <= sizeof(string), and for this example
 * assume strlen(foo) < sizeof(string) */
wrote  = snprintf(string, available, "Foo: %s\n", foo);
/* if (strlen(bar) > available - wrote), this is still safe since the
 * write into "string" will be truncated, but now "wrote" has been
 * incremented by how much snprintf() *would* have written, so "wrote"
 * is now larger than "available". */
wrote += snprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* string + wrote is beyond the end of string, and available - wrote wraps
 * around to a giant positive value, making the write effectively
 * unbounded. */
wrote += snprintf(string + wrote, available - wrote, "Baz: %s\n", baz);

So while the first overflowing call would be safe, the next one would be targeting beyond the end of the array and the size calculation will have wrapped around to a giant limit. Replacing this idiom with scnprintf() solves the issue because it only reports what was actually written. To this end, Takashi Iwai has been landing a bunch of scnprintf() fixes.
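For completeness, a sketch of the same sequence converted to scnprintf(), assuming string is a fixed-size array:

/* scnprintf() returns the number of characters actually written
 * (excluding the trailing NUL), so "wrote" can never advance past
 * the end of "string". */
wrote  = scnprintf(string, sizeof(string), "Foo: %s\n", foo);
wrote += scnprintf(string + wrote, sizeof(string) - wrote, "Bar: %s\n", bar);
wrote += scnprintf(string + wrote, sizeof(string) - wrote, "Baz: %s\n", baz);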

That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.8.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

Cryptogram Matt Blaze on OTP Radio Stations

Matt Blaze discusses (also here) an interesting mystery about a Cuban one-time-pad radio station, and a random number generator error that probably helped arrest a pair of Russian spies in the US.

Planet DebianAntoine Beaupré: PSA: Mailman used to harass people

It seems that Mailman instances are being abused to harass people with subscribe spam. If some random people complain to you that they "never wanted to subscribe to your mailing list", you may be a victim of that attack, even if you run the latest Mailman 2.

TL;DR: IKR! HOW DO I FIX THIS!?

Make sure you have SUBSCRIBE_FORM_SECRET set in your mailman configuration:

SECRET=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 30)
echo "SUBSCRIBE_FORM_SECRET = '$SECRET'" >> /etc/mailman/mm.cfg

This will add a magic token to all forms in the Mailman web interface, forcing the attacker to at least fetch a token before requesting a registration. There are, of course, still other ways of performing the attack, but this makes it more expensive than a single request for the attacker and keeps most of the junk out.

Other solutions

I originally deployed a different fix, using referrer checks and an IP block list:

RewriteMap hosts-deny  txt:/etc/apache2/blocklist.txt
RewriteCond ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond %{HTTP_REFERER} !^https://lists.torproject.org/$ [NC]
RewriteRule ^/cgi-bin/mailman/subscribe/ - [F]
# see also https://www.w3.org/TR/referrer-policy/#referrer-policy-origin
Header always set Referrer-Policy "origin"

I kept those restrictions in place because they keep the spammers from even hitting the Mailman CGI, which is useful to preserve our server resources. But if "they" escalate with smarter crawlers, the block list will still be useful.

You can use this query to extract the top 10 IP addresses used for subscription attempts:

awk '{ print $NF }' /var/log/mailman/subscribe | sort | uniq -c | sort -n | tail -10  | awk '{ print $2 " " $1 }'

Note that this might include email-based registration, but in our logs those are extremely rare: only two in three weeks, out of over 73,000 requests. I also use this to keep an eye on the logs:

tail -f  /var/log/mailman/subscribe /var/log/apache2/lists.torproject.org-access.log | grep -v 'GET /pipermail/'

The server-side mitigations might also be useful if you happen to run an extremely old version of Mailman, that is pre-2.1.18. That release is now over six years old, however, and part of every supported Debian release out there (all the way back to Debian 8 jessie).

Why does that attack work?

Because Mailman 2 doesn't have CSRF tokens in its forms by default, anyone can send a POST request to /mailman/subscribe/LISTNAME to have Mailman send an email to the user. In the old "Internet is for nice people" universe, that wasn't a problem: all it does is ask the victim if they want to subscribe to LISTNAME. Innocuous, right?
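Concretely, the attack is a single unauthenticated request, something like this sketch (the list name is illustrative; "email" is the field name used by Mailman's subscribe form):

curl -X POST \
     -d "email=victim@example.com" \
     https://lists.example.org/cgi-bin/mailman/subscribe/LISTNAME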

But in the brave, new, post-Eternal-September, "Internet is for stupid" universe, some assholes think it's a good idea to make a form that collects hundreds of mailing list URLs and spam them through an iframe. To see what that looks like, you can look at the rendered source code behind samedyfreeday.co.uk (not linking to avoid promoting it). That site does what is basically a distributed cross-site request forgery attack against Mailman servers.

Obviously, CSRF protection should be enabled by default in Mailman, but there you go. Hopefully this will help some folks...

(The latest Mailman 3 release doesn't suffer from such idiotic defaults and ships with proper CSRF protection out of the box.)

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 15)

Here’s part fifteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Planet DebianJonathan McDowell: Mainline Linux on the MikroTik RB3011

I upgraded my home internet connection to fibre (FTTP) last October. I’m still on an 80M/20M service, so it’s no faster than my old VDSL FTTC connection was, and as a result for a long time I continued to use my HomeHub 5A running OpenWRT. However the FTTP ONT meant I was using up an additional ethernet port on the router, and I was already short, so I ended up with a GigE switch in use as well. Also my wifi is handled by a UniFi, which takes its power via Power-over-Ethernet. That meant I had a router, a switch and a PoE injector all in close proximity. I wanted to reduce the number of devices, and ideally upgrade to something that could scale once I decide to upgrade my FTTP service speed.

Looking around I found the MikroTik RB3011UiAS-RM, which is a rack mountable device with 10 GigE ports (plus an SFP slot) and a dual core Qualcomm IPQ8064 ARM powering it. There’s 1G RAM and 128MB NAND flash, as well as a USB3 port. It also has PoE support. On paper it seemed like an ideal device. I wasn’t particularly interested in running RouterOS on it (the provided software), but that’s based on Linux and there was some work going on within OpenWRT to add support, so it seemed like a worthwhile platform to experiment with (what, you expected this to be about me buying an off the shelf device and using it with only the supplied software?). As an added bonus a friend said he had one he wasn’t using, and was happy to sell it to me for a bargain price.

RB3011 router in use

I did try out RouterOS to start with, but I didn’t find it particularly compelling. I’m comfortable configuring firewalling and routing at a Linux command line, and I run some additional services on the router like my MQTT broker, and mqtt-arp, my wifi device presence monitor. I could move things around such that they ran on the house server, but I consider them core services and as a result am happier with them on the router.

The first step was to get something booting on the router. Luckily it has an RJ45 serial console port on the back, and a reasonably featured bootloader that can manage to boot via tftp over the network. It wants an ELF binary rather than a plain kernel, but Sergey Sergeev had done the hard work of getting u-boot working for the IPQ8064, which meant I could just build normal u-boot images to try out.

Linux upstream already had basic support for a lot of the pieces I was interested in. There’s a slight fudge around AUTO_ZRELADDR because the network coprocessors want a chunk of memory at the start of RAM, but there are ongoing discussions about how to handle this cleanly that I’m hopeful will eventually mean I can drop that hack. Serial, ethernet, the QCA8337 switches (2 sets of 5 ports, tied to different GigE devices on the processor) and the internal NOR all had drivers, so it was a matter of crafting an appropriate DTB to get them working. That left a few niggles.

First, the second switch is hooked up via SGMII. It turned out the IPQ806x stmmac driver didn’t initialise the clocks in this mode correctly, and neither did the qca8k switch driver. So I needed to fix up both of those (Sergey had handled the stmmac driver, so I just had to clean up and submit his patch). Next it turned out the driver for talking to the Qualcomm firmware (SCM) had been updated in a way that broke the old method needed on the IPQ8064. Some git archaeology figured that one out and provided a solution. Ansuel Smith helpfully provided the DWC3 PHY driver for the USB port. That got me to the point I could put a Debian armhf image onto a USB stick and mount that as root, which made debugging much easier.

At this point I started to play with configuring up the device to actually act as a router. I make use of a number of VLANs on my home network, so I wanted to make sure I could support those. Turned out the stmmac driver wasn’t happy reconfiguring its MTU because the IPQ8064 driver doesn’t configure the FIFO sizes. I found what seem to be the correct values and plumbed them in. Then the qca8k driver only supported port bridging. I wanted the ability to have a trunk port to connect to the upstairs switch, while also having ports that only had a single VLAN for local devices. And I wanted the switch to handle this rather than requiring the CPU to bridge the traffic. Thankfully it’s easy to find a copy of the QCA8337 datasheet and the kernel Distributed Switch Architecture is pretty flexible, so I was able to implement the necessary support.
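With that support in place, the rest is plain VLAN-aware bridging from userspace; something like the following sketch (the port names and VLAN IDs are illustrative):

# create a VLAN-filtering bridge over the switch ports
ip link add br0 type bridge vlan_filtering 1
ip link set sw1p1 master br0          # trunk port to the upstairs switch
ip link set sw1p2 master br0          # access port for a local device
# trunk: tagged member of VLANs 10 and 20
bridge vlan add dev sw1p1 vid 10
bridge vlan add dev sw1p1 vid 20
# access: untagged member of VLAN 10 only
bridge vlan add dev sw1p2 vid 10 pvid untagged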

I stuck with Debian on the USB stick for actually putting the device into production. It makes it easier to fix things up if necessary, and the USB stick allows for a full Debian install which would be tricky on the 128M of internal NAND. That means I can use things like nftables for my firewalling, and use the standard Debian packages for things like collectd and mosquitto. Plus for debug I can fire up things like tcpdump or tshark. Which ended up being useful because when I put the device into production I started having weird IPv6 issues that turned out to be a lack of proper Ethernet multicast filter support in the IPQ806x ethernet device. The driver would try and setup the multicast filter for the IPv6 NDP related packets, but it wouldn’t actually work. The fix was to fall back to just receiving all multicast packets - this is what the vendor driver does.

Most of this work will be present once the 5.9 kernel is released - the basics are already in 5.8. The pieces I can think of that are not yet queued up are the following:

  • stmmac IPQ806x FIFO sizes. I sent out an RFC patch for these, but didn’t get any replies. I probably just need to submit this.
  • NAND. This is missing support for the QCOM ADM DMA engine. I’ve sent out the patch I found to enable this, and have had some feedback, so I’m hopeful it will get in at some point.
  • LCD. AFAICT the LCD is an ST7735 device, which has kernel support, but I haven’t spent serious effort getting the SPI configuration to work.
  • Touchscreen. Again, this seems to be a zt2046q or similar, which has a kernel driver, but the basic attempts I’ve tried don’t get any response.
  • Proper SFP functionality. The IPQ806x has a PCS module, but the stmmac driver doesn’t have an easy way to plumb this in. I have ideas about how to get it working properly (and it can be hacked up with a fixed link config) but it’s not been a high priority.
  • Device tree additions. Some of the later bits I’ve enabled aren’t yet in the mainline RB3011 DTB. I’ll submit a patch for that at some point.

Overall I consider the device a success, and it’s been entertaining getting it working properly. I’m running a mostly mainline kernel, it’s handling my house traffic without breaking a sweat, and the fact it’s running Debian makes it nice and easy to throw more things on it as I desire. However it turned out the RB3011 isn’t as perfect a device as I’d hoped. The PoE support is passive, and the UniFi wants 802.3af. So I was going to end up with 2 devices. As it happened I picked up a cheap D-Link DGS-1210-10P switch, which provides the PoE support as well as some additional switch ports. Plus it runs Linux, so more on that later…

Google AdsenseA guide to common AdSense policy questions

A guide to understanding digital advertising policies and resolving policy violations for AdSense.

Sociological ImagesSurvivors or Victims?

The #MeToo movement that began in 2017 has reignited a long debate about how to name people who have had traumatic experiences. Do we call individuals who have experienced war, cancer, crime, or sexual violence “victims”? Or should we call them “survivors,” as recent activists like #MeToo founder Tarana Burke have advocated?

Strong arguments can be raised for both sides. In the sexual violence debate, advocates of “survivor” argue the term places women at the center of their own narrative of recovery and growth. Defenders of victim language, meanwhile, argue that victim better describes the harm and seriousness of violence against women and identifies the source of violence in systemic misogyny and cultures of patriarchy.

Unfortunately, while there has been much debate about the use of these terms, there has been little documentation of how service and advocacy organizations that work with individuals who have experienced trauma actually use these terms. Understanding the use of survivor and victim is important because it tells us what these terms mean in practice and where the barriers to change are.

We sought to remedy this problem in a recent paper published in Social Currents.  We used data from nonprofit mission statements to track language change among 3,756 nonprofits that once talked about victims in the 1990s.  We found, in general, that relatively few organizations adopted survivor as a way to talk about trauma even as some organizations have moved away from talking about victims.  However, we also found that, increasingly, organizations that focus on issues related to women tend to use victim and survivor interchangeably. In contrast, organizations that do not work with women appear to be moving away from both terms.

These findings contradict the way we usually think about “survivor” and “victim” as opposing terms. Does this mean that survivor and victim are becoming the “extremely reduced form” through which women are able to enter the public sphere? Or does it mean that feminist service providers are avoiding binary thinking? These questions, as well as questions about the strategic, linguistic, and contextual reasons that organizations choose victim- or survivor-based language, give advocates and scholars of language plenty to re-examine.

Andrew Messamore is a PhD student in the Department of Sociology at the University of Texas at Austin. Andrew studies changing modes of local organizing at work and in neighborhoods, and how the ways people associate shape community, public discourse, and economic inequality in the United States.

Pamela Paxton is the Linda K. George and John Wilson Professor of Sociology at The University of Texas at Austin. With Melanie Hughes and Tiffany Barnes, she is the co-author of the 2020 book, Women, Politics, and Power: A Global Perspective.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: Exceptional Standards Compliance

When we're laying out code standards and policies, we are, in many ways, relying on "policing by consent". We are trying to establish standards for behavior among our developers, but we can only do this with their consent. This means our standards have to have clear value, have to be applied fairly and equally. The systems we build to enforce those standards are meant to reduce conflict and de-escalate disagreements, not create them.

But that doesn't mean there won't always be developers who resist following the agreed upon standards. Take, for example, Daniel's co-worker. Their CI process also runs a static analysis step against their C# code, which lets them enforce a variety of coding standards.

One of those standards is: "Catch specific exceptions. Don't catch the generic Exception type unless explicitly necessary." If it is explicitly necessary, their CI system attaches "Comments" (not code comments) to the commit, so all you need to do is click the "resolve" button and provide a brief explanation of why it was necessary.

This wouldn't be an… exceptional standard. Specific is always better than vague, and in this case, the rule isn't hard and fast: you're allowed to violate it if you can explain why.

But explaining yourself sounds like a lot of work. Wouldn't it be easier to try and fool the static analysis tool?

try
{
    // ...
}
catch (Exception ex) when (ex is Exception exception)
{
    // ...
}

C#'s catch block supports a when operator, which is meant to filter based on properties of the exception. The ex is Exception exception is a pattern match and also a cast: it's true if the type of ex is Exception, and if so, it casts ex to Exception and stores the result in exception.

Or to word it another way, we catch the exception if it is of the type Exception but only when the type is Exception in which case we cast the exception which we know is an Exception to Exception and store it in exception, and I take exception to all of that.
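For contrast, here is a sketch of how a when filter is meant to be used: catch a specific type and filter on a property of the exception (the method names are illustrative placeholders):

// Catch a *specific* exception type, and use "when" to filter on
// a property of the exception rather than restating its type.
try
{
    SubmitOrder();
}
catch (HttpRequestException ex) when (ex.Message.Contains("timeout"))
{
    RetryLater(ex);
}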

Presumably, they'll change the rule in the CI system to exclude these, but hopefully they'll also have a talk with the developer responsible about the purpose of standards. Maybe they'll get some "standards by consent", or maybe somebody'll be looking for a new job.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianRuss Allbery: Review: Lower Ed

Review: Lower Ed, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2017
Printing: 2018
ISBN: 1-62097-472-X
Format: Kindle
Pages: 217

Lower Ed (subtitled The Troubling Rise of For-Profit Colleges in the New Economy) is the first book by sociologist Tressie McMillan Cottom. (I previously reviewed her second book, the excellent essay collection Thick.) It is a deep look at the sociology of for-profit higher education in the United States based on interviews with students and executives, analysis of Wall Street filings, tests of the admissions process, and her own personal experiences working for two of the schools. One of the questions that McMillan Cottom tries to answer is why students choose to enroll in these institutions, particularly the newer type of institution funded by federal student loans and notorious for being more expensive and less valuable than non-profit colleges and universities.

I was hesitant to read this book because I find for-profit schools depressing. I grew up with the ubiquitous commercials, watched the backlash develop, and have a strongly negative impression of the industry, partly influenced by having worked in traditional non-profit higher education for two decades. The prevailing opinion in my social group is that they're a con job. I was half-expecting a reinforcement of that opinion by example, and I don't like reading infuriating stories about people being defrauded.

I need not have worried. This is not that sort of book (nor, in retrospect, do I think McMillan Cottom would approach a topic from that angle). Sociology is broader than reporting. Lower Ed positions for-profit colleges within a larger social structure of education, credentialing, and changes in workplace expectations; takes a deep look at why they are attractive to their students; and humanizes and complicates the motives and incentives of everyone involved, including administrators and employees of for-profit colleges as well as the students. McMillan Cottom does of course talk about the profit motive and the deceptions surrounding that, but the context is less that of fraud that people are unable to see through and more a balancing of the drawbacks of a set of poor choices embedded in institutional failures.

One of my metrics for a good non-fiction book is whether it introduces me to a new idea that changes how I analyze the world. Lower Ed does that twice.

The first idea is the view of higher education through the lens of risk shifting. It used to be common for employers to hire people without prior job-specific training and do the training in-house, possibly through an apprenticeship structure. More notably, once one was employed by a particular company, the company routinely arranged or provided ongoing training. This went hand-in-hand with a workplace culture of long tenure, internal promotion, attempts to avoid layoffs, and some degree of mutual loyalty. Companies expected to invest significantly in an employee over their career and thus also had an incentive to retain that employee rather than train someone for a competitor.

However, from a purely financial perspective, this is a risk and an inefficiency, similar to the risk of carrying a large inventory of parts and components. Companies have responded to investor-driven focus on profits and efficiency by reducing overhead and shifting risk. This leads to the lean supply chain, where no one pays for parts to sit around in warehouses and companies aren't caught with large stockpiles of now-useless components, but which is more sensitive to any disruption (such as from a global pandemic). And, for employment, it leads to a desire to hire pre-trained workers, retain only enough workers to do the current amount of work, and replace them with new workers who already have appropriate training rather than retrain them.

The effect of the corporate decision to only hire pre-trained employees is to shift the risk and expense of training from the company to the prospective employee. The individual has to seek out training at their own expense in the hope (not guarantee) that at the conclusion of that training they will get or retain a job. People therefore turn to higher education to both provide that training and to help them decide what type of training will eventually be valuable. This has a long history with certain professional fields (doctors and lawyers, for example), but the requirements for completing training in those fields are relatively clear (a professional license to practice) and the compensation reflects the risk. What's new is the shift of training risk to the individual in more mundane jobs, without any corresponding increase in compensation.

This, McMillan Cottom explains, is the background for the growth in demand for higher education in general and the type of education offered by for-profit colleges in particular. Workers who in previous eras would be trained by their employers are now responsible for their own training. That training is no longer judged by the standards of a specific workplace, but is instead evaluated by a hiring process that expects constant job-shifting. This leads to increased demand by both workers and employers for credentials: some simple-to-check certificate of completion of training that says that this person has the skills to immediately start doing some job. It also leads to a demand for more flexible class hours, since the student is now often someone older with a job and a family to balance. Their ongoing training used to be considered a cost of business and happen during their work hours; now it is something they have to fit around the contours of their life because their employer has shifted that risk to them.

The risk-shifting frame makes sense of the "investment" language so common in for-profit education. In this job economy, education as investment is not a weird metaphor for the classic benefits of a liberal arts education: broadened perspective, deeper grounding in philosophy and ethics, or heightened aesthetic appreciation. It's an investment in the literal financial sense; it is money that you spend now in order to get a financial benefit (a job) in the future. People have to invest in their own training because employers are no longer doing so, but still require the outcome of that investment. And, worse, it's primarily a station-keeping investment. Rather than an optional expenditure that could reap greater benefits later, it's a mandatory expenditure to prevent, at best, stagnation in a job paying poverty wages, and at worst the disaster of unemployment.

This explains renewed demand for higher education, but why for-profit colleges? We know they cost more and have a worse reputation (and therefore their credentials have less value) than traditional non-profit colleges. Flexible hours and class scheduling explains some of this but not all of it. That leads to the second perspective-shifting idea I got from Lower Ed: for-profit colleges are very good at what they focus time and resources on, and they focus on enrolling students.

It is hard to enroll in a university! More precisely, enrolling in a university requires bureaucracy navigation skills, and those skills are class-coded. The people who need them the most are the least likely to have them.

Universities do not reach out to you, nor do they guide you through the process. You have to go to them and discover how to apply, something that is often made harder by the confusing state of many university web sites. The language and process is opaque unless other people in your family have experience with universities and can explain it. There might be someone you can reach on the phone to ask questions, but they're highly unlikely to proactively guide you through the remaining steps. It's your responsibility to understand deadlines, timing, and sequence of operations, and if you miss any of the steps (due to, for example, the overscheduled life of someone in need of better education for better job prospects), the penalty in time and sometimes money can be substantial. And admission is just the start; navigating financial aid, which most students will need, is an order of magnitude more daunting. Community colleges are somewhat easier (and certainly cheaper) than universities, but still have similar obstacles (and often even worse web sites).

It's easy for people like me, who have long professional expertise with bureaucracies, family experience with higher education, and a support network of people to nag me about deadlines, to underestimate this. But the application experience at a for-profit college is entirely different in ways far more profound than I had realized. McMillan Cottom documents this in detail from her own experience working for two different for-profit colleges and from an experiment where she indicated interest in multiple for-profit colleges and then stopped responding before signing admission paperwork. A for-profit college is fully invested in helping a student both apply and get financial aid, devotes someone to helping them through that process, does not expect them to understand how to navigate bureaucracies or decipher forms on their own, does not punish unexpected delays or missed appointments, and goes to considerable lengths to try to keep anyone from falling out of the process before they are enrolled. They do not expect their students to already have the skills that one learns from working in white-collar jobs or from being surrounded by people who do. They provide the kind of support that an educational institution should provide to people who, by definition, don't understand something and need to learn.

Reading about this was infuriating. Obviously, this effort to help people enroll is largely for predatory reasons. For-profit schools make their money off federal loans and they don't get that money unless they can get someone to enroll and fill out financial paperwork (and to some extent keep them enrolled), so admissions is their cash cow and they act accordingly. But that's not why I found it infuriating; that's just predictable capitalism. What I think is inexcusable is that nothing they do is that difficult. We could be doing the same thing for prospective community college students but have made the societal choice not to. We believe that education is valuable, we constantly advocate that people get more job training and higher education, and yet we demand prospective students navigate an unnecessarily baroque and confusing application process with very little help, and then stereotype and blame them for failing to do so.

This admission support is not a question of resources. For-profit colleges are funded almost entirely by federally-guaranteed student loans. We are paying them to help people apply. It is, in McMillan Cottom's term, a negative social insurance program. Rather than buffering people against the negative effects of risk-shifting of employers by helping them into the least-expensive and most-effective training programs (non-profit community colleges and universities), we are spending tax dollars to enrich the shareholders of for-profit colleges while underfunding the alternatives. We are choosing to create a gap that routes government support to the institution that provides worse training at higher cost but is very good at helping people apply. It's as if the unemployment system required one to use payday lenders to get one's unemployment check.

There is more in this book I want to talk about, but this review is already long enough. Suffice it to say that McMillan Cottom's analysis does not stop with market forces and the admission process, and the parts of her analysis that touch on my own personal experience as someone with a somewhat unusual college path ring very true. Speaking as a former community college student, the discussion of class credit transfer policies and the way that institutional prestige gatekeeping and the desire to push back against low-quality instruction becomes a trap that keeps students in the for-profit system deserves another review this length. So do the implications of risk-shifting and credentialism on the morality of "cheating" on schoolwork.

As one would expect from the author of the essay "Thick" about bringing context to sociology, Lower Ed is personal and grounded. McMillan Cottom doesn't shy away from including her own experiences and being explicit about her sources and research. This is backed up by one of the best methodological notes sections I've seen in a book. One of the things I love about McMillan Cottom's writing is that it's solidly academic, not in the sense of being opaque or full of jargon (the text can be a bit dense, but I rarely found it hard to follow), but in the sense of being clear about the sources of knowledge and her methods of extrapolation and analysis. She brings her receipts in a refreshingly concrete way.

I do have a few caveats. First, I had trouble following a structure and line of reasoning through the whole book. Each individual point is meticulously argued and supported, but they are not always organized into a clear progression or framework. That made Lower Ed feel at times like a collection of high-quality but somewhat unrelated observations about credentials, higher education, for-profit colleges, their student populations, their business models, and their relationships with non-profit schools.

Second, there are some related topics that McMillan Cottom touches on but doesn't expand sufficiently for me to be certain I understood them. One of the big ones is credentialism. This is apparently a hot topic in sociology and is obviously important to this book, but it's referenced somewhat glancingly and was not satisfyingly defined (at least for me). There are a few similar places where I almost but didn't quite follow a line of reasoning because the book structure didn't lay enough foundation.

Caveats aside, though, this was meaty, thought-provoking, and eye-opening, and I'm very glad that I read it. This is a topic that I care more about than most people, but if you have watched for-profit colleges with distaste but without deep understanding, I highly recommend Lower Ed.

Rating: 8 out of 10

,

Planet DebianEnrico Zini: Relationships links

The Bouletcorp » Love & Dragons is a strip I like about fairytale relationships.

There are a lot of mainstream expectations about relationships. These links challenge a few of them:

More about emotional work, some more links to follow a previous links post:

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.2: New upstream, awesome new stopwatch

Following up on the initial RcppSpdlog 0.0.1 release earlier this week, we are pumped to announce release 0.0.2. It contains upstream version 1.8.0 for spdlog which utilizes (among other things) a new feature in the embedded fmt library, namely completely automated formatting of high resolution time stamps which allows for gems like this (taken from this file in the package and edited down for brevity):
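(A minimal sketch of the idiom, assuming spdlog 1.8.0’s spdlog::stopwatch; the snippet shipped in the package is a little longer.)

#include <spdlog/spdlog.h>
#include <spdlog/stopwatch.h>

void timing_example() {
    spdlog::stopwatch sw;             // starts timing on construction
    // ... do some work ...
    spdlog::info("Elapsed: {}", sw);  // fmt formats the elapsed seconds
    // ... do some more work ...
    spdlog::info("Elapsed: {}", sw);  // same stopwatch, still running
}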

What we see is all there is: one instantiates a stopwatch object, and simply references it. The rest, as they say, is magic. And we get tic / toc alike behaviour in modern C++ at essentially no cost to us (as code authors). So nice. Output from the (included in the package) function exampleRsink() shows the same mechanism at work: the two simple logging instances come 10 and 18 microseconds into the call.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.2 (2020-09-17)

  • Upgraded to upstream release 1.8.0

  • Switched Travis CI to using BSPM, also test on macOS

  • Added 'stopwatch' use to main R sink example

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppSpdlog page.

The only sour grapes, once more, are over the CRAN processing. And just as 0.0.1 was delayed for no good reason for three weeks, 0.0.2 was delayed by three days just because … well that is how CRAN rules sometimes. I’d be even more mad if I had an alternative but I don’t. We remain grateful for all they do but they really could have let this one through even at a one-day update delta. Ah well, now we’re three days wiser and of course nothing changed in the package.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianAndy Simpkins: Using an IP camera in conference calls

My webcam has broken, something that I have been using a lot during the last few months for some reason.

A friend of mine suggested that I use the mic and camera on my mobile phone instead.  There is a simple app ‘droidcam’ that makes the phone behave as a simple webcam; it also has a client application to run on your PC to capture the web-stream and present it as a video device.  All well and good but I would like to keep proprietary software off my PCs (I have a hard enough time accepting it on a phone but I have to draw a line somewhere).

I decided that there had to be a simple way to ingest that stream and present it as a local video device on a Linux box.  It turns out that it is a lot simpler than I thought it would be.  I had it working within 10 minutes!

Packages needed:  ffmpeg, v4l2loopback-utils

sudo apt-get install ffmpeg v4l2loopback-utils

Start the loop-back device:

sudo modprobe v4l2loopback

Ingest video and present on loop-back device:

ffmpeg -re -i <URL_VideoStream> -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

Where

-re
Read input at native frame rate. Mainly used to simulate a grab
device, or live input stream (e.g. when reading from a file). Should
not be used with actual grab devices or live input streams (where it
can cause packet loss). By default ffmpeg attempts to read the
input(s) as fast as possible. This option will slow down the reading
of the input(s) to the native frame rate of the input(s). It is
useful for real-time output (e.g. live streaming).
 -i <source>
In this case my phone running droidcam http://10.14.1.100:4747/video
-vcodec rawvideo
Select the output video codec to raw video (as expected for /dev/video#)
-pix_fmt yuv420p
Set pixel format. In this example it tells ffmpeg that the video
uses the planar YUV colour space with 4:2:0 chroma subsampling
-f v4l2 /dev/video0
Force the output to be in Video4Linux2 (v4l2) format, bound to
device /dev/video0
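To check that frames are actually arriving, you can, for example, point ffplay at the loop-back device:

ffplay /dev/video0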

,

Planet DebianVincent Bernat: Keepalived and unicast over multiple interfaces

Keepalived is a Linux implementation of VRRP. The usual role of VRRP is to share a virtual IP across a set of routers. For each VRRP instance, a leader is elected and gets to serve the IP address, ensuring the high availability of the attached service. Keepalived can also be used for a generic leader election, thanks to its ability to use scripts for healthchecking and run commands on state change.

A simple configuration looks like this:

vrrp_instance gateway1 {
  state BACKUP          # ❶
  interface eth0        # ❷
  virtual_router_id 12  # ❸
  priority 101          # ❹
  virtual_ipaddress {
    2001:db8::ff/64
  }
}

The state keyword in ❶ instructs Keepalived to not take the leader role when starting. Otherwise, incoming nodes create a temporary disruption by taking over the IP address until the election settles. The interface keyword in ❷ defines the interface for sending and receiving VRRP packets. It is also the default interface to configure the virtual IP address. The virtual_router_id directive in ❸ is common to all nodes sharing the virtual IP. The priority keyword in ❹ helps choose which router will be elected as leader. If you need more information around Keepalived, be sure to check the documentation.

VRRP design is tied to Ethernet networks and requires a multicast-enabled network for communication between nodes. In some environments, notably public clouds, multicast is unavailable. In this case, Keepalived can send VRRP packets using unicast:

vrrp_instance gateway1 {
  state BACKUP
  interface eth0
  virtual_router_id 12
  priority 101
  unicast_peer {
    2001:db8::11
    2001:db8::12
  }
  virtual_ipaddress {
    2001:db8::ff/64 dev lo
  }
}

Another process, like a BGP daemon, should advertise the virtual IP address to the “network”. If needed, Keepalived can trigger whatever action is needed for this by using notify_* scripts.
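For example, a sketch of wiring this up with notify scripts (the script path is illustrative):

vrrp_instance gateway1 {
  # […]
  notify_master "/usr/local/bin/vip-announce start"
  notify_backup "/usr/local/bin/vip-announce stop"
  notify_fault  "/usr/local/bin/vip-announce stop"
}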

Until version 2.21 (not released yet), the interface directive is mandatory and Keepalived will transmit and receive VRRP packets on this interface only. If peers are reachable through several interfaces, like on a BGP on the host setup, you need a workaround. A simple one is to use a VXLAN interface:

$ ip -6 link add keepalived6 type vxlan id 6 dstport 4789 local 2001:db8::10 nolearning
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::11
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::12
$ ip link set up dev keepalived6

Learning of MAC addresses is disabled and one generic entry for each peer is added in the forwarding database: transmitted packets are broadcast to all peers, notably VRRP packets. Have a look at “VXLAN & Linux” for additional details.

vrrp_instance gateway1 {
  state BACKUP
  interface keepalived6
  mcast_src_ip 2001:db8::10
  virtual_router_id 12
  priority 101
  virtual_ipaddress {
    2001:db8::ff/64 dev lo
  }
}

Starting from Keepalived 2.21, unicast_peer can be used without the interface directive. I think using VXLAN is still a neat trick applicable to other situations where communication using broadcast or multicast is needed, while the underlying network provides no support for this.

Rondam RamblingsGame over for the USA

I would like to think that Ruth Bader Ginsburg's untimely passing is not the catastrophe that it appears to be.  I would like to think that Mitch McConnell is a man of principle, and having once said that the Senate should not confirm a Supreme Court justice in an election year he will not brazenly expose himself as a hypocrite and confirm a Supreme Court justice in an election year. I would like

Planet DebianVincent Bernat: Syncing NetBox with a custom Ansible module

The netbox.netbox collection from Ansible Galaxy provides several modules to update NetBox objects:

- name: create a device in NetBox
  netbox_device:
    netbox_url: http://netbox.local
    netbox_token: s3cret
    data:
      name: to3-p14.sfo1.example.com
      device_type: QFX5110-48S
      device_role: Compute Switch
      site: SFO1

However, if NetBox is not your source of truth, you may want to ensure it stays in sync with your configuration management database¹ by removing outdated devices or IP addresses. While it should be possible to glue together a playbook with a query, a loop and some filtering to delete unwanted elements, it feels clunky, inefficient and an abuse of YAML as a programming language. A specific Ansible module solves this issue and is likely more flexible.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a first simpler example.

Code

The module has the following signature and it syncs NetBox with the content of the provided YAML file:

netbox_sync:
  source: netbox.yaml
  api: https://netbox.example.com
  token: s3cret

The synchronized objects are:

  • sites,
  • manufacturers,
  • device types,
  • device roles,
  • devices, and
  • IP addresses.

In our environment, the YAML file is generated from our configuration management database and contains a set of devices and a list of IP addresses:

devices:
  ad2-p6.sfo1.example.com:
     datacenter: sfo1
     manufacturer: Cisco
     model: Catalyst 2960G-48TC-L
     role: net_tor_oob_switch
  to1-p6.sfo1.example.com:
     datacenter: sfo1
     manufacturer: Juniper
     model: QFX5110-48S
     role: net_tor_gpu_switch
# […]
ips:
  - device: ad2-p6.example.com
    ip: 172.31.115.18/21
    interface: oob
  - device: to1-p6.example.com
    ip: 172.31.115.33/21
    interface: oob
  - device: to1-p6.example.com
    ip: 172.31.254.33/32
    interface: lo0.0
# […]

The network team is not the sole tenant in NetBox. While adding new objects or modifying existing ones should be relatively safe, deleting unwanted objects can be risky. The module only deletes objects it did create or modify. To identify them, it marks them with a specific tag, cmdb. Most objects in NetBox accept tags.

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    source=dict(type='path', required=True),
    api=dict(type='str', required=True),
    token=dict(type='str', required=True, no_log=True),
    max_workers=dict(type='int', required=False, default=10)
)

result = dict(
    changed=False
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

It contains an additional optional argument defining the number of workers used to talk to NetBox and query the existing objects in parallel, to speed up execution.

Abstracting synchronization

We need to synchronize different object types, but once we have a list of objects we want in NetBox, the grunt work is always the same:

  • check if the objects already exist,
  • retrieve them and put them in a form suitable for comparison,
  • retrieve the extra objects we don’t want anymore,
  • compare the two sets, and
  • add missing objects, update existing ones, delete extra ones.

We code these behaviours into a Synchronizer abstract class. For each kind of object, a concrete class is built with the appropriate class attributes to tune its behaviour and a wanted() method to provide the objects we want.

I am not explaining the abstract class code here. Have a look at the source if you want.

Synchronizing tags and tenants

As a starter, here is how we define the class synchronizing the tags:

class SyncTags(Synchronizer):
    app = "extras"
    table = "tags"
    key = "name"

    def wanted(self):
        return {"cmdb": dict(
            slug="cmdb",
            color="8bc34a",
            description="synced by network CMDB")}

The app and table attributes define the NetBox objects we want to manipulate. The key attribute is used to determine how to look up existing objects. In this example, we want to look up tags using their names.

The wanted() method is expected to return a dictionary mapping object keys to the list of wanted attributes. Here, the keys are tag names and we create only one tag, cmdb, with the provided slug, color and description. This is the tag we will use to mark the objects we create or modify.

If the tag does not exist, it is created. If it exists, the provided attributes are updated. Other attributes are left untouched.

We also want to create a specific tenant for objects accepting such an attribute (devices and IP addresses):

class SyncTenants(Synchronizer):
    app = "tenancy"
    table = "tenants"
    key = "name"

    def wanted(self):
        return {"Network": dict(slug="network",
                                description="Network team")}

Synchronizing sites

We also need to synchronize the list of sites. This time, the wanted() method uses the information provided in the YAML file: it walks the devices and builds a set of datacenter names.

class SyncSites(Synchronizer):

    app = "dcim"
    table = "sites"
    key = "name"
    only_on_create = ("status", "slug")

    def wanted(self):
        result = set(details["datacenter"]
                     for details in self.source['devices'].values()
                     if "datacenter" in details)
        return {k: dict(slug=k,
                        status="planned")
                for k in result}

Thanks to the use of the only_on_create attribute, the specified attributes are not updated if they are different. The goal of this synchronizer is mostly to collect the references to the different sites for other objects.

>>> pprint(SyncSites(**sync_args).wanted())
{'sfo1': {'slug': 'sfo1', 'status': 'planned'},
 'chi1': {'slug': 'chi1', 'status': 'planned'},
 'nyc1': {'slug': 'nyc1', 'status': 'planned'}}

Synchronizing manufacturers, device types and device roles

The synchronization of manufacturers is pretty similar, except we do not use the only_on_create attribute:

class SyncManufacturers(Synchronizer):

    app = "dcim"
    table = "manufacturers"
    key = "name"

    def wanted(self):
        result = set(details["manufacturer"]
                     for details in self.source['devices'].values()
                     if "manufacturer" in details)
        return {k: {"slug": slugify(k)}
                for k in result}

Regarding the device types, we use the foreign attribute linking a NetBox attribute to the synchronizer handling it.

class SyncDeviceTypes(Synchronizer):

    app = "dcim"
    table = "device_types"
    key = "model"
    foreign = {"manufacturer": SyncManufacturers}

    def wanted(self):
        result = set((details["manufacturer"], details["model"])
                     for details in self.source['devices'].values()
                     if "model" in details)
        return {k[1]: dict(manufacturer=k[0],
                           slug=slugify(k[1]))
                for k in result}

The wanted() method refers to the manufacturer using its key attribute. In this case, this is the manufacturer name.

>>> pprint(SyncManufacturers(**sync_args).wanted())
{'Cisco': {'slug': 'cisco'},
 'Dell': {'slug': 'dell'},
 'Juniper': {'slug': 'juniper'}}
>>> pprint(SyncDeviceTypes(**sync_args).wanted())
{'ASR 9001': {'manufacturer': 'Cisco', 'slug': 'asr-9001'},
 'Catalyst 2960G-48TC-L': {'manufacturer': 'Cisco',
                           'slug': 'catalyst-2960g-48tc-l'},
 'MX10003': {'manufacturer': 'Juniper', 'slug': 'mx10003'},
 'QFX10002-36Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-36q'},
 'QFX10002-72Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-72q'},
 'QFX5110-32Q': {'manufacturer': 'Juniper', 'slug': 'qfx5110-32q'},
 'QFX5110-48S': {'manufacturer': 'Juniper', 'slug': 'qfx5110-48s'},
 'QFX5200-32C': {'manufacturer': 'Juniper', 'slug': 'qfx5200-32c'},
 'S4048-ON': {'manufacturer': 'Dell', 'slug': 's4048-on'},
 'S6010-ON': {'manufacturer': 'Dell', 'slug': 's6010-on'}}

The device roles are defined like this:

class SyncDeviceRoles(Synchronizer):

    app = "dcim"
    table = "device_roles"
    key = "name"

    def wanted(self):
        result = set(details["role"]
                     for details in self.source['devices'].values()
                     if "role" in details)
        return {k: dict(slug=slugify(k),
                        color="8bc34a")
                for k in result}

Synchronizing devices

A device is mostly a name with references to a role, a model, a datacenter and a tenant. These references are declared as foreign keys using the synchronizers defined previously.

class SyncDevices(Synchronizer):
    app = "dcim"
    table = "devices"
    key = "name"
    foreign = {"device_role": SyncDeviceRoles,
               "device_type": SyncDeviceTypes,
               "site": SyncSites,
               "tenant": SyncTenants}
    remove_unused = 10

    def wanted(self):
        return {name: dict(device_role=details["role"],
                           device_type=details["model"],
                           site=details["datacenter"],
                           tenant="Network")
                for name, details in self.source['devices'].items()
                if {"datacenter", "model", "role"} <= set(details.keys())}

The remove_unused attribute is a safety implemented to fail if we have to delete more than 10 devices: this may be the indication there is a bug somewhere, unless one of your datacenters suddenly caught fire.

>>> pprint(SyncDevices(**sync_args).wanted())
{'ad2-p6.sfo1.example.com': {'device_role': 'net_tor_oob_switch',
                             'device_type': 'Catalyst 2960G-48TC-L',
                             'site': 'sfo1',
                             'tenant': 'Network'},
 'to1-p6.sfo1.example.com': {'device_role': 'net_tor_gpu_switch',
                             'device_type': 'QFX5110-48S',
                             'site': 'sfo1',
                             'tenant': 'Network'},
[…]

Synchronizing IP addresses

The last step is to synchronize IP addresses. We do not attach them to a device.² Instead, we specify the device names in the description of the IP address:

class SyncIPs(Synchronizer):
    app = "ipam"
    table = "ip-addresses"
    key = "address"
    foreign = {"tenant": SyncTenants}
    remove_unused = 1000

    def wanted(self):
        wanted = {}
        for details in self.source['ips']:
            if details['ip'] in wanted:
                wanted[details['ip']]['description'] = \
                    f"{details['device']} (and others)"
            else:
                wanted[details['ip']] = dict(
                    tenant="Network",
                    status="active",
                    dns_name="",        # information is present in DNS
                    description=f"{details['device']}: {details['interface']}",
                    role=None,
                    vrf=None)
        return wanted

There is a slight difficulty: NetBox allows duplicate IP addresses, so a simple lookup is not enough. In case of multiple matches, we choose the best by preferring those tagged with cmdb, then those already attached to an interface:

def get(self, key):
    """Grab IP address from NetBox."""
    # There may be duplicate. We need to grab the "best".
    results = super(Synchronizer, self).get(key)
    if len(results) == 0:
        return None
    if len(results) == 1:
        return results[0]
    scores = [0]*len(results)
    for idx, result in enumerate(results):
        if "cmdb" in result.tags:
            scores[idx] += 10
        if result.interface is not None:
            scores[idx] += 5
    return sorted(zip(scores, results),
                  reverse=True, key=lambda k: k[0])[0][1]

Getting the current and wanted states

Each synchronizer is initialized with a reference to the Ansible module, a reference to a pynetbox API object, the data contained in the provided YAML file and two empty dictionaries for the current and expected states:

source = yaml.safe_load(open(module.params['source']))
netbox = pynetbox.api(module.params['api'],
                      token=module.params['token'])

sync_args = dict(
    module=module,
    netbox=netbox,
    source=source,
    before={},
    after={}
)
synchronizers = [synchronizer(**sync_args) for synchronizer in [
    SyncTags,
    SyncTenants,
    SyncSites,
    SyncManufacturers,
    SyncDeviceTypes,
    SyncDeviceRoles,
    SyncDevices,
    SyncIPs
]]

Each synchronizer has a prepare() method whose goal is to compute the current and wanted states. It returns True in case of a difference:

# Check what needs to be synchronized
try:
    for synchronizer in synchronizers:
        result['changed'] |= synchronizer.prepare()
except AnsibleError as e:
    result['msg'] = e.message
    module.fail_json(**result)

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between these states. Each synchronizer registers the current and wanted states in sync_args["before"][table] and sync_args["after"][table] where table is the name of the table for a given NetBox object type. The diff object is a bit elaborate as it is built table by table. This enables Ansible to display the name of each table before the diff representation:

# Compute the diff
if module._diff and result['changed']:
    result['diff'] = [
        dict(
            before_header=table,
            after_header=table,
            before=yaml.safe_dump(sync_args["before"][table]),
            after=yaml.safe_dump(sync_args["after"][table]))
        for table in sync_args["after"]
        if sync_args["before"][table] != sync_args["after"][table]
    ]

# Stop here if check mode is enabled or if no change
if module.check_mode or not result['changed']:
    module.exit_json(**result)

Each synchronizer also exposes a synchronize() method to apply changes and a cleanup() method to delete unwanted objects. Order is important due to the relation between the objects.

# Synchronize
for synchronizer in synchronizers:
    synchronizer.synchronize()
for synchronizer in synchronizers[::-1]:
    synchronizer.cleanup()
module.exit_json(**result)

The complete code is available on GitHub. Compared to using the netbox.netbox collection, the logic is written in Python instead of trying to glue Ansible tasks together. I believe this is both more flexible and easier to read, notably when trying to delete outdated objects. While I did not test it, it should also be faster. An alternative would have been to reuse code from the netbox.netbox collection, as it contains similar primitives. Unfortunately, I didn’t think of it until now. 😶


  1. In my opinion, a good option for a source of truth is to use YAML files in a Git repository. You get versioning for free and people can get started with a text editor. ↩

  2. This limitation is mostly due to laziness: we do not really care about this information. Our main motivation for putting IP addresses in NetBox is to keep track of the used IP addresses. However, if an IP address is already attached to an interface, we leave this association untouched. ↩

Planet DebianBits from Debian: New Debian Maintainers (July and August 2020)

The following contributors were added as Debian Maintainers in the last two months:

  • Chirayu Desai
  • Shayan Doust
  • Arnaud Ferraris
  • Fritz Reichwald
  • Kartik Kulkarni
  • François Mazen
  • Patrick Franz
  • Francisco Vilmar Cardoso Ruviaro
  • Octavio Alvarez
  • Nick Black

Congratulations!

Planet DebianRussell Coker: Burning Lithium Ion Batteries

I had an old Nexus 4 phone whose battery was expanding and decided to test some of the theories about battery combustion.

The first claim that often gets made is that if the plastic seal on the outside of the battery is broken then the battery will catch fire. I tested this by cutting the battery with a craft knife. With every cut the battery sparked a bit and then when I levered up layers of the battery (it seems to be multiple flat layers of copper and black stuff inside the battery) there were more sparks. The battery warmed up, it’s plausible that in a confined environment that could get hot enough to set something on fire. But when the battery was resting on a brick in my backyard that wasn’t going to happen.

The next claim is that a Li-Ion battery fire will be increased with water. The first thing to note is that Li-Ion batteries don’t contain Lithium metal (the Lithium high power non-rechargeable batteries do). Lithium metal will seriously go off if exposed to water. But lots of other Lithium compounds will also react vigorously with water (like Lithium oxide for example). After cutting through most of the center of the battery I dripped some water in it. The water boiled vigorously and the corners of the battery (which were furthest away from the area I cut) felt warmer than they did before adding water. It seems that significant amounts of energy are released when water reacts with whatever is inside the Li-Ion battery. The reaction was probably giving off hydrogen gas but didn’t appear to generate enough heat to ignite hydrogen (which is when things would really get exciting). Presumably if a battery was cut in the presence of water while in an enclosed space that traps hydrogen then the sparks generated by the battery reacting with air could ignite hydrogen generated from the water and give an exciting result.

It seems that a CO2 fire extinguisher would be best for a phone/tablet/laptop fire as that removes oxygen and cools it down. If that isn’t available then a significant quantity of water will do the job, water won’t stop the reaction (it can prolong it), but it can keep the reaction to below 100C which means it won’t burn a hole in the floor and the range of toxic chemicals released will be reduced.

The rumour that a phone fire on a plane could do a “China syndrome” type thing and melt through the Aluminium body of the plane seems utterly bogus. I gave it a good try and was unable to get a battery to burn through its plastic and metal foil case. A spare battery for a laptop in checked luggage could be a major problem for a plane if it ignited. But a battery in the passenger area seems unlikely to be a big problem if plenty of water is dumped on it to prevent the plastic case from burning and polluting the air.

I was not able to get a result that was even worthy of a photograph. I may do further tests with laptop batteries.

Planet DebianJelmer Vernooij: Debian Janitor: Expanding Into Improving Multi-Arch

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

As of dpkg 1.16.2 and apt 0.8.13, Debian has full support for multi-arch. To quote from the multi-arch implementation page:

Multiarch lets you install library packages from multiple architectures on the same machine. This is useful in various ways, but the most common is installing both 64- and 32-bit software on the same machine and having dependencies correctly resolved automatically. In general you can have libraries of more than one architecture installed together and applications from one architecture or another installed as alternatives.

The Multi-Arch specification describes a new Multi-Arch header which can be used to indicate how to resolve cross-architecture dependencies.

The existing Debian Multi-Arch hinter is a version of dedup.debian.net that compares binary packages between architectures and suggests fixes to resolve multi-arch problems. It provides hints as to what Multi-Arch fields can be set, allowing the packages to be safely installed in a Multi-Arch world. The full list of almost 10,000 hints generated by the hinter is available at https://dedup.debian.net/static/multiarch-hints.yaml.
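To get a quick idea of whether the hinter has suggestions for a particular package, one rough approach is to download the hints file and search it for the package name (a sketch; autosize is just an example, and the amount of grep context is a guess at the size of each entry):

 curl -s https://dedup.debian.net/static/multiarch-hints.yaml > multiarch-hints.yaml
 grep -B1 -A3 'autosize' multiarch-hints.yaml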

Recent versions of lintian-brush now include a command called apply-multiarch-hints that downloads and locally caches the hints and can apply them to a package maintained in Git. For example, to apply multi-arch hints to autosize.js:

 $ debcheckout autosize.js
 declared git repository at https://salsa.debian.org/js-team/autosize.js.git
 git clone https://salsa.debian.org/js-team/autosize.js.git autosize.js ...
 Cloning into 'autosize.js'...
 [...]
 $ cd autosize.js
 $ apply-multiarch-hints
 Downloading new version of multi-arch hints.
 libjs-autosize: Add Multi-Arch: foreign.
 node-autosize: Add Multi-Arch: foreign.
 $ git log -p
 commit 3f8d1db5af4a87e6ebb08f46ddf79f6adf4e95ae (HEAD -> master)
 Author: Jelmer Vernooij <jelmer@debian.org>
 Date:   Fri Sep 18 23:37:14 2020 +0000

     Apply multi-arch hints.
     + libjs-autosize, node-autosize: Add Multi-Arch: foreign.

     Changes-By: apply-multiarch-hints

 diff --git a/debian/changelog b/debian/changelog
 index e7fa120..09af4a7 100644
 --- a/debian/changelog
 +++ b/debian/changelog
 @@ -1,3 +1,10 @@
 +autosize.js (4.0.2~dfsg1-5) UNRELEASED; urgency=medium
 +
 +  * Apply multi-arch hints.
 +    + libjs-autosize, node-autosize: Add Multi-Arch: foreign.
 +
 + -- Jelmer Vernooij <jelmer@debian.org>  Fri, 18 Sep 2020 23:37:14 -0000
 +
  autosize.js (4.0.2~dfsg1-4) unstable; urgency=medium

    * Team upload
 diff --git a/debian/control b/debian/control
 index 01ca968..fbba1ae 100644
 --- a/debian/control
 +++ b/debian/control
 @@ -20,6 +20,7 @@ Architecture: all
  Depends: ${misc:Depends}
  Recommends: javascript-common
  Breaks: ruby-rails-assets-autosize (<< 4.0)
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - NodeJS
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,
 @@ -32,6 +33,7 @@ Package: node-autosize
  Architecture: all
  Depends: ${misc:Depends}
   , nodejs
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - Javascript
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,

The Debian Janitor also has a new multiarch-fixes suite that runs apply-multiarch-hints across packages in the archive and proposes merge requests. For example, you can see the merge request against autosize.js here.

For more information about the Janitor's lintian-fixes efforts, see the landing page.

,

Cryptogram CEO of NS8 Charged with Securities Fraud

The founder and CEO of the Internet security company NS8 has been arrested and “charged in a Complaint in Manhattan federal court with securities fraud, fraud in the offer and sale of securities, and wire fraud.”

I admit that I’ve never even heard of the company before.

Planet DebianDaniel Lange: Fixing the Nextcloud menu to show more than eight application icons

I have been late to adopt an on-premise cloud solution as the security of Owncloud a few years ago wasn't so stellar (cf. my comment from 2013 in Encryption files ... for synchronization across the Internet). But the follow-up product Nextcloud has matured quite nicely and we use it for collaboration both in the company and in FLOSS related work at multiple nonprofit organizations.

There is a very annoying "feature" in Nextcloud though that the designers think menu items for apps at the top need to be limited to eight or less to prevent information overload in the header. The whole item discussion is worth reading as it is an archetypical example of design prevalence vs. user choice.

And of course designers think they are right. That's a feature of the trade.
And because they know better there is no user-configurable option to extend those 8 items to maybe 12 or so, which would prevent the annoying overflow menu we are seeing with 10 applications in use:

Screenshot of stock Nextcloud menu

Luckily code can be changed and there are many comments floating around the Internet suggesting to change const minAppsDesktop = 8. In this case it is slightly complicated by the fact that the javascript code is distributed in compressed form (aka "minified") as core/js/dist/main.js and you probably don't want to build the whole beast locally to change one constant.

Basically

const breakpoint_mobile_width = 1024;

const resizeMenu = () => {
    const appList = $('#appmenu li')
    const rightHeaderWidth = $('.header-right').outerWidth()
    const headerWidth = $('header').outerWidth()
    const usePercentualAppMenuLimit = 0.33
    const minAppsDesktop = 8
    let availableWidth = headerWidth - $('#nextcloud').outerWidth() - (rightHeaderWidth > 210 ? rightHeaderWidth : 210)
    const isMobile = $(window).width() < breakpoint_mobile_width
    if (!isMobile) {
        availableWidth = availableWidth * usePercentualAppMenuLimit
    }
    let appCount = Math.floor((availableWidth / $(appList).width()))
    if (isMobile && appCount > minAppsDesktop) {
        appCount = minAppsDesktop
    }
    if (!isMobile && appCount < minAppsDesktop) {
        appCount = minAppsDesktop
    }

    // show at least 2 apps in the popover
    if (appList.length - 1 - appCount >= 1) {
        appCount--
    }

    $('#more-apps a').removeClass('active')
    let lastShownApp
    for (let k = 0; k < appList.length - 1; k++) {
        const name = $(appList[k]).data('id')
        if (k < appCount) {
            $(appList[k]).removeClass('hidden')
            $('#apps li[data-id=' + name + ']').addClass('in-header')
            lastShownApp = appList[k]
        } else {
            $(appList[k]).addClass('hidden')
            $('#apps li[data-id=' + name + ']').removeClass('in-header')
            // move active app to last position if it is active
            if (appCount > 0 && $(appList[k]).children('a').hasClass('active')) {
                $(lastShownApp).addClass('hidden')
                $('#apps li[data-id=' + $(lastShownApp).data('id') + ']').removeClass('in-header')
                $(appList[k]).removeClass('hidden')
                $('#apps li[data-id=' + name + ']').addClass('in-header')
            }
        }
    }

    // show/hide more apps icon
    if ($('#apps li:not(.in-header)').length === 0) {
        $('#more-apps').hide()
        $('#navigation').hide()
    } else {
        $('#more-apps').show()
    }
}

gets compressed during build time to become part of one 15,000+ character line. The relevant portion reads:

var f=function(){var e=s()("#appmenu li"),t=s()(".header-right").outerWidth(),n=s()("header").outerWidth()-s()("#nextcloud").outerWidth()-(t>210?t:210),i=s()(window).width()<1024;i||(n*=.33);var r,o=Math.floor(n/s()(e).width());i&&o>8&&(o=8),!i&&o<8&&(o=8),e.length-1-o>=1&&o--,s()("#more-apps a").removeClass("active");for(var a=0;a<e.length-1;a++){var l=s()(e[a]).data("id");a<o?(s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header"),r=e[a]):(s()(e[a]).addClass("hidden"),s()("#apps li[data-id="+l+"]").removeClass("in-header"),o>0&&s()(e[a]).children("a").hasClass("active")&&(s()(r).addClass("hidden"),s()("#apps li[data-id="+s()(r).data("id")+"]").removeClass("in-header"),s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header")))}0===s()("#apps li:not(.in-header)").length?(s()("#more-apps").hide(),s()("#navigation").hide()):s()("#more-apps").show()}

Well, we can still patch that, can't we?
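One way to do it without rebuilding anything is a sed one-liner over the minified file (a sketch: the path /var/www/nextcloud is an assumption, and the pattern below matches the minified release quoted above, so verify with grep and keep a backup before editing):

cd /var/www/nextcloud
cp core/js/dist/main.js core/js/dist/main.js.orig
# raise the limit from 8 to 12 in the two comparisons shown above
sed -i 's/o>8&&(o=8),!i&&o<8&&(o=8)/o>12\&\&(o=12),!i\&\&o<12\&\&(o=12)/' core/js/dist/main.js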

Continue reading "Fixing the Nextcloud menu to show more than eight application icons"

Planet DebianSven Hoexter: Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past month. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard.

The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master.

gh functions and aliases

# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"

#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"

### github support functions, most invoked with a PR ID as $1

#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}

# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}

# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}

# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}

# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}

wtfutil

I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:

github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github

The -label:dependencies is used here to filter out dependabot PRs in the dashboard.

Workflow

Look at a PR with ghv $ID and, if it's ok, ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.
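Put together, a typical round trip with the helpers above looks like this (a sketch; the PR number 123 is a placeholder taken from the dashboard):

ghv 123      # view the PR
ghs 123      # view, checks and diff in one go
gha 123      # approve it
ghm 123      # merge it, delete the branch, pull master if checked out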

Security Considerations

The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access, everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

Planet DebianDaniel Lange: Getting rid of the Google cookie consent popup

If you clear your browser cookies regularly (as you should do), Google will annoy you with a full screen cookie consent overlay these days. And - of course - there is no "no tracking consent, technically required cookies only" button. You may log in to Google to set your preference. Yeah, I'm sure this is totally following the intent of the EU Directive 2009/136/EC (the "cookie law").

Google cookie consent pop-up

Unfortunately none of the big "anti-annoyances" filter lists seem to have picked that one up yet but the friendly folks from the Computerbase Forum [German] to the rescue. User "Sepp Depp" has created the following filter set that WFM:

Add this to your uBlock Origin "My filters" tab:

! Google - remove cookie-consent-popup and restore scroll functionality
google.*##.wwYr3.aID8W.bErdLd
google.*##.aID8W.m114nf.t7xA6
google.*##div[jsname][jsaction^="dg_close"]
google.*##html:style(overflow: visible !important;)
google.*##.widget-consent-fullscreen.widget-consent

Planet Linux AustraliaFrancois Marier: Setting up and testing an NPR modem on Linux

After acquiring a pair of New Packet Radio modems on behalf of VECTOR, I set them up on my Linux machine and ran some basic tests to check whether they could achieve the advertised 500 kbps transfer rates, which are much higher than AX.25 packet radio.

The exact equipment I used was:

Radio setup

After connecting the modems to the power supply and their respective antennas, I connected both modems to my laptop via micro-USB cables and used minicom to connect to their console on /dev/ttyACM[01]:

minicom -8 -b 921600 -D /dev/ttyACM0
minicom -8 -b 921600 -D /dev/ttyACM1

To confirm that the firmware was the latest one, I used the following command:

ready> version
firmware: 2020_02_23
freq band: 70cm

then I immediately turned off the radio:

radio off

which can be verified with:

status

Following the British Columbia 70 cm band plan, I picked the following frequency, modulation (bandwidth of 360 kHz), and power (0.05 W):

set frequency 433.500
set modulation 22
set RF_power 7

and then did the rest of the configuration for the master:

set callsign VA7GPL_0
set is_master yes
set DHCP_active no
set telnet_active no

and the client:

set callsign VA7GPL_1
set is_master no
set DHCP_active yes
set telnet_active no

and that was enough to get the two modems to talk to one another.

On both of them, I ran the following:

save
reboot

and confirmed that they were able to successfully connect to each other:

who

Monitoring RF

To monitor what is happening on the air and quickly determine whether or not the modems are chatting, you can use a software-defined radio along with gqrx with the following settings:

frequency: 433.500 MHz
filter width: user (80k)
filter shape: normal
mode: Raw I/Q

I found it quite helpful to keep this running the whole time I was working with these modems. The background "keep alive" sounds are quite distinct from the heavy traffic sounds.

IP setup

The radio bits out of the way, I turned to the networking configuration.

On the master, I set the following so that I could connect the master to my home network (192.168.1.0/24) without conflicts:

set def_route_active yes
set DNS_active no
set modem_IP 192.168.1.254
set IP_begin 192.168.1.225
set master_IP_size 29
set netmask 255.255.255.0

(My router's DHCP server is configured to allocate dynamic IP addresses from 192.168.1.100 to 192.168.1.224.)

At this point, I connected my laptop to the client using a CAT-5 network cable and the master to the ethernet switch, essentially following Annex 5 of the Advanced User Guide.

My laptop got assigned IP address 192.168.1.225 and so I used another computer on the same network to ping my laptop via the NPR modems:

ping 192.168.1.225

This gave me a round-trip time of around 150-250 ms.

Performance test

Having successfully established an IP connection between the two machines, I decided to run a quick test to measure the available bandwidth in an ideal setting (i.e. the two antennas very close to each other).

On both computers, I installed iperf:

apt install iperf

and then setup the iperf server on my desktop computer:

sudo iptables -A INPUT -s 192.168.1.0/24 -p TCP --dport 5001 -j ACCEPT
sudo iptables -A INPUT -s 192.168.1.0/24 -p UDP --dport 5001 -j ACCEPT
iperf --server

On the laptop, I set the MTU to 750 in NetworkManager and restarted the network.
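The same change can also be made from the command line with nmcli (a sketch; the connection name "Wired connection 1" is an assumption and will differ per machine — list names with nmcli connection show):

nmcli connection modify "Wired connection 1" 802-3-ethernet.mtu 750
# bounce the connection so the new MTU takes effect
nmcli connection down "Wired connection 1" && nmcli connection up "Wired connection 1"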

Then I created a new user account (npr with a uid of 1001):

sudo adduser npr

and made sure that only that account could access the network by running the following as root:

# Flush all chains.
iptables -F

# Set defaults policies.
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Don't block localhost and ICMP traffic.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Don't re-evaluate already accepted connections.
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Allow connections to/from the test user.
iptables -A OUTPUT -m owner --uid-owner 1001 -m conntrack --ctstate NEW -j ACCEPT

# Log anything that gets blocked.
iptables -A INPUT -j LOG
iptables -A OUTPUT -j LOG
iptables -A FORWARD -j LOG

then I started the test as the npr user:

sudo -i -u npr
iperf --client 192.168.1.8

Results

The results were as good as advertised both with modulation 22 (360 kHz bandwidth):

$ iperf --client 192.168.1.8 --time 30
------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58462 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-34.5 sec  1.12 MBytes   274 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58468 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-42.5 sec  1.12 MBytes   222 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58484 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-38.5 sec  1.12 MBytes   245 Kbits/sec

and modulation 24 (1 MHz bandwidth):

$ iperf --client 192.168.1.8 --time 30
------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58148 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-31.1 sec  1.88 MBytes   506 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58246 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.5 sec  2.00 MBytes   550 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58292 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  2.00 MBytes   559 Kbits/sec

Worse Than FailureError'd: Just a Trial Run

"How does Netflix save money when making their original series? It's simple. They just use trial versions of VFX software," Nick L. wrote.

 

Chris A. writes, "Why get low quality pixelated tickets when you can have these?"

 

"Better make room! This USB disk enclosure is ready for supper and som really mega-bytes!" wrote Stuart L.

 

Scott writes, "Go Boncos!"

 

"With rewards like these, I can't believe more people don't pledge on Patreon!" writes Chris A.

 


,

Krebs on SecurityChinese Antivirus Firm Was Part of APT41 ‘Supply Chain’ Attack

The U.S. Justice Department this week indicted seven Chinese nationals for a decade-long hacking spree that targeted more than 100 high-tech and online gaming companies. The government alleges the men used malware-laced phishing emails and “supply chain” attacks to steal data from companies and their customers. One of the alleged hackers was first profiled here in 2012 as the owner of a Chinese antivirus firm.

Image: FBI

Charging documents say the seven men are part of a hacking group known variously as “APT41,” “Barium,” “Winnti,” “Wicked Panda,” and “Wicked Spider.” Once inside of a target organization, the hackers stole source code, software code signing certificates, customer account data and other information they could use or resell.

APT41’s activities span from the mid-2000s to the present day. Earlier this year, for example, the group was tied to a particularly aggressive malware campaign that exploited recent vulnerabilities in widely-used networking products, including flaws in Cisco and D-Link routers, as well as Citrix and Pulse VPN appliances. Security firm FireEye dubbed that hacking blitz “one of the broadest campaigns by a Chinese cyber espionage actor we have observed in recent years.”

The government alleges the group monetized its illicit access by deploying ransomware and “cryptojacking” tools (using compromised systems to mine cryptocurrencies like Bitcoin). In addition, the gang targeted video game companies and their customers in a bid to steal digital items of value that could be resold, such as points, powers and other items that could be used to enhance the game-playing experience.

APT41 was known to hide its malware inside fake resumes that were sent to targets. It also deployed more complex supply chain attacks, in which they would hack a software company and modify the code with malware.

“The victim software firm — unaware of the changes to its product — would subsequently distribute the modified software to its third-party customers, who were thereby defrauded into installing malicious software code on their own computers,” the indictments explain.

While the various charging documents released in this case do not mention it per se, it is clear that members of this group also favored another form of supply chain attacks — hiding their malware inside commercial tools they created and advertised as legitimate security software and PC utilities.

One of the men indicted as part of APT41 — now 35-year-old Tan DaiLin — was the subject of a 2012 KrebsOnSecurity story that sought to shed light on a Chinese antivirus product marketed as Anvisoft. At the time, the product had been “whitelisted” or marked as safe by competing, more established antivirus vendors, although the company seemed unresponsive to user complaints and to questions about its leadership and origins.

Tan DaiLin, a.k.a. “Wicked Rose,” in his younger years. Image: iDefense

Anvisoft claimed to be based in California and Canada, but a search on the company’s brand name turned up trademark registration records that put Anvisoft in the high-tech zone of Chengdu in the Sichuan Province of China.

A review of Anvisoft’s website registration records showed the company’s domain originally was created by Tan DaiLin, an infamous Chinese hacker who went by the aliases “Wicked Rose” and “Withered Rose.” At the time of story, DaiLin was 28 years old.

That story cited a 2007 report (PDF) from iDefense, which detailed DaiLin’s role as the leader of a state-sponsored, four-man hacking team called NCPH (short for Network Crack Program Hacker). According to iDefense, in 2006 the group was responsible for crafting a rootkit that took advantage of a zero-day vulnerability in Microsoft Word, and was used in attacks on “a large DoD entity” within the USA.

“Wicked Rose and the NCPH hacking group are implicated in multiple Office based attacks over a two year period,” the iDefense report stated.

When I first scanned Anvisoft at Virustotal.com back in 2012, none of the antivirus products detected it as suspicious or malicious. But in the days that followed, several antivirus products began flagging it for bundling at least two trojan horse programs designed to steal passwords from various online gaming platforms.

Security analysts and U.S. prosecutors say APT41 operated out of a Chinese enterprise called Chengdu 404 that purported to be a network technology company but which served as a legal front for the hacking group’s illegal activities, and that Chengdu 404 used its global network of compromised systems as a kind of dragnet for information that might be useful to the Chinese Communist Party.

Chengdu404’s offices in China. Image: DOJ.

“CHENGDU 404 developed a ‘big data’ product named ‘SonarX,’ which was described…as an ‘Information Risk Assessment System,'” the government’s indictment reads. “SonarX served as an easily searchable repository for social media data that previously had been obtained by CHENGDU 404.”

The group allegedly used SonarX to search for individuals linked to various Hong Kong democracy and independence movements, and snoop on a U.S.-backed media outlet that ran stories examining the Chinese government’s treatment of Uyghur people living in its Xinjiang region.

As noted by TechCrunch, after the indictments were filed prosecutors said they obtained warrants to seize websites, domains and servers associated with the group’s operations, effectively shutting them down and hindering their operations.

“The alleged hackers are still believed to be in China, but the allegations serve as a ‘name and shame’ effort employed by the Justice Department in recent years against state-backed cyber attackers,” wrote TechCrunch’s Zack Whittaker.

Cryptogram Amazon Delivery Drivers Hacking Scheduling System

Amazon drivers — all gig workers who don’t work for the company — are hanging cell phones in trees near Amazon delivery stations, fooling the system into thinking that they are closer than they actually are:

The phones in trees seem to serve as master devices that dispatch routes to multiple nearby drivers in on the plot, according to drivers who have observed the process. They believe an unidentified person or entity is acting as an intermediary between Amazon and the drivers and charging drivers to secure more routes, which is against Amazon’s policies.

The perpetrators likely dangle multiple phones in the trees to spread the work around to multiple Amazon Flex accounts and avoid detection by Amazon, said Chetan Sharma, a wireless industry consultant. If all the routes were fed through one device, it would be easy for Amazon to detect, he said.

“They’re gaming the system in a way that makes it harder for Amazon to figure it out,” Sharma said. “They’re just a step ahead of Amazon’s algorithm and its developers.”

Cryptogram Friday Squid Blogging: Nano-Sized SQUIDS

SQUID news:

Physicists have developed a small, compact superconducting quantum interference device (SQUID) that can detect magnetic fields. The team focused on the instrument’s core, which contains two parallel layers of graphene.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianRussell Coker: Dell BIOS Updates

I have just updated the BIOS on a Dell PowerEdge T110 II. The process isn’t too difficult, Google for the machine name and BIOS, download a shell script encoded firmware image and GPG signature, then run the script on the system in question.

One problem is that the Dell GPG key isn’t signed by anyone. How hard would it be to get a few well connected people in the Linux community to sign the key used for signing Linux scripts for updating the BIOS? I would be surprised if Dell doesn’t employ a few people who are well connected in the Linux community; they should just ask all employees to sign such GPG keys! Failing that there are plenty of other options. I’d be happy to sign the Dell key if contacted by someone who can prove that they are a responsible person in Dell. If I could phone Dell corporate and ask for the engineering department and then have someone tell me the GPG fingerprint I’d sign the key and that problem would be partially solved (my key is well connected but you need more than one signature).
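For anyone unfamiliar with the process, signing and publishing someone else's key is only a few commands (a sketch; 0xDEADBEEF is a placeholder for the real Dell key ID, and the fingerprint must be verified out of band first):

gpg --recv-keys 0xDEADBEEF      # fetch the key from a keyserver
gpg --fingerprint 0xDEADBEEF    # compare against the verified fingerprint
gpg --sign-key 0xDEADBEEF       # certify it with your own key
gpg --send-keys 0xDEADBEEF      # publish the new signature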

The next issue is how to determine that a BIOS update works. What you really don’t want is to have a BIOS update fail and brick your system! So the Linux update process loads the image into something (special firmware RAM maybe) and then reboots the system and the reboot then does a critical part of the update. If the reboot doesn’t work then you end up with the old version of the BIOS. This is overall a good thing.

The PowerEdge T110 II is a workstation with an NVidia video card (I tried an ATI card but that wouldn’t boot for unknown reasons). The Nouveau driver has some issues. One thing I have done to work around some Nouveau issues is to create a file “~/.config/plasma-workspace/env/nouveau-broken.sh” (for KDE sessions) with the following contents:

export LIBGL_ALWAYS_SOFTWARE=1

I previously wrote about using this just for Kmail to stop it crashing [1]. But after doing that I still had other problems with video and disabling all GL on the NVidia card was necessary.

The latest problem I’ve had is that even when using that configuration things don’t go well. When I run the “reboot” command I end up with a kernel message about the GPU not responding and then it doesn’t reboot. That means that the BIOS update doesn’t apply, a hard reboot signals to the system that the new BIOS wasn’t good and I end up with the old BIOS again. I discovered that disabling sddm (the latest xdm program in Debian) from starting on boot meant that a reboot command would work. Then I ran the BIOS update script and its reboot command worked and gave a successful BIOS update.

So I’ve gone from a 2013 BIOS to a 2018 BIOS! The update list says that some CVEs have been addressed, but the spectre-meltdown-checker doesn’t report any fewer vulnerabilities.

Google AdsenseDirector of Global Partnerships Solutions

The GNI Digital Growth Program helps small and mid-sized publishers around the world grow their businesses online.

Worse Than FailureConfiguration Errors

Automation and tooling, especially around continuous integration and continuous deployment is standard on applications, large and small.

Paramdeep Singh Jubbal works on a larger one, with a larger team, and all the management overhead such a team brings. It needs to interact with a REST API, and as you might expect, the URL for that API is different in production and test environments. This is all handled by the CI pipeline, so long as you remember to properly configure which URLs map to which environments.

Paramdeep mistyped one of the URLs when configuring a build for a new environment. The application passed all the CI checks, and when it was released to the new environment, it crashed and burned. Their error handling system detected the failure, automatically filed a bug ticket, Paramdeep saw the mistake, fixed it, and everything was back to the way it was supposed to be within five minutes.

But a ticket had been opened. For a bug. And the team lead, Emmett, saw it. And so, in their next team meeting, Emmett launched with, “We should talk about Bug 264.”

“Ah, that’s already resolved,” Paramdeep said.

“Is it? I didn’t see a commit containing a unit test attached to the ticket,” Emmett said.

“I didn’t write one,” Paramdeep said, getting a little confused at this point. “It was just a misconfiguration.”

“Right, so there should be a test to ensure that the correct URL is configured before we deploy to the environment.” That was, in fact, the policy: any bug ticket which was closed was supposed to have an associated unit test which would protect against regressions.

“You… want a unit test which confirms that the environment URLs are configured correctly?”

“Yes,” Emmett said. “There should never be a case where the application connects to an incorrect URL.”

“But it gets that URL from the configuration.”

“And it should check that configuration is correct. Honestly,” Emmett said, “I know I’m not the most technical person on this team, but that just sounds like common sense to me.”

“It’s just…” Paramdeep considered how to phrase this. “How does the unit test tell which are the correct or incorrect URLs?”

Emmett huffed. “I don’t understand why I’m explaining this to a developer. But the URLs have a pattern. URLs which match the pattern are valid, and they pass the test. URLs which don’t fail the test, and we should fallback to a known-good URL for that environment.”

“And… ah… how do we know what the known-good URL is?”

“From the configuration!”

“So you want me to write a unit test which checks the configuration to see if the URLs are valid, and if they’re not, it uses a valid URL from the configuration?”

“Yes!” Emmett said, fury finally boiling over. “Why is that so hard?”

Paramdeep thought the question explained why it was so hard, but after another moment’s thought, there was an even better reason that Emmett might actually understand.

“Because the configuration files are available during the release step, and the unit tests are run after the build step, which is before the release step.”

Emmett blinked and considered how their CI pipeline worked. “Right, fair enough, we’ll put a pin in that then, and come back to it in a future meeting.”

“Well,” Paramdeep thought, “that didn’t accomplish anything, but it was a good waste of 15 minutes.”


Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.1: New and Exciting Logging Package

Very thrilled to announce a new package RcppSpdlog which is now on CRAN in its first release 0.0.1. We had tweeted once about the earliest version which had already caught the eye of Gabi upstream.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

I had meant to package this for a few years now but didn’t find an (easy, elegant) way to completely lift the stdout / stderr and related uses, which R wants us to remove in favour of its own synchronized input/output. It was only a few weeks ago that I realized I should subclass a logger (or, more concretely, a sink for a logger) which could then use R input/output. With the right idea the implementation was easy, and Gabi (https://github.com/gabime) was most helpful in making sure R CMD check would not see one or two remaining C++ i/o operations (which we currently do by not activating a default logger, and substituting REprintf() in one call). So this is now clean, and a simple use is included in an example in the package which we can show here too (in slightly shorter form minus the documentation header):

The NEWS entry for the first release follows.

Changes in RcppSpdlog version 0.0.1 (2020-09-08)

  • Initial release with added R/Rcpp logging sink example

The only sour grapes, if any, are over the CRAN processing. This was originally uploaded three weeks ago. As a new package, it got extra attention, including some truly idiosyncratic attention to two details that were already supplied in the first uploaded version. Yet it needed two rounds of going back and forth for really no great net gain, wasting a week each time. I am not all that impressed by this, and not particularly pleased either, but I presume it is the “tax” we all pay in order to enjoy the unsurpassed richness of the CRAN repository system which continues to work just flawlessly.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianSteve Kemp: Implementing a FORTH-like language ..

Four years ago somebody posted a comment-thread describing how you could start writing a little reverse-polish calculator, in C, and slowly improve it until you had written a minimal FORTH-like system:

At the time I read that comment I'd just hacked up a simple FORTH REPL of my own, in Perl, and I said "thanks for posting". I was recently reminded of this discussion, and decided to work through the process.

Using only minimal outside resources the recipe worked as expected!

The end-result is I have a working FORTH-lite, or FORTH-like, interpreter written in around 2000 lines of golang! Features include:

  • Reverse-Polish mathematical operations.
  • Comments between ( and ) are ignored, as expected.
    • Single-line comments \ to the end of the line are also supported.
  • Support for floating-point numbers (anything that will fit inside a float64).
  • Support for printing the top-most stack element (., or print).
  • Support for outputting ASCII characters (emit).
  • Support for outputting strings (." Hello, World ").
  • Support for basic stack operations (drop, dup, over, swap)
  • Support for loops, via do/loop.
  • Support for conditional-execution, via if, else, and then.
  • Load any files specified on the command-line
    • If no arguments are included run the REPL
  • A standard library is loaded, from the present directory, if it is present.

To give a flavour, here we define a word called star which just outputs a single star character:

: star 42 emit ;

Now we can call that (NOTE: We didn't add a newline here, so the REPL prompt follows it, that's expected):

> star
*>

To make it more useful we define the word "stars" which shows N stars:

> : stars dup 0 > if 0 do star loop else drop then ;
> 0 stars
> 1 stars
*> 2 stars
**> 10 stars
**********>

This example uses both if to test that the parameter on the stack was greater than zero, as well as do/loop to handle the repetition.

Finally we use that to draw a box:

> : squares 0 do over stars cr loop ;
> 4 squares
****
****
****
****

> 10 squares
**********
**********
**********
**********
**********
**********
**********
**********
**********
**********

For fun we allow decompiling the words too:

> #words 0 do dup dump loop
..
Word 'square'
 0: dup
 1: *
Word 'cube'
 0: dup
 1: square
 2: *
Word '1+'
 0: store 1.000000
 2: +
Word 'test_hot'
  0: store 0.000000
  2: >
  3: if
  4: [cond-jmp 7.000000]
  6: hot
  7: then
..

Anyway if that is at all interesting feel free to take a peek. There's a bit of hackery there to avoid the use of return-stacks, etc. Compared to gforth this is actually more featureful in some areas:

  • I allow you to use conditionals in the REPL - outside a word-definition.
  • I allow you to use loops in the REPL - outside a word-definition.

Find the code here:

Krebs on SecurityTwo Russians Charged in $17M Cryptocurrency Phishing Spree

U.S. authorities today announced criminal charges and financial sanctions against two Russian men accused of stealing nearly $17 million worth of virtual currencies in a series of phishing attacks throughout 2017 and 2018 that spoofed websites for some of the most popular cryptocurrency exchanges.


The Justice Department unsealed indictments against Russian nationals Danil Potekhin and Dmitirii Karasavidi, alleging the duo was responsible for a sophisticated phishing and money laundering campaign that resulted in the theft of $16.8 million in cryptocurrencies and fiat money from victims.

Separately, the U.S. Treasury Department announced economic sanctions against Potekhin and Karasavidi, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

According to the indictments, the two men set up fake websites that spoofed login pages for the currency exchanges Binance, Gemini and Poloniex. Armed with stolen login credentials, the men allegedly stole more than $10 million from 142 Binance victims, $5.24 million from 158 Poloniex users, and $1.17 million from 42 Gemini customers.

Prosecutors say the men then laundered the stolen funds through an array of intermediary cryptocurrency accounts — including compromised and fictitiously created accounts — on the targeted cryptocurrency exchange platforms. In addition, the two are alleged to have artificially inflated the value of their ill-gotten gains by engaging in cryptocurrency price manipulation using some of the stolen funds.

For example, investigators alleged Potekhin and Karasavidi used compromised Poloniex accounts to place orders to purchase large volumes of “GAS,” the digital currency token used to pay the cost of executing transactions on the NEO blockchain — China’s first open source blockchain platform.

“Using digital currency in one victim Poloniex account, they placed an order to purchase approximately 8,000 GAS, thereby immediately increasing the market price of GAS from approximately $18 to $2,400,” the indictment explains.

Potekhin and others then converted the artificially inflated GAS in their own fictitious Poloniex accounts into other cryptocurrencies, including Ethereum (ETH) and Bitcoin (BTC). From the complaint:

“Before the Eight Fictitious Poloniex Accounts were frozen, POTEKHIN and others transferred approximately 759 ETH to nine digital currency addresses. Through a sophisticated and layered manner, the ETH from these nine digital currency addresses was sent through multiple intermediary accounts, before ultimately being deposited into a Bitfinex account controlled by Karasavidi.”

The Treasury’s action today lists several of the cryptocurrency accounts thought to have been used by the defendants. Searching on some of those accounts at various cryptocurrency transaction tracking sites points to a number of phishing victims.

“I would like to blow your bitch ass away, if you even had the balls to show yourself,” exclaimed one victim, posting in a comment on the Etherscan lookup service.

One victim said he contemplated suicide after being robbed of his ETH holdings in a 2017 phishing attack. Another said he’d been relieved of funds needed to pay for his 3-year-old daughter’s medical treatment.

“You and your team will leave a trail and will be found,” wrote one victim, using the handle ‘Illfindyou.’ “You’ll only be able to hide behind the facade for a short while. Go steal from whales you piece of shit.”

There is potentially some good news for victims of these phishing attacks. According to the Treasury Department, millions of dollars in virtual currency and U.S. dollars traced to Karasavidi’s account was seized in a forfeiture action by the United States Secret Service.

Whether any of those funds can be returned to victims of this phishing spree remains to be seen. And assuming that does happen, it could take years. In February 2020, KrebsOnSecurity wrote about being contacted by an Internal Revenue Service investigator seeking to return funds seized seven years earlier as part of the government’s 2013 seizure of Liberty Reserve, a virtual currency service that acted as a $6 billion hub for the cybercrime world.

Today’s action is the latest indication that the Treasury Department is increasingly willing to use its authority to restrict the financial resources tied to various cybercrime activities. Earlier this month, the agency’s Office of Foreign Asset Control (OFAC) added three Russian nationals and a host of cryptocurrency addresses to its sanctions lists in a case involving efforts by Russian online troll farms to influence the 2018 mid-term elections.

In June, OFAC took action against six Nigerian nationals suspected of stealing $6 million from U.S. businesses and individuals through Business Email Compromise fraud and romance scams.

And in 2019, OFAC sanctioned 17 members allegedly associated with “Evil Corp.,” an Eastern European cybercrime syndicate that has stolen more than $100 million from small businesses via malicious software over the past decade.

A copy of the indictments against Potekhin and Karasavidi is available here (PDF).

Kevin RuddSky UK: Green Recovery, Tony Abbott and Donald Trump

E&OE TRANSCRIPT
TV INTERVIEW
SKY NEWS UK
16 SEPTEMBER 2020

Topics: Project Syndicate Conference; Climate Change; Tony Abbott; Trade; US Election

Kay Burley
The world’s green recovery from the pandemic is the centre of a virtual conference which will feature speakers from across the world, including former Prime Minister Gordon Brown. Former Australian Prime Minister and president of the Asia Society Policy Institute Kevin Rudd will also be speaking at the event, and he is with us now. Hello to Mr Rudd, it’s a pleasure to have you on the program this morning. Tell us a bit more about this conference. What are you hoping to achieve?

Kevin Rudd
Well, the purpose of the conference is two-fold. There is a massive global investment program underway at present to bring about economic recovery, given the impact of the COVID-19 crisis. And secondly, therefore, we either engineer a green economic recovery, or we lose this massive opportunity. So, therefore, these opportunities do not come along often to radically change the dial. And we’re talking about investments in renewable energy, we’re talking about decommissioning coal-fired stations, and we’re talking about also investment in the next generation of R&D so that we can, in fact, bring global greenhouse gas emissions down to the extent necessary to keep temperature increases this century within 1.5 degrees centigrade.

Kay Burley
It’s difficult though isn’t it with the global pandemic Covid infiltrating almost everyone’s lives around the globe. How are you going to get people focused on climate change?

Kevin Rudd
It’s called — starts with L ends with P — and it’s called leadership. Look, when we went through the Global Financial Crisis of a decade or so ago, many of us, President Obama, myself and others, we elected to bring about a green recovery then as well. In our case, we legislated for renewable energies in Australia to go from a then base of 4% of total electricity supply to 20% by 2020. Everyone said you can’t do that in the midst of a global financial crisis, all too difficult. But guess what, a decade later, we now have 21% renewables in Australia. We did the same as solar subsidies for people’s houses and on their roofs, and for the installation of people’s homes to bring down electricity demand. These are the practical things which can be done at scale when you’re seeking to invest to create new levels of economic activity, but at the same time doing so in a manner which keeps our greenhouse gas emissions heading south rather than north.

Kay Burley
Of course, you talk about leadership, the leader of the free world says that the planet is getting cooler.

Kevin Rudd
Yeah well President Trump on this question is stark raving mad. I realise that may not be regarded as your standard diplomatic reference to the leader of the free world but on the climate change question, he has abandoned the science all together. And mainstream Republicans and certainly mainstream Democrats fundamentally disagree with him. If there is a change in US administration, I think you will see climate change action move to the absolute centre of the domestic priorities of a Biden administration. And importantly, the international priorities. You in the United Kingdom will be hosting before too much longer the Conference of the Parties No.26, which will be critical in terms of increasing the ambition of the contracting parties to the Paris Agreement on climate change action, to begin to go down the road necessary to keep those temperature increases within 1.5 degrees centigrade. American leadership will be important. European leadership has been positive so far, but we’ll need to see the United Kingdom put its best foot forward as well given that you are the host country.

Kay Burley
You say that but of course Tony Abbott, who succeeded you as Prime Minister of Australia, is also a climate change denier.

Kevin Rudd
The climate change denialists seem to find their way into a range of conservative parties around the world regrettably. Not always the case — if you look in continental Europe, that’s basically a bipartisan consensus on climate change action — but certainly, Mr Abbott is notorious in this country, Australia, for describing climate change as, quote, absolute crap, and has been a climate change denialist for the better part of a decade and a half. And so if I was the United Kingdom, the government of the United Kingdom, would certainly not certainly not be letting Mr Abbott anywhere near a policy matter which has to do with the future of greenhouse gas emissions, or for that matter the highly contentious policy question of carbon tariffs which will be considered by the Europeans before much longer: tariffs imposed against those countries which are not lifting their weight globally to take their own national action on climate change.

Kay Burley
Very good at trade though isn’t he? He did a great deal with Japan, didn’t he when he was Prime Minister?

Kevin Rudd
Well, Mr Abbott is good at self-promotion on these questions. The reality is in the free trade agreements that we have agreed in recent years with China and with Japan, and with the Republic of Korea. These things are negotiations which spread over many years by many Australian governments. For example, the one we did with China took 10 years, begun by my predecessor, continued under me and concluded finally under the Abbott government. So I think for anyone to beat their chest and say that they are uniquely responsible as Prime Minister to bring about a particular trade agreement that belies the facts. And I think professional trade negotiators would have that view as well.

Kay Burley
You don’t like him very much then by the sounds of it.

Kevin Rudd
Well, I’m simply being objective. You asked me. I mean, I came to talk about climate change and you’ve asked me questions about Mr Abbott, and I’m simply being frank in my response. He’s a climate change denier. And on the trade policy front, ask your average trade negotiator who was material and bringing about these outcomes; perhaps the Australian trade ministers at the time, but also spread over multiple governments. These are complex negotiations. And they take a lot of effort to finally reach the point of agreement. So at a personal level, of course, Mr Abbott is a former Prime Minister of Australia, I wish him well, but you’ve asked me two direct questions and I’ve tried to give you a frank answer.

Kay Burley
And we’d like that from our politicians. We don’t always see that from politicians in the United Kingdom, I have to say, especially on this show. Let me ask you a final question before I let you go, if I may, please former prime minister. Why are you supporting Joe Biden? Is it all about climate change?

Kevin Rudd
Well, there are two reasons why on balance — I mean, I head an independent American think tank and I’m normally based in New York, so on questions of political alignment we are independent and we will work with any US administration. If you’re asking my personal view as a former Prime Minister of Australia and someone who’s spent 10 or 15 years working on climate change policy at home and abroad, then it is critical for the planet’s future that America return to the global leadership table. It’s the world’s second-largest emitter. Unless the Americans and the Chinese are able to bring about genuine global progress on bringing down their separate national emissions, then what we see in California, what we saw in Sydney in January, what you’ll see in other parts of the world in terms of extreme weather events, more intense drought, more intense cyclonic activity, et cetera, will simply continue. That’s a principal reason. But I think there’s a broader reason as well. It’s that we need to have America respected in the world again. America is a great country: enormous strengths, a vibrant democracy, a massive economy, technological prowess. But we also need to see American global leadership which can bring together friends and allies in the world to deal with the multiple global challenges that we face today, including the rise of China.

Kay Burley
Okay, it’s great to talk to you, Mr Rudd. First time I’ve interviewed you, I think, and thank you for being so honest and straightforward. And hopefully we’ll see you again before too long on the program. Thank you.

Kevin Rudd
Good to be with you.

The post Sky UK: Green Recovery, Tony Abbott and Donald Trump appeared first on Kevin Rudd.

Planet DebianBastian Blank: Salsa hosted 1e6 CI jobs

Today, Salsa hosted its 1,000,000th CI job. The prize for hitting the target goes to the Cloud team. The job itself was not that interesting, but it was successful.

Planet DebianBits from Debian: Debian Local Groups at DebConf20 and beyond

There are a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

There was a session about Debian Local Teams at DebConf20, and it generated quite a bit of constructive discussion in the live stream (recording available at https://meetings-archive.debian.net/pub/debian-meetings/2020/DebConf20/), in the session's Etherpad and in the IRC channel (#debian-localgroups). This article is an attempt at summarizing the key points that were raised during that discussion, as well as the plans for future actions to support new or existing Debian Local Groups and the possibility of setting up a local group support team.

Pandemic situation

During a pandemic it may seem strange to discuss offline meetings, but this is a good time to be planning things for the future. At the same time, the current situation makes it more important than before to encourage local interaction.

Reasoning for local groups

Debian can seem scary from the outside. Already having a connection to Debian - especially to people directly involved in it - seems to be the way most contributors arrive. But if one doesn't have such a connection, it is not that easy; Local Groups help with that by improving networking.

Local groups are incredibly important to the success of Debian since they often help with translations, making us more diverse, support, setting up local bug squashing sprints, establishing a local DebConf team along with miniDebConfs, getting sponsors for the project and much more.

The existence of Local Groups also facilitates access to "swag" like stickers and mugs, since people do not always have the time to deal with the process of finding a supplier to actually get those made. Active local groups might facilitate that by organizing the related logistics.

How to deal with local groups, how to define a local group

Debian gathers the information about Local Groups in its Local Groups wiki page (and subpages). Other organisations also have their own schemes, some of them featuring a map, blogs, or clear rules about what constitutes a local group. In the case of Debian there is no predefined set of "rules", even about the group name. That is perfectly fine: we assume that certain local groups may be very small, or temporary (created around a certain time when they plan several activities, and then becoming silent). However, the way the groups are named and how they are listed on the wiki page sets expectations with regards to what kinds of activities they involve.

For this reason, we encourage all the Debian Local Groups to review their entries in the Debian wiki and keep them current (e.g. add a line "Status: Active (2020)"), and we encourage informal groups of Debian contributors that somehow "meet" to create a new entry in the wiki page, too.

What can Debian do to support Local Groups

Having a centralized database of groups is good (if kept up-to-date), but not enough. We'll explore other ways of propagation and increasing visibility, like organising the logistics of printing/sending swag and facilitating access to funding for Debian-related events.

Continuation of efforts

Efforts shall continue regarding Local Groups. Regular meetings are happening every two or three weeks; interested people are encouraged to explore some other relevant DebConf20 talks (Introducing Debian Brasil, Debian Academy: Another way to share knowledge about Debian, An Experience creating a local community on a small town), websites like Debian flyers (including other printed material such as cubes and stickers), visit the events section of the Debian website and the Debian Locations wiki page, and participate in the IRC channel #debian-localgroups on OFTC.

Sociological ImagesOfficer Friendly’s Adventures in Wonderland

Many of us know the Officer Friendly story. He epitomizes liberal police virtues. He seeks the public’s respect and willing cooperation to follow the law, and he preserves their favor with lawful enforcement.

Norman Rockwell’s The Runaway, 1958

The Officer Friendly story also inspired contemporary reforms that seek and preserve public favor, including what most people know as Community Policing. Norman Rockwell’s iconic painting is an idealized depiction of this narrative. Officer Friendly sits in full uniform. His blue shirt contrasts sharply with the black boots, gun, and small ticket book that blend together below the lunch counter. He is a paternalistic guardian. The officer’s eyes are fixed on the boy next to him. The lunch counter operator surveying the scene seems to smirk. All of them do, in fact. And all of them are White. The original was painted from the White perspective and highlighted the harmonious relationship between the officer and the boy. But for some it may be easy to imagine a different depiction: a hostile relationship between a boy of color and an officer in the 1950s and a friendly one between a White boy and an officer now.

Desmond Devlin (Writer) and Richard Williams’s (Artist) The Militarization of the Police Department, a painting parody of Rockwell’s The Runaway, 2014

The parody of Rockwell’s painting offers us a visceral depiction of contemporary urban policing. Both pictures depict different historical eras and demonstrate how police have changed. Officer Unfriendly is anonymous, of unknown race, and presumably male. He is prepared for battle, armed with several weapons that extend beyond his imposing frame. Officer Unfriendly is outfitted in tactical military gear with “POLICE” stamped across his back. The images also differ in their depictions of the boy’s race and his relationship to the officer. Officer Unfriendly appears more punitive than paternalistic. He looms over the Black boy sitting on the adjacent stool and peers at him through a tear gas mask. The boy and White lunch counter operator back away in fright. All of the tenderness in the original has given way to hostility in this parody.

Inspired by the critical race tradition, my new project “Officer Friendly’s Adventures in Wonderland: A Counter-Story of Race Riots, Police Reform, and Liberalism” employs composite counter-storytelling to narrate the experiences of young men of color in their explosive encounters with police. Counter-stories force dominant groups to see the world through the “Other’s” (non-White person’s) eyes, thereby challenging their preconceptions. I document the evolution of police-community relations in the last eighty years, and I reflect on the interrupted career of our protagonist, Officer Friendly. He worked with the Los Angeles Police Department (LAPD) for several stints primarily between the 1940s and 1990s.

My story focuses on Los Angeles, a city renowned for its police force and riot history. This story is richly informed by ethnographic field data and is further supplemented with archival and secondary historical data. It complicates the nature of so-called race riots, highlights how Officer Friendly was repeatedly evoked in the wake of these incidents, and reveals the pressures on LAPD officials to favor increasingly unfriendly police tactics. More broadly, the story of Officer Friendly’s embattled career raises serious questions about how to achieve racial justice. This work builds on my recently published coauthored book, The Limits of Community Policing, and can shape future critical race scholarship and historical and contemporary studies of police-community relations.

Daniel Gascón is an assistant professor of sociology at the University of Massachusetts Boston. For more on his latest work, follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

LongNowTime’s Arrow Flies through 500 Years of Classical Music, Physicists Say

A new statistical study of 8,000 musical compositions suggests that there really is a difference between music and noise: time-irreversibility. From The Smithsonian:

Noise can sound the same played forwards or backward in time, but composed music sounds dramatically different in those two time directions.

Compared with systems made of millions of particles, a typical musical composition consisting of thousands of notes is relatively short. Counterintuitively, that brevity makes statistically studying most music much harder, akin to determining the precise trajectory of a massive landslide based solely on the motions of a few tumbling grains of sand. For this study, however, [Lucas Lacasa, a physicist at Queen Mary University of London] and his co-authors exploited and enhanced novel methods particularly successful at extracting patterns from small samples. By translating sequences of sounds from any given composition into a specific type of diagrams or graphs, the researchers were able to marshal the power of graph theory to calculate time irreversibility.

In a time-irreversible music piece, the sense of directionality in time may help the listener generate expectations. The most compelling compositions, then, would be those that balance between breaking those expectations and fulfilling them—a sentiment with which anyone anticipating a catchy tune’s ‘hook’ would agree.
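
For a rough sense of how such a calculation can work, here is a small illustrative sketch in Python (assuming the horizontal-visibility-graph approach Lacasa has used in earlier work; this is not the paper's exact procedure):

# Score time-irreversibility of a sequence by comparing the in- and
# out-degree distributions of its directed horizontal visibility graph.
from collections import Counter
import math

def hvg_degrees(series):
    n = len(series)
    out_deg, in_deg = [0] * n, [0] * n
    for i in range(n):
        blocker = -math.inf  # tallest value seen strictly between i and j
        for j in range(i + 1, n):
            if blocker < min(series[i], series[j]):
                out_deg[i] += 1  # edge i -> j, following time's arrow
                in_deg[j] += 1
            blocker = max(blocker, series[j])
            if series[j] >= series[i]:
                break  # j now blocks everything beyond it from i's view
    return out_deg, in_deg

def kl_divergence(p, q):
    # Crude KL estimate between two degree histograms; smoothing omitted.
    tp, tq = sum(p.values()), sum(q.values())
    return sum((c / tp) * math.log((c / tp) / (q[k] / tq))
               for k, c in p.items() if k in q)

notes = [60, 62, 64, 62, 67, 65, 64, 60]  # toy pitch sequence
out_deg, in_deg = hvg_degrees(notes)
print(kl_divergence(Counter(out_deg), Counter(in_deg)))  # 0 if reversible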

Worse Than FailureCodeSOD: Get My Switch

You know how it is. The team is swamped, so you’ve pulled in some junior devs, given them the bare minimum of mentorship, and then turned them loose. Oh, sure, there are code reviews, but it’s like, you just glance at it, because you’re already so far behind on your own development tasks and you’re sure it’s fine.

And then months later, if you’re like Richard, the requirements have changed, and now you’ve got to revisit the junior’s TypeScript code to make some changes.

		switch (false) {
			case (this.fooIsConfigured() === false && this.barIsConfigured() === false):
				this.contactAdministratorText = 'Foo & Bar';
				break;
			case this.fooIsConfigured():
				this.contactAdministratorText = 'Bar';
				break;
			case this.barIsConfigured():
				this.contactAdministratorText = 'Foo';
				break;
		}

We’ve seen more positive versions of this pattern before, where we switch on true. We’ve even seen the false version of this switch before.

What makes this one interesting, to me, is just how dedicated it is to this inverted approach to logic: if it’s false that “foo” is false and “bar” is false, then obviously they’re both true, thus we output a message to that effect. If one of those is false, we need to figure out which one that is, and then do the opposite, because if “foo” is false, then clearly “bar” must be true, so we output that.
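
For contrast, here is a sketch of the presumed intent stated positively (assuming the same fooIsConfigured/barIsConfigured helpers, and that the message is meant to name whatever is configured):

		// State the conditions directly instead of switching on false.
		if (this.fooIsConfigured() && this.barIsConfigured()) {
			this.contactAdministratorText = 'Foo & Bar';
		} else if (this.barIsConfigured()) {
			this.contactAdministratorText = 'Bar';
		} else if (this.fooIsConfigured()) {
			this.contactAdministratorText = 'Foo';
		}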

Richard was able to remove this code, and then politely suggest that maybe they should be more diligent in their code reviews.

Planet DebianJoey Hess: comically bad shipping estimates and middlemen

My inverter has unfortunately died, and I wanted to replace it with the same model. Ideally before I lose the contents of the fridge. It's a 24v inverter, which is not at all as easy to find a replacement for as a 12v inverter would be.

Somehow Walmart was the only retailer that had it available with a delivery estimate: Just 2 days.

It's the second day now, with no indication they've shipped it. I noticed the "sold and shipped by Zoro", so went and found it on that website.

So, the reality is it ships direct from China via container ship. As does every product from Zoro, which all show as 2 day delivery on Walmart's website.

I don't think this is a pandemic thing. I think it's a trying to compete with Amazon and failing thing.


My other comically bad shipping estimate this pandemic was from Amazon though. There was a run this summer on kayaks, because social distancing is great on the water. I found a high quality inflatable kayak.

Amazon said "only 2 left in stock" and promised delivery in 1 week. One week later, it had not shipped, and they updated the delivery estimate forward 1 week. A week after that, ditto.

Eventually I bought a new model from the same manufacturer, Advanced Elements. Unfortunately, that kayak exploded the second time I inflated it, due to a manufacturing defect.

So I got in touch with Advanced Elements and they offered a replacement. I asked if, instead, they maybe still had any of the older model of kayak I had tried to order. They checked their warehouse, and found "the last one" in a corner somewhere.

No shipping estimate was provided. It arrived in 3 days.

,

Cryptogram New Bluetooth Vulnerability

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with a Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road, or perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at one point. When they are, they’ll most likely be integrated as firmware or operating system updates for Bluetooth capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different timelines, and some may not prioritize security patches as much as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

Planet DebianMolly de Blanc: “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” W. Quinn

There are a lot of interesting and valid things to say about the philosophy and actual arguments of the “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” by Warren Quinn. Unfortunately for me, none of them are things I feel particularly inspired by. I’m much more attracted to the many things implied in this paper. Among them are the role of social responsibility in making moral decisions.

At various points in the text, Quinn makes brief comments about how we have roles that we need to fulfill for the sake of society. These roles carry with them responsibilities that may supersede our regular moral responsibilities. Examples Quinn makes include being a private life guard (and being responsible for the life of one particular person) and being a trolley driver (and your responsibility is to make sure the train doesn’t kill anyone). This is part of what has led to me brushing Quinn off as another classist. Still, I am interested in the question of whether social responsibilities are more important than moral ones or whether there are times when this might occur.

One of the things I maintain is that we cannot be the best versions of ourselves because we are not living in societies that value our best selves. We survive capitalism. We negotiate climate change. We make decisions to trade the ideal for the functional. For me, this frequently means I click through terms of service, agree to surveillance, and partake in the use and proliferation of oppressive technology. I also buy an iced coffee that comes in a single use plastic cup; I shop at the store with questionable labor practices; I use Facebook.  But also, I don’t give money to panhandlers. I see suffering and I let it pass. I do not get involved or take action in many situations because I have a pass to not. These things make society work as it is, and they make me work within society.

This is a self-perpetuating, mutually-abusive, co-dependent relationship. I must tell myself stories about how it is okay that I am preferring the status quo, that I am buying into the system, because I need to do it to survive within it and that people are relying on the system as it stands to survive, because that is how they know to survive.

Among other things, I am worried about the psychic damage this causes us. When we view ourselves as social actors rather than moral actors, we tell ourselves it is okay to take non-moral actions (or inactions); however, we carry within ourselves intuitions and feelings about what is right, just, and moral. We ignore these in order to act in our social roles. From the perspective of the individual, we’re hurting ourselves and suffering for the sake of benefiting and perpetuating a caustic society. From the perspective of society, we are perpetuating something that is not just less than ideal, but actually not good because it is based on allowing suffering.[1]

[1] This is for the sake of this text. I don’t know if I actually feel that this is correct.

My goal was to make this only 500 words, so I am going to stop here.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 237.25 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

August was a regular LTS month once again, even though it was only our 2nd month with Stretch LTS.
At the end of August some of us participated in DebConf 20 online where we held our monthly team meeting. A video is available.
As of now this video is also the only public resource about the LTS survey we held in July, though a written summary is expected to be released soon.

The security tracker currently lists 56 packages with a known CVE and the dla-needed.txt file has 55 packages needing an update.

Thanks to our sponsors

Sponsors that recently joined are in bold.

Worse Than FailureCodeSOD: //article title here

Menno was reading through some PHP code and was happy to see that it was thoroughly commented:

function degToRad ($value) {
    return $value * (pi()/180); // convert excel timestamp to php date
}

Today's code is probably best explained in meme format:
expanding brain meme: write clear comments, comments recap the exact method name, write no comments, write wrong comments

As Menno summarizes: "It's a function that has the right name, does the right thing, in the right way. But I'm not sure how that comment came to be."
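
If the comment is simply stale, the boring fix is to make it match the code (a trivial sketch):

function degToRad ($value) {
    return $value * (pi()/180); // convert degrees to radians
}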

Planet DebianRussell Coker: More About the PowerEdge R710

I’ve got the R710 (mentioned in my previous post [1]) online. When testing the R710 at home I noticed that sometimes the VGA monitor I was using would start flickering in some parts of the BIOS setup; it seemed that the horizontal sync wasn’t working properly. It didn’t seem to be a big deal at the time. When I deployed it, the KVM display that I had planned to use with it mostly didn’t display anything. When the display was working the KVM keyboard wouldn’t work (and would prevent a regular USB keyboard from working if they were both connected at the same time). The VGA output of the R710 also wouldn’t work with my VGA->HDMI device so I couldn’t get it working with my portable monitor.

Fortunately the Dell front panel has a display and tiny buttons that allow configuring the IDRAC IP address, so I was able to get IDRAC going. One thing Dell really should do is allow the down button to wrap from 0 to 9 when entering numbers, which would make it easier to enter 8.8.8.8 for the DNS server. Another thing Dell should do is give the default gateway a default value based on the IP address and netmask of the server.

When I got IDRAC going it was easy to set up a serial console, boot from a rescue USB device, create a new initrd with the driver for the MegaRAID controller, and then reboot into the server image.

When I transferred the SSDs from the old server to the newer Dell server the problem I had was that the Dell drive caddies had no holes in suitable places for attaching SSDs. I ended up just pushing the SSDs in so they are hanging in mid air attached only by the SATA/SAS connectors. Plugging them in took the space of the drive bay above, so instead of having 2*3.5″ disks I have 1*2.5″ SSD and need the extra space to get my hand in. The R710 is designed for 6*3.5″ disks and I’m going to have trouble if I ever want to have more than 3*2.5″ SSDs. Fortunately I don’t think I’ll need more SSDs.

After booting the system I started getting alerts about a “fault” in one SSD, with no detail on what the fault might be. My guess is that the SSD in question is M.2 and it’s in an M.2 to regular SATA adaptor which might have some problems. The data seems fine though, a BTRFS scrub found no checksum errors. I guess I’ll have to buy a replacement SSD soon.

I configured the system to use the “nosmt” kernel command line option to disable hyper-threading (which won’t provide much performance benefit but which makes certain types of security attacks much easier). I’ve configured BOINC to run on 6/8 CPU cores and surprisingly that didn’t cause the fans to be louder than when the system was idle. It seems that a system that is designed for 6 SAS disks doesn’t need a lot of cooling when run with SSDs.
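
For reference, this is the sort of configuration change needed on a Debian system with GRUB (a sketch; the existing contents of the variable will differ):

# /etc/default/grub: append nosmt to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet nosmt"
# then regenerate the GRUB config and reboot
update-grub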

Update: It’s an R710 not a T710. I mostly deal with Dell Tower servers and typed the wrong letter out of habit.

,

Krebs on SecurityDue Diligence That Money Can’t Buy

Most of us automatically put our guard up when someone we don’t know promises something too good to be true. But when the too-good-to-be-true thing starts as our idea, sometimes that instinct fails to kick in. Here’s the story of how companies searching for investors to believe in their ideas can run into trouble.

Nick is an investment banker who runs a firm that helps raise capital for its clients (Nick is not his real name, and like other investment brokers interviewed in this story he spoke with KrebsOnSecurity on condition of anonymity). Nick’s company works primarily in the mergers and acquisitions space, and his job involves advising clients about which companies and investors might be a good bet.

In one recent engagement, a client of Nick’s said they’d reached out to an investor from Switzerland — The Private Office of John Bernard — whose name was included on a list of angel investors focused on technology startups.

“We ran into a group that one of my junior guys found on a list of data providers that compiled information on investors,” Nick explained. “I told them what we do and said we were working with a couple of companies that were interested in financing, and asked them to send some materials over. The guy had a British accent, claimed to have made his money in tech and in the dot-com boom, and said he’d sold a company to Geocities that was then bought by Yahoo.”

But Nick wasn’t convinced Mr. Bernard’s company was for real. Nick and his colleagues couldn’t locate the company Mr. Bernard claimed to have sold, and while Bernard said he was based in Switzerland, virtually all of his staff were listed on LinkedIn as residing in Ukraine.

Nick told his clients about his reservations, but each nevertheless was excited that someone was finally interested enough to invest in their ideas.

“The CEO of the client firm said, ‘This is great, someone is willing to believe in our company’,” Nick said. “After one phone call, he made an offer to invest tens of millions of dollars. I advised them not to pursue it, and one of the clients agreed. The other was very gung ho.”

When companies wish to link up with investors, what follows involves a process known as “due diligence” wherein each side takes time to research the other’s finances, management, and any lurking legal liabilities or risks associated with the transaction. Typically, each party will cover their own due diligence costs, but sometimes the investor or the company that stands to benefit from the transaction will cover the associated fees for both parties.

Nick said he wasn’t surprised when Mr. Bernard’s office insisted that its due diligence fees of tens of thousands of dollars be paid up front by his client. And he noticed the website for the due diligence firm that Mr. Bernard suggested using — insideknowledge.ch — also was filled with generalities and stock photos, just like John Bernard’s private office website.

“He said we used to use big accounting firms for this but found them to be ineffective,” Nick said. “The company they wanted us to use looked like a real accounting firm, but we couldn’t find any evidence that they were real. Also, we asked to see an investment portfolio. He said he’s invested in over 30 companies, so I would expect to see a document that says, “here’s the various companies we’ve invested in.” But instead, we got two recommendation letters on letterhead saying how great these investors were.”

KrebsOnSecurity located two other investment bankers who had similar experiences with Mr. Bernard’s office.

“A number of us have been comparing notes on this guy, and he never actually delivers,” said one investment banker who asked not to be named because he did not have permission from his clients. “In each case, he agreed to invest millions with no push back, the documentation submitted from their end was shabby and unprofessional, and they seem focused on companies that will write a check for due diligence fees. After their fees are paid, the experience has been an ever increasing and inventive number of reasons why the deal can’t close, including health problems and all sorts of excuses.”

Mr. Bernard’s investment firm did not respond to multiple requests for comment. The one technology company this author could tie to Mr. Bernard was secureswissdata.com, a Swiss concern that provides encrypted email and data services. The domain was registered in 2015 by Inside Knowledge. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG.

SafeSwiss co-CEO and Secure Swiss Data founder David Bruno said he couldn’t imagine that Mr. Bernard would be involved in anything improper.

“I can confirm that I know John Bernard and have always found him very honourable and straight forward in my dealings with him as an investor,” Bruno said. “To be honest with you, I struggle to believe that he would, or would even need to be, involved in the activity you mentioned, and quite frankly I’ve never heard about those things.”

DUE DILIGENCE

John Bernard is named in historic WHOIS domain name registration records from 2015 as the owner of the due diligence firm insideknowledge.ch. Another “capital investment” company tied to John Bernard’s Swiss address is liftinvest.ch, which was registered in November 2017.

Curiously, in May 2018, its WHOIS ownership records switched to a new name with the same initials: one “Jonathan Bibi,” with an address in the offshore company haven of Seychelles. Likewise, Mr. Bibi was listed as a onetime owner of the domain for Mr. Bernard’s company — the-private-office.ch — as well as johnbernard.ch.

Running a reverse WHOIS search through domaintools.com [an advertiser on this site] reveals several other interesting domains historically tied to a Jonathan Bibi from the Seychelles. Among those is acheterdubitcoin.org, a business that was blacklisted by French regulators in 2018 for promoting cryptocurrency scams.

Another Seychelles concern tied to Mr. Bibi was effectivebets.com, which in 2017 and 2018 promoted sports betting via cryptocurrencies and offered tips on picking winners.

A Google search on Jonathan Bibi from Seychelles reveals he was listed as a respondent in a lawsuit filed in 2018 by the State of Missouri, which named him as a participant in an unlicensed “binary options” investment scheme that bilked investors out of their money.

Jonathan Bibi from Seychelles also was named as the director of another binary options scheme called the GoldmanOptions scam that was ultimately shut down by regulators in the Czech Republic.

Jason Kane is an attorney with Peiffer Wolf, a litigation firm that focuses on investment fraud. Kane said companies bilked by small-time investment schemes rarely pursue legal action, mainly because the legal fees involved can quickly surpass the losses. What’s more, most victims will likely be too ashamed to come forward.

“These are cases where you might win but you’ll never collect any money,” Kane said. “This seems like an investment twist on those fairly simple scams we all can’t believe people fall for, but as scams go this one is pretty good. Do this a few times a year and you can make a decent living and no one is really going to come after you.”

If you liked this post, please see Part II of the investigation, Who is John Bernard?

Cory DoctorowIP

This week on my podcast, I read the first half of my latest Locus Magazine column, “IP,” the longest, most substantial column I’ve written in my 14 years on Locus‘s masthead.

IP explores the history of how we have allowed companies to control more and more of our daily lives, and how “IP” has come to mean “any law that I can invoke that allows me to control the conduct of my competitors, critics, and customers.”

It represents a major realization on my part after decades of writing, talking and thinking about this stuff. I hope you give it a listen and/or a read.

MP3

Sociological ImagesStop Trying to Make Fetch Happen: Social Theory for Shaken Routines

It is hard to keep up habits these days. As the academic year starts up with remote teaching, hybrid teaching, and rapidly-changing plans amid the pandemic, many of us are thinking about how to design new ways to connect now that our old habits are disrupted. How do you make a new routine or make up for old rituals lost? How do we make them stick and feel meaningful?

Social science shows us how these things take time, and in a world where we would all very much like a quick solution to our current social problems, it can be tough to sort out exactly what new rules and routines can do for us.

For example, The New York Times recently profiled “spiritual consultants” in the workplace – teams that are tasked with creating a more meaningful and communal experience on the job. This is part of a larger social trend of companies and other organizations implementing things like mindfulness practices and meditation because they…keep workers happy? Foster a sense of community? Maybe just keep the workers just a little more productive in unsettled times?

It is hard to talk about the motives behind these programs without getting cynical, but that snark points us to an important sociological point. Some of our most meaningful and important institutions emerge from social behavior, and it is easy to forget how hard it is to design them into place.

This example reminded me of the classic Social Construction of Reality by Berger and Luckmann, who argue that some of our strongest and most established assumptions come from habit over time. Repeated interactions become habits, habits become routines, and suddenly those routines take on a life of their own that becomes meaningful to the participants in a way that “just is.” Trust, authority, and collective solidarity fall into place when people lean on these established habits. In other words: on Wednesdays we wear pink.

The challenge with emergent social institutions is that they take time and repetition to form. You have to let them happen on their own, otherwise they don’t take on the same sense of meaning. Designing a new ritual often invites cringe, because it skips over the part where people buy into it through their collective routines. This is the difference between saying “on Wednesdays we wear pink” and saying

“Hey team, we have a great idea that’s going to build office solidarity and really reinforce the family dynamic we’ve got going on. We’re implementing: Pink. Wednesdays.”

All of our usual routines are disrupted right now, inviting fear, sadness, anger, frustration, and disappointment. People are trying to persist with the rituals closest to them, sometimes to the extreme detriment of public health (see: weddings, rallies, and ugh). I think there’s some good sociological advice for moving through these challenges for ourselves and our communities: recognize those emotions, trust in the routines and habits that you can safely establish for yourself and others, and know that they will take a long time to feel really meaningful again, but that doesn’t mean they aren’t working for you. In other words, stop trying to make fetch happen.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Kevin RuddA Rational Fear Podcast: Climate Action

INTERVIEW AUDIO
A RATIONAL FEAR PRESENTS:
‘GREATEST MORAL PODCAST OF OUR GENERATION’
WITH DAN ILIC
RECORDED 2 SEPTEMBER 2020
PUBLISHED 14 SEPTEMBER 2020

The post A Rational Fear Podcast: Climate Action appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Nice Save

Since HTTP is fundamentally stateless, developers have found a million ways to hack state into web applications. One of my "favorites" was the ASP.NET ViewState approach.

The ViewState is essentially a dictionary, where you can store any arbitrary state values you might want to track between requests. When the server outputs HTML to send to the browser, the contents of ViewState are serialized, hashed, and base-64 encoded and dumped into an <input type="hidden"> element. When the next request comes in, the server unpacks the hidden field and deserializes the dictionary. You can store most objects in it, if you'd like. The goal of this, and all the other WebForm state stuff was to make handling web forms more like handling forms in traditional Windows applications.
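
For the unfamiliar, here's roughly what using it looks like in a WebForms code-behind (a minimal sketch; the "Counter" key is made up):

// Values stored here are serialized into the hidden __VIEWSTATE field on
// render and restored from it on the next postback.
ViewState["Counter"] = 42;
// On a later request:
int counter = (int)(ViewState["Counter"] ?? 0);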

It's "great". It's extra great when its default behavior is to ensure that the full state for every UI widget on the page. The incident which inpsired Remy's Law of Requirements Gathering was a case where our users wanted like 500 text boxes on a page, and we blew out our allowed request sizes due to gigundous ViewState because, at the time, we didn't know about that "helpful" feature.

Ryan N inherited some code which uses this, and shockingly, ViewState isn't the biggest WTF here:

protected void ValidateForm(object sender, EventArgs e)
{
    bool Save = true;
    if (sInstructions == string.Empty)
    {
        sInstructions = string.Empty;
    }
    else
    {
        Save = false;
    }
    if (Save) {...}
}

Let me just repeat part of that:

if (sInstructions == string.Empty)
{
    sInstructions = string.Empty;
}

If sInstructions is empty, set it to be empty. If sInstructions is not empty, then we set… Save to false? Reading this code, it implies if we have instructions we shouldn't save? What the heck is this sInstructions field anyway?

Well, Ryan explains: "sInstructions is a ViewState string variable, it holds error messages."

I had to spend a moment thinking "wait, why is it called 'instructions'?" But the method name is called ValidateForm, which explains it. It's because it's user instructions, as in, "please supply a valid email address". Honestly, I'm more worried about the fact that this approach is starting to make sense to me, than anything else.
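
For contrast, a sketch of the same check stated directly (assuming, per Ryan, that sInstructions is the ViewState string holding validation error messages):

protected void ValidateForm(object sender, EventArgs e)
{
    // Save only when validation produced no error messages.
    if (sInstructions == string.Empty)
    {
        // ... save ...
    }
}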

,

Kevin RuddDoorstop: Queensland’s Covid Response

E&OE TRANSCRIPT
DOORSTOP INTERVIEW
SUNSHINE COAST
13 SEPTEMBER 2020

Topic: The Palaszczuk Government’s Covid-19 response

Journalist: Kevin, what’s your stance on the border closures at the moment? Do you think it’s a good idea to open them up, or to keep them closed?

Kevin Rudd: Anyone who’s honest will tell you these are really hard decisions and it’s often not politically popular to do the right thing. But I think Annastacia Palaszczuk, faced with really hard decisions, has made the right on-balance judgment, because she’s putting first priority on the health security advice for all Queenslanders. That I think is fundamental. You see, in my case, I’m a Queenslander, proudly so, but the last five years I’ve lived in New York. Let me tell you what it’s like living in a city like New York, where they threw health security to the wind. It was a complete debacle. And so many thousands of people died as a result. So I think we’ve got to be very cautious about disregarding the health security advice. So when I see the Murdoch media running a campaign to force Premier Palaszczuk to open the borders of Queensland, I get suspicious about what’s underneath all that. Here’s the bottom line: Do you want your Premier to make a decision based on political advice, or based on media bullying? Or do you want your Premier to make a decision based on what the health professionals are saying? And from my point of view — as someone who loves this state, loves the Sunshine Coast, grew up here and married to a woman who spent her life in business employing thousands of people — on balance, what Queenslanders want is for them to have their health needs put first. And you know something? It may not be all that much longer that we need to wait. But as soon as we say, ‘well, it’s time for politics to take over, it’s time for the Murdoch media to take over and dictate when elected governments should open the borders or not’, I think that’s a bad day for Queensland.

Journalist: Do you think the recession and the economic impact is something that Australia will be able to mitigate post pandemic?

Kevin Rudd: Well, the responsibility for mitigating the national recession lies in the hands of Prime Minister Morrison. I know something about what these challenges mean. I was Prime Minister during the Global Financial Crisis when the rest of the world went into recession. So he has it within his hands to devise the economic plans necessary for the nation; state governments are responsible primarily for the health of their citizens. That’s the division of labour in our commonwealth. So, it’s either you have a situation where Premiers are told that it’s in your political interests, because the Murdoch media has launched a campaign to throw open your borders, or you’re attentive to the health policy advice from the health professionals to keep all Queenslanders safe. It’s hard, it’s tough, but on balance I think she’s probably made the right call. In fact, I’m pretty sure she has.

The post Doorstop: Queensland’s Covid Response appeared first on Kevin Rudd.

MESetting Up a Dell R710 Server with DRAC6

I’ve just got a Dell R710 server for LUV and I’ve been playing with the configuration options. Here’s a list of what I’ve done and recommendations on how to do things. I decided not to try to write a step-by-step guide as that doesn’t suit the situation; I think that a list of how things are broken and what to avoid is more useful.

BIOS

Firstly with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won’t want to upgrade the BIOS (why risk breaking something that’s working?), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run; it’s ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so it’s best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so upgrading your distribution at the same time is a bad idea (Debian is designed to not lose data in this situation so it’s not too bad).

IDRAC

IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the R710 had 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration.

Web Interface

By default IDRAC will run a web server on the IP address it gets from DHCP; you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge R710 recommends IE 6.

To get an SSL cert that matches the name you want to use (and doesn’t give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC. The only field that matters is the CN (Common Name); the rest just have to be something that Letsencrypt will accept. Certbot has the option “--config-dir /etc/letsencrypt-drac” to specify an alternate config directory; the SSL key for DRAC should be entirely separate from the SSL key for other stuff. Then use the “--csr” option to specify the path of the CSR file. When you run certbot the output file you want will have a name in the format “*_chain.pem“. You then have to upload that to IDRAC to have it used. Doing this for every renewal is a major pain given the short lifetime of Letsencrypt certificates. Hopefully a more recent version of IDRAC has Certbot built in.
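
Something like the following should work (a sketch only: the domain and paths are examples, and the challenge method will depend on your setup):

# generate the CSR on the DRAC first, then:
certbot certonly --config-dir /etc/letsencrypt-drac \
 --csr /path/to/drac.csr --standalone -d drac.example.com
# upload the resulting *_chain.pem file via the IDRAC web interface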

When you access RAC via ssh (see below) you can run the command “racadm sslcsrgen” to generate a CSR that can be used by certbot. So it’s probably possible to write expect scripts to get that CSR, run certbot, and then put the ssl certificate back. I don’t expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I’d script it.

SSH

The web interface allows configuring ssh access which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like “TERM=vt100 ssh root@drac“. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.

There is a limit to the number of open “sessions” for managing IDRAC, when you ssh to the IDRAC you can run “racadm getssninfo” to get a list of sessions and “racadm closessn -i NUM” to close a session. The closessn command takes a “-a” option to close all sessions but that just gives an error saying that you can’t close your own session because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem.

I couldn’t find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn’t get a match so there’s a reasonable chance that it’s unique to the server (please comment if you know more about this).

Don’t Use Graphical Console

The R710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn’t work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running the .jnlp file for the console; by default the web browser won’t run this, so you download the .jnlp file and pass it as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was ‘Exception in thread “Timer-0” java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated‘; Google didn’t turn up any solutions to this. Java 8 didn’t have that problem but had a “connection failed” error that some people reported as being related to the SSL key, but replacing the SSL key for the web server didn’t help. The suggestion of running a VM with an old web browser to access IDRAC didn’t appeal. So I gave up on this. Presumably a Windows VM running IE6 would work OK for this.

Serial Console

Fortunately IDRAC supports a serial console. Here’s a page summarising serial console setup for DRAC [1]. Once you have done that, put “console=tty0 console=ttyS1,115200n8” on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console remotely you can ssh in and run the RAC command “console com2” (there is no option for using a different com port). The serial port seems to be unavailable through the web interface.

If I was supporting many Dell systems I’d probably set up an ssh-to-JavaScript gateway to provide web based console access. It’s disappointing that Dell didn’t include this.

If you disconnect from an active ssh serial console then the RAC might keep the port locked, and any future attempts to connect to it will give the following error:

/admin1-> console com2
console: Serial Device 2 is currently in use

So far the only way I’ve discovered to get console access again after that is the command “racadm racreset“. If anyone knows of a better way please let me know. As an aside, having “racreset” be so close to “racresetcfg” (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea.

Host Based Management

deb http://linux.dell.com/repo/community/ubuntu xenial openmanage

The above apt sources.list line allows installing Dell management utilities (Xenial is old but they work on Debian/Buster). Probably the packages srvadmin-storageservices-cli and srvadmin-omacore will drag in enough dependencies to get it going.

Here are some useful commands:

# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0

RAID Control

The RAID controller is known as PERC (PowerEdge Raid Controller), and the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. This is statically linked and appears to have been built with different versions of libraries. The command “perccli show” gives an overview of the configuration, but the command “perccli /c0 show” (to give detailed information on controller 0) SEGVs and the kernel logs a “vsyscall attempted with vsyscall=none” message. Here’s an overview of the vsyscall emulation issue [2]. Basically I could add “vsyscall=emulate” to the kernel command line and slightly reduce security for the system to allow system calls from perccli that are called from old libc code to work, or I could run code from a dubious source as root.

Some versions of IDRAC have a “racadm raid” command that can be run from a ssh session to perform RAID administration remotely; mine doesn’t have that. As an aside, the help for the RAC system doesn’t list all commands and the Dell documentation is difficult to find, so blog posts from other sysadmins are the best source of information.

I have configured IDRAC to have all of the BIOS output go to the virtual serial console over ssh so I can see the BIOS prompt me for PERC administration but it didn’t accept my key presses when I tried to do so. In any case this requires downtime and I’d like to be able to change hard drives without rebooting.

I found vsyscall_trace on Github [3], it uses the ptrace interface to emulate vsyscall on a per process basis. You just run “vsyscall_trace perccli” and it works! Thanks Geoffrey Thomas for writing this!

Here are some perccli commands:

# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

When a disk is added to a PERC controller about 525MB of space is used at the end for RAID metadata. So when you create a RAID-0 with a single device as in the above example all disk data is preserved by default except for the last 525MB. I have tested this by running a BTRFS scrub on a disk from another system after making it into a RAID-0 on the PERC.

Update: It’s an R710 not a T710. I mostly deal with Dell Tower servers and typed the wrong letter out of habit.

Planet Linux AustraliaSimon Lyall: Talks from KubeCon + CloudNativeCon Europe 2020 – Part 1

Various talks I watched from their YouTube playlist.

Application Autoscaling Made Easy With Kubernetes Event-Driven Autoscaling (KEDA) – Tom Kerkhove

I’ve been using Keda a little bit at work. Good way to scale on random stuff. At work I’m scaling pods against length of AWS SQS Queues and as a cron. Lots of other options. This talk is a 9 minute intro. A bit hard to read the small font on the screen of this talk.
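
For reference, a minimal ScaledObject for queue-length scaling looks something like this (names and queue URL are made up, and the exact schema depends on the KEDA version):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker            # Deployment to scale
  triggers:
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
      queueLength: "5"      # target messages per replica
      awsRegion: us-east-1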

Autoscaling at Scale: How We Manage Capacity @ Zalando – Mikkel Larsen, Zalando SE

  • These guys have their own HPA replacement for scaling: Kube-metrics-adapter.
  • Outlines some new stuff in scaling in 1.18 and 1.19.
  • They also have a fork of the Cluster Autoscaler (although some of what it does seems to duplicate Amazon Fleets).
  • Have up to 1000 nodes in some of their clusters. Have to play with address space per node; also scale their control plane nodes vertically (control plane autoscaler).
  • Use the Vertical Pod Autoscaler, especially for things like Prometheus that vary by the size of the cluster. Have had problems with it scaling down too fast; they have some of their own custom changes in a fork.

Keynote: Observing Kubernetes Without Losing Your Mind – Vicki Cheung

  • Lots of metrics don’t cover what you want, and they get hard to maintain and complex
  • Monitor core user workflows (ie just test a pod launch and stop)
  • Tiny tools
    • 1 watches for events on cluster and logs them -> elastic
    • 2 watches container events -> elastic
    • End up with one timeline for a deploy/job covering everything
    • Empowers users to do their own debugging

Autoscaling and Cost Optimization on Kubernetes: From 0 to 100 – Guy Templeton & Jiaxin Shan

  • Intro to HPA and metric types. Plus some of the newer stuff like multiple metrics
  • Vertical pod autoscaler. Good for single pod deployments. Doesn’t work well with JVM based workloads.
  • Cluster Autoscaler.
    • A few things like using prestop hooks to give pods time to shutdown
    • pod priorties for scaling.
    • --expendable-pods-priority-cutoff to not expand for low-priority jobs
    • Using the priority-expander to try to expand spot instances first and then fall back to more expensive node types
    • Using mixed instance policy with AWS. Lots of instance types (same CPU/RAM though) to choose from.
    • Look at PodDisruptionBudget
    • Some other CA flags like --scale-down-utilization-threshold to look at.
  • Mention of Keda
  • Best return is probably tuning HPA
  • There is also another similar talk . Note the Male Speaker talks very slow so crank up the speed.

Keynote: Building a Service Mesh From Scratch – The Pinterest Story – Derek Argueta

  • Changed to Envoy as an HTTP proxy for incoming traffic
  • Wrote own extension to make feature complete
  • Also another project migrating to mTLS
    • Huge amount of work for Java.
    • Lots of work to repeat for other languages
    • Looked at getting Envoy to do the work
    • Ingress LB -> Inbound Proxy -> App
  • Used j2 to build the Static config (with checking, tests, validation)
  • Rolled out to put envoy in front of other services with good TLS termination default settings
  • Extra Mesh Use Cases
    • Infrastructure-specific routing
    • SLI Monitoring
    • http cookie monitoring
  • Became a platform that people wanted to use.
  • Solving one problem first and incrementally using other things. Many groups had similar problems. “Just a node in a mesh”.

Improving the Performance of Your Kubernetes Cluster – Priya Wadhwa, Google

  • Tools – Mostly tested locally with Minikube (she is a Minikube maintainer)
  • Minikube pause – Pause the Kubernetes system processes and leave apps running; good if the cluster isn’t changing.
  • Looked at some articles from Brendan Gregg
  • Ran USE Method against Minikube
  • eBPF BCC tools against Minikube
  • biosnoop – noticed lots of writes from etcd
  • KVM Flamegraph – Lots of calls from ioctl
  • Theory that etcd writes might be a big contributor
  • How to tune etcd writes (updated the --snapshot-count flag to various numbers but it didn’t seem to help)
  • Noticed CPU spikes every few seconds
  • “pidstat 1 60”. Noticed kubectl command running often. Running “kubectl apply addons” regularly
  • Suspected addon manager running often
  • Could increase addon manager polltime but then addons would take a while to show up.
  • But in Minikube not a problem cause minikube knows when new addons added so can run the addon manager directly rather than it polling.
  • 32% reduction in overhead from turning off addon polling
  • Also reduced the number of CoreDNS replicas to one.
  • pprof – Go profiling tool
  • Looked at kube-apiserver pprof data (see the sketch after this list)
  • It was spending lots of time dealing with incoming requests
  • Lots of requests from kube-controller-manager and kube-scheduler around leader-election
  • But Minikube is only running one of each. No need to elect a leader!
  • There is a flag to turn both off: --leader-elect=false
  • 18% reduction from reducing coredns to 1 and turning leader election off.
  • Back to looking at etcd overhead with pprof
  • writeFrameAsync in http calls
  • Theory: could increase --proxy-refresh-interval from 30s up to 120s. 70s looked like a good value, but she was unsure what the behavioural impact would be. She asked, and it didn’t appear to be a big problem.
  • 4% reduction in overhead
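For reference, here is a minimal sketch of one way to pull that kube-apiserver pprof data (my own illustration, not from the talk; it assumes profiling is enabled on the apiserver and kubectl is configured):

import subprocess

# Grab a 30s CPU profile from the apiserver's /debug/pprof endpoint,
# then inspect it offline with: go tool pprof profile.pb.gz
raw = subprocess.run(
    ["kubectl", "get", "--raw", "/debug/pprof/profile?seconds=30"],
    check=True, capture_output=True).stdout
with open("profile.pb.gz", "wb") as f:
    f.write(raw)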



Kevin RuddAustralian Jewish News: Rudd Responds To Critics

With the late Shimon Peres

Published in The Australian Jewish News on 12 September 2020

Since writing in these pages last month, I’ve been grateful to receive many kind messages from readers reflecting on my record in office. Other letters, including those published in the AJN, deserve a response.

First to Michael Gawenda, whose letter repeated the false assertion that I believe in some nefarious connection between Mark Leibler’s lobbying of Julia Gillard and the factional machinations that brought her to power. Challenged to provide evidence for this fantasy, Gawenda pointed to this passage in my book, The PM Years: “The meticulous work of moving Gillard from the left to the right on foreign policy has already begun in earnest more than a year before the coup”.

Hold the phone! Of course Gillard was on the move – on the US, on Israel, and even on marriage equality. On all these, unlike me, she had a socialist-left background and wanted to appeal to right-wing factional bosses. Even Gawenda must comprehend that a strategy on Gillard’s part does not equate to Leibler plotting my demise.

My complaint against Leibler is entirely different: his swaggering arrogance in insisting Labor back virtually every move by Benjamin Netanyahu, good or bad; that we remain silent over the Mossad’s theft of Australian passports to carry out an assassination, when they’d already been formally warned under Howard never to do it again; and his bad behaviour as a dinner guest at the Lodge when, having angrily disagreed with me over the passports affair, he leaned over the table and said menacingly to my face: “Julia is looking very good in the public eye these days, prime minister.”

There is a difference between bad manners and active conspiracy. Gawenda knows this. If he’d bothered to read my book, rather than flipping straight to the index, he would have found on page 293 the list of those actually responsible for plotting the coup. They were: Wayne Swan, Mark Arbib, David Feeney, Don Farrell and Stephen Conroy (with Bill Ludwig, Paul Howes, Bill Shorten, Tony Sheldon and Karl Bitar in supporting roles). All Labor Party members.

So what was Leibler up to? Unsurprisingly, he was cultivating Gillard as an influential contact in the government. Isn’t that, after all, what lobbyists do? They lobby. And Leibler, by his own account and Gawenda’s obsequious reporting, was very good at it.

Finally, I am disappointed by Gawenda’s lack of contrition for not contacting me before publication. This was his basic professional duty as a journalist. If he’d bothered, I could have dispelled his conspiracy theory inside 60 seconds. But, then again, it would have ruined his yarn.

A second letter, by Leon Poddebsky, asserted I engaged in “bombastic opposition to Israel’s blockade against the importation by Hamas of military and dual-purpose materials for its genocidal war against the Jewish state”.

I direct Mr Poddebsky to my actual remarks in 2010: “When it comes to a blockade against Gaza, preventing the supply of humanitarian aid, such a blockade should be removed. We believe that the people of Gaza … should be provided with humanitarian assistance.” It is clear my remarks were limited to curbs on humanitarian aid. The AJN has apologised for publishing this misrepresentation, and I accept that apology with no hard feelings.

Regarding the letter from David Singer, who denies that Netanyahu’s stalled West Bank plan constitutes “annexation”, I decline to debate him on ancient history. I only note that the term “annexation” is used by the governments of Australia and Britain, the European Union, the United Nations, the Israeli press and even the AJN.

Mr Singer disputes the illegality of annexation, citing documents from 1922. I direct him to the superseding Oslo II Accord of 1995 where Israel agreed “neither side shall initiate or take any step that will change the status of the West Bank and Gaza Strip pending the outcome of permanent status negotiations”.

Further, Mr Singer challenges my assertion that Britain’s Prime Minister agrees that annexation would violate international law. I direct Mr Singer to Boris Johnson’s article for Yedioth Ahronoth in July: “Annexation would represent a violation of International law. It would also be a gift to those who want to perpetuate old stories about Israel… If it does (go ahead), the UK will not recognise any changes to the 1967 lines, except those agreed by both parties.”

Finally, on Leibler’s continuing protestations about his behaviour at dinner, I will let others who know him better pass judgment on his character. More importantly, the net impact of Leibler’s lobbying has been to undermine Australia’s once-strong bipartisan support for Israel. His cardinal error, together with others, has been to equate support for Israel with support for Netanyahu’s policies. This may be tactically smart for their Likud friends, but it’s strategically dumb given the challenges Israel will face in the decades ahead.

The post Australian Jewish News: Rudd Responds To Critics appeared first on Kevin Rudd.

Sam VargheseWhen will Michael Hayden explain why the NSA did not predict 9/11?

As America marks the 19th anniversary of the destruction of the World Trade Centre towers by terrorists, it is a good time to ask when General Michael Hayden, head of the NSA at the time of 9/11, will come forward and explain why the agency was unable to detect the chatter among those who had banded together to wreak havoc in the US.

Before I continue, let me point out that nothing of what appears below is new; it was all reported some four years ago, but mainstream media have conspicuously avoided pursuing the topic because it would probably trouble some people in power.

The tale of how Hayden came to throw out a system known as ThinThread, devised by probably the most brilliant metadata analyst, William Binney, at that time the technical director of the NSA, has been told in a searing documentary titled A Good American.

Binney devised the system which would look at links between all people around the globe and added in measures to prevent violations of Americans’ privacy. Cryptologist Ed Loomis, analyst Kirk Wiebe, software engineer Tom Drake and Diane Roark, senior staffer at the House Intelligence Committee, worked along with him.

A skunkworks project, Binney’s ThinThread system handled data as it was ingested, unlike the existing method at the NSA which was to just collect all the data and analyse it later.

But when Hayden took the top job in the late 1990s, he wanted to spread the NSA’s work around and also boost its budget – a bit of empire building, no less. Binney was asked what he could do with a billion dollars and more for the ThinThread system, but said he could not use more than US$300 million.

What followed was surprising. Though ThinThread had been demonstrated to be able to sort data fast and accurately and track patterns within it, Hayden banned its use and put in place plans to develop a system known as Trailblazer – which incidentally had to be trashed many years later as an unmitigated disaster after many billions had gone down the drain.

Trailblazer was being used when September 11 came around.

Drake points out in the film that after 9/11, they set up ThinThread again and analysed all the data flowing into the NSA in the lead-up to the attack and found clear indications of the terrorists’ plans.

At the beginning of the film, Binney’s voice is heard saying, “It was just revolting… and disgusting that we allowed it to happen,” as footage of people leaping from the burning Trade Centre is shown.

After the tragedy, ThinThread, sans its privacy protections, was used to conduct blanket surveillance on Americans. Binney and his colleagues left the NSA shortly thereafter.

Hayden is often referred to as a great American patriot by American talk-show host Bill Maher. He was offered an interview by the makers of A Good American but did not take it up. Perhaps he would like to break his silence now.


Sam VargheseSerena Williams, please go before people start complaining

The US Open 2020 represented the best chance for an aging Serena Williams to win that elusive 24th Grand Slam title and equal the record of Australian Margaret Court. Seeds Bianca Andreescu (6), Ashleigh Barty (1), Simona Halep (2), Kiki Bertens (7) and Elina Svitolina (5) are all not taking part.

But Williams, now 39, could not get past Victoria Azarenka in the semi-finals, losing 1-6, 6-3, 6-3.

Prior to this, Williams had lost four Grand Slam finals in pursuit of Court’s record: Andreescu defeated her at the US Open in 2019, Angelique Kerber beat her at Wimbledon in 2018, Naomi Osaka took care of her in the 2018 US Open and Halep accounted for Williams at Wimbledon in 2019. In all those finals, Williams was unable to win more than four games in any set.

Williams took a break to have a child some years ago; her most recent Grand Slam title came at the Australian Open in 2017, just before that break, when she beat her sister, Venus, 6-4, 6-4.

It looks like it is time to bow out, if not gracefully, then clumsily. One, unfortunately, cannot associate grace with Williams.

But, no, Williams has already signed up to play in the French Open which has been pushed back to September 27 from its normal time of May due to the coronavirus pandemic. Only Barty has said she would not be participating.

Sportspeople are remembered for more than mere victories. One remembers the late Arthur Ashe not merely because he was the first black man to win Wimbledon, the US Open and the Australian Open, but for the way he carried himself.

He was a great ambassador for his country and the soul of professionalism. Sadly, he died at the age of 49 after contracting AIDS from a blood transfusion.

Another lesser known person who was in the Ashe mould is Larry Gomes, the West Indies cricketer who was part of the world-beating sides that never lost a Test series for 15 years from 1980 to 1995.

Gomes quit after playing 60 Tests, and the letter he sent to the Board when he did so was a wonderful piece of writing, reflecting both his humility and his professionalism.

In an era of dashing stroke-players, he was the steadying influence on the team many a time, and was thoroughly deserving of his place among the likes of the much better-known superstars like Viv Richards, Gordon Greenidge, Desmond Haynes and Clive Lloyd.

In Williams’ case, she has shown herself to be a poor sportsperson, far too focused on herself and unable to see the bigger picture.

She is at the other end of the spectrum from players like Chris Evert and Steffi Graf, both great champions and models of fair-minded competition.

It would be good for tennis if Williams leaves the scene after the French Open, no matter if she wins there or loses. There is more to a good sportsperson than mere statistics.

Cryptogram Friday Squid Blogging: Calamari vs. Squid

St. Louis Magazine answers the important question: “Is there a difference between calamari and squid?” Short answer: no.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin RuddWPR Trend Lines: China Under Xi Jinping

E&OE TRANSCRIPT
PODCAST INTERVIEW
WORLD POLITICS REVIEW
11 SEPTEMBER 2020

Topics: China under Xi Jinping

World Politics Review: Mr. Rudd, thank you so much for joining us on Trend Lines.

Kevin Rudd: Happy to be on the program.

WPR: I want to start with a look at this bipartisan consensus that’s emerged in the United States, this reappraisal of China in the U.S., but also in Europe and increasingly in Australia. This idea that the problems of China’s trade policies and the level playing fields within the domestic market, aren’t going away. If anything, they’re getting worse. Same thing with regard to human rights and political liberalization under Xi Jinping. And that the West really needs to start preparing for a period of strategic rivalry with China, including increased friction and maybe some bumping of elbows. Are you surprised by how rapidly this new consensus has emerged and how widespread it is?

Mr. Rudd: I’m not surprised by the emergence of a form of consensus across the democracies, whether they happen to be Western or Asian democracies, or democracies elsewhere in the world. And there’s a reason for that, and that is that the principal dynamic here has been China’s changing course itself.

And secondly, quite apart from a change in management under Xi Jinping and the change in direction we’ve seen under him, China is now increasingly powerful. It doesn’t matter what matrix of power we’re looking at—economic power, trade, investment; whether it’s capital markets, whether it’s technology or whether it’s the classic determinants of international power and various forms of military leverage.

You put all those things together, we have a new guy in charge who has decided to be more assertive about China’s interests and values in the world beyond China’s borders. And secondly, a more powerful China capable of giving that effect.

So as a result of that, many countries for the first time have had this experience rub up against them. In the past, it’s only been China’s near neighbors who have had this experience. Now it’s a much broader and shared experience around the region and around the world.

So there are the structural factors at work in terms of shaping an emerging consensus on the part of the democracies, and those who believe in an open, free trading system in the world, and those who are committed to the retention of the liberal international order—that these countries are beginning to find a common cause in dealing with their collective challenges with the Middle Kingdom.

WPR: Now you mentioned the new guy in charge, obviously that’s Xi Jinping who’s been central to all of the recent shifts in China, whether it’s domestic policy or foreign policy. At the same time there’s some suggestion that as transformational as Xi is, that there’s quite a bit of continuity in terms of some of the more assertive policies of his predecessor Hu Jintao. Is it a mistake to focus so much on Xi? Does he reflect a break in China’s approach to the world, or is it continuity or more of a transition that responds to the context of perhaps waning American power as perceived by China? So is it a mistake to focus so much on Xi, as opposed to trying to understand the Chinese leadership writ large, their perception of their own national interest?

Mr. Rudd: I think it’s a danger to see policy continuity and policy change in China as some binary alternative, because with Xi Jinping, yes, there are elements of continuity, but there are also profound elements of change. So on the continuity front, yes, in the period of Hu Jintao, we began to see a greater Chinese experimental behavior in the South China Sea, a more robust approach to the assertion of China’s territorial claims there, for example.

But with Xi Jinping, this was taken to several extra degrees in intensity when we saw a full-blown island reclamation exercise, where you saw rocky atolls suddenly being transformed into sand-filled islands, and which were then militarized. So is that continuity or change? That becomes, I think, more of a definitional question.

If I was trying to sum up what is new, however, about Xi Jinping in contrast to his predecessors, it would probably be in these terms.

In politics, Chinese domestic politics, he has changed the discourse by taking it further to the left—by which I mean a greater role for the party and ideology, and the personal control of the leader, compared with what existed before.

On the economy we see a partial shift to the left with a resuscitation of state-owned enterprises and some disincentives emerging for the further and continued growth of China’s own hitherto successful private-sector entrepreneurial champions.

On nationalism we have seen a further push under Xi Jinping further to the right than his predecessors.

And in terms of degrees of international assertiveness, whether it’s over Hong Kong, the South China Sea, or over Taiwan, in relation to Japan and the territorial claims in the East China Sea, or with India, or in the big bilateral relationship with the United States as well as with other American allies—the Canadians, the Australians and the Europeans, et cetera—as well as big, new large-canvas foreign policy initiatives like the Belt and Road Initiative, what we’ve seen is an infinitely more assertive China.

So, yes, there are elements of continuity, but it would be foolish for us to underestimate the degree of change as a consequence of the agency of Xi Jinping’s leadership.

WPR: You mentioned the concentration of power in the leader’s hands, Xi Jinping’s hands. He’s clearly the most powerful Chinese leader since Mao Zedong. At the same time and especially in the immediate aftermath of the coronavirus pandemic, but over the course of the last year or so, there’s been some reporting about rumbling within the Chinese Communist Party about his leadership. What do you make of those reports? Is he a leader who’s in firm control in China? Is that something that’s always somewhat at question, given the kind of internal politics of the Chinese Communist Party? Or are there potential challenges to his leadership?

Mr. Rudd: Well, in our analysis of Chinese politics, it hasn’t really changed a lot since I was a young diplomat working in Beijing in the mid-1980s, and that is: The name of the game is opacity. With Chinese domestic politics we are staring through a glass dimly. And that’s because the nature of a Marxist-Leninist party, in particular a Leninist party, is deeply secretive about its own internal operations. So we are left to speculate.

But what we know from the external record, we know that Xi Jinping, as you indicated in your question, has become China’s most powerful leader at least since Deng and probably since Mao, against most matrices of power. He’s become what is described as the “chairman of everything.” Every single leading policy group of the politburo is now chaired by him.

And on top of that, the further instruments of power consolidation have been reflected in his utilization of the anti-corruption campaign, and now the unleashing of a Party Rectification campaign, which is designed to reinforce compliance on the part of party members to central leadership diktat. So that’s the sort of individual that we have, and that’s the journey that he has traveled in the last six to seven years, and where most of his principal opponents have either been arrested or incarcerated or have committed suicide.

So where does that leave us in terms of the prospects for any organized political opposition? Under the party constitution, Xi Jinping is up for a reelect at the 20th Party Congress, and that is his reappointment as general secretary of the party and as chairman of the Central Military Commission. Separately, he’s up for a reelect by the National People’s Congress for a further term as China’s president.

For him to go a further term would be to break all the post-Deng Xiaoping conventions built around the principles of shared or collective leadership. But Xi Jinping, in my judgment, is determined to remain China’s paramount leader through the 2020s and into the 2030s. And how would he do that? He would perhaps see himself at the next Party Congress appointed as party chairman, a position last occupied by Chairman Mao and Mao’s immediate successor, Chairman Hua Guofeng.

He would probably retain the position of president of the country, given the constitutional changes he brought about three years or so ago to remove the two-term limit for the presidency. And he would, I think, most certainly retain the chairmanship of the Central Military Commission.

These predispositions, however, of themselves combined with the emerging cult of personality around Xi, combined with fundamental disagreements on elements of economic policy and international policy, have created considerable walls of opposition to Xi Jinping within the Chinese Communist Party.

And the existence of that opposition is best proven by the fact that Xi Jinping, in August of 2020, decided to launch this new Party Rectification campaign in order to reestablish, from his perspective, proper party discipline—that is, obedience to Xi Jinping. So the $64,000 question is, Can these different sources of dissent within the Chinese Communist Party coalesce? There’s no obvious candidate to do the coalescing, just as Deng Xiaoping in the past was a candidate for coalescing different political and policy views to those of Mao Zedong, which is why Deng Xiaoping was purged at least two times in the latter stages of his career before his final return and rehabilitation.

So we can’t see a ready candidate, but the dynamics of Chinese politics tend to have been that if there is, however, a catastrophic event—an economic implosion, a major foreign policy or international policy mishap, misstep, or crisis or conflict which goes wrong—then these tend to generate their own dynamics. So Xi Jinping’s watchword between now and the 20th Party Congress to be held in October/November of 2022 will be to prevent any such crises from emerging.

WPR: It’s a natural transition to my next question, because another point of disagreement among China watchers is whether a lot of these moves that we’ve seen over the past five years, accelerating over the past five years, but this reassertion of the party centrality within or across all spheres of Chinese society, but also some of the international moves, whether they’re signs of strength in the face of what they see as a waning United States, or whether they’re signs of weakness and a sense of having to move faster than perhaps was previously anticipated? A lot of the challenges that China faces, whether it can become a rich country before it becomes an old country, some of the environmental degradation that its development has caused, none of them are going away and none of them have gone away. So what’s your view on that in terms of China’s future prospects? Do you see a continued rise? A leveling off? The danger of a failing China and everything that that might imply?

Mr. Rudd: Well, there are a bunch of questions embedded in what you’re saying, but I’d begin by saying that Americans shouldn’t talk themselves out of global leadership in the future. America remains a powerful country in economic terms, in technological terms and in military terms, and against all three measures still today more powerful than China. In the case of the military, significantly more powerful than China.

Of course, the gap begins to narrow, but this narrowing process is not overnight, it takes a long period of time, and there are a number of potential mishaps for China. An anecdote from the Chinese commentariat recently—in a debate about America’s preparedness to go into armed conflict or war with China in the South China Sea or over Taiwan, and the Chinese nationalist constituency in China basically chanting, “Bring it on.” If people chant, “USA,” in the United States, similar gatherings would probably chant, “PRC,” in Beijing.

But it was interesting what a Chinese scholar had to say as this debate unfolded, when one of the commentators said, “Well, at the end of the day, America is a paper tiger.” Of course, that was a phrase used by Mao back in the 1950s, ‘60s. The response from the Chinese scholar in China’s online discourse was, “No, America is not a paper tiger. America is a tiger with real teeth.”

So it’s important for Americans to understand that they are still in an extraordinarily powerful position in relation to both China and the rest of the world. So the question becomes one ultimately of the future leadership direction in the United States.

The second point I’d make in response to your question is that, when we seek to understand China’s international behavior, the beginning of wisdom is to understand its domestic politics, to the extent that we can. So when we are asked questions about strength or weakness, or why is China in the COVID world doubling down on its posture towards Hong Kong, the South China Sea, towards Taiwan, the East China Sea, Japan, as well as other countries like India, Canada, Australia, and some of the Europeans and the United States.

Well, I think the determination on the part of Xi Jinping’s leadership was to make it plain to all that China had not been weakened by COVID-19. Is that a sign of weakness or of strength? I think that again becomes a definitional question.

In terms of Chinese domestic politics, however, if Xi Jinping is under attack in terms of his political posture at home and the marginalization of others within the Chinese Communist Party, if he is under attack in terms of its direction on economic policy, with the slowing of growth even in the pre-COVID period, then of course from Xi Jinping’s perspective, the best way to deal with any such dissension on the home front is to become more nationalist on the foreign front. And we’ve seen evidence of that. Is that strength, or is it weakness? Again, it’s a definitional question, but it’s the reality of what we’re dealing with.

For the future, I think ultimately what Xi Jinping’s administration will be waiting for is what sort of president emerges from November of 2020. If Trump is victorious, I think Xi Jinping will privately be pretty happy about that, because he sees the Trump administration on the one hand being hard-lined towards China, but utterly chaotic in its management of America’s national China strategy—strong on some things, weak on the other, vacillating between A and B above—and ultimately a divisive figure in terms of the solidarity of American alliances around the world.

If Biden wins, I think the judgment in China will be that America could seriously get its act back together again, run a coherent hard-line national comprehensive China strategy. Secondly, do so—unlike the Trump administration—with the friends and allies around the world in full harness and incorporative harness. So it really does depend on what the United States chooses to do in terms of its own national leadership, and therefore foreign policy direction for the future.

WPR: You mentioned the question of whether the U.S. is a paper tiger or not. Clearly in terms of military assets and capabilities, it’s not, but there’s been some strategic debate in Australia over the past five years or so—and I’m thinking particularly of Hugh White and his line of analysis—the idea that regardless of what America can do, that there really won’t be, when push comes to shove, the will to engage in the kind of military conflict that would be required to respond, for instance, to Chinese aggression or attempt to reunify militarily with Taiwan, let alone something like the Senkaku islands, the territorial dispute with Japan. And the recently released Australian Defense White Paper suggests that there’s a sense that the U.S. commitment to mutual defense in the Asia-Pacific might be waning. So what are your thoughts about the future of American primacy in Asia? And you mentioned it’s a question of leadership, but are you optimistic about the public support in America for that historical project? And if not, what are the implications for Australia, for other countries in Asia and the world?

Mr. Rudd: Well, it’s been a while since I’ve been in rural Pennsylvania. So it’s hard for me to answer your question. [Laughter] I am obviously both a diplomat by training and a politician, so I kind of understand the policy world, but I also understand rural politics.

It does depend on which way the American people go. When I look at the most recent series of findings from the Pew Research Project on American attitudes to the world, what I find interesting is that contrary to many of the assumptions by the American political class, that nearly three-quarters of Americans are strongly supportive of trade and strongly supportive of variations of free trade. If you listen to the populist commentary in the United States, you would think that notions of free trade had become a bit like the anti-Christ.

So therefore I think it will be a challenge for the American political class to understand that the American public, at least reflected in these polling numbers on trade, are still fundamentally internationally engaged, and therefore are not in support of America incrementally withdrawing from global economic leadership. On the question of political leadership and America’s global national security role and its regional national security role in the Asia Pacific, again it’s a question of the proper harnessing of American resources.

I think one of the great catastrophes of the last 20 years was America’s decision to invade Iraq. You flushed up a whole lot of political capital and foreign policy capital around the world against the wall. You expended so much in blood and treasure that I think the political wash through also in America itself was a deep set of reservations in American voters about, let’s call it extreme foreign policy folly.

Remember, that enterprise was about eliminating weapons of mass destruction, which the Bush administration alleged to exist in Saddam Hussein’s bathroom locker, and they never did. So we’re partly experiencing the wash up of all of that in the American body politic, where there is a degree of, shall we say, exhaustion about unnecessary wars and about unnecessary foreign policy engagement. So the wisdom for the future will be what is necessary, as opposed to that which is marginal.

Now, I would think, on that basis, given the galvanizing force of American public sentiment—on COVID questions, on trade and investment questions, on national security questions, and on human rights questions—that the No. 1 foreign policy priority for the United States, for the period ahead, will be China. And so what I therefore see is that the centrality of future U.S. administrations, Republican and Democrat, is likely to be galvanized by a finding from the American people that, contrary to any of the urban myths, that the American people still want to see their country become more prosperous through trade. They want their country to become bigger through continued immigration—another finding in terms of Pew research. And they want their country to deal with the greatest challenge to America’s future, which is China’s rise.

So that is different from becoming deeply engaged and involved in every nuance of national security policy in the eastern reaches of either Libya or in the southern slopes of Lebanon. There has been something of a long-term Middle Eastern quagmire, and I think the wake-up call of the last several years has been for America to say that if there’s to be a second century of American global leadership, another Pax Americana for the 21st century, then it will ultimately be resolved on whether America rises to the global economic challenge, the global technology leadership challenge and the global China challenge.

And I think the American people in their own wonderfully, shall we say unorchestrated way, but subject to the bully pulpit of American presidential leadership and politics, can harness themselves for that purpose.

WPR: So much of the West’s engagement with China over the past 30 years has been this calculated gamble on what kind of power China would be when it did achieve great power status, and this idea that trade and engagement would pull China more toward being the kind of power that’s compatible with the international order, the postwar American-led international order. Do you think we have an answer to that question yet, and has the gamble of engaging with China paid off?

Mr. Rudd: The bottom line is that if you look at the history of America’s engagement strategy with China and the engagement strategy of many of America’s allies over the decades—really since the emergence of Deng in the late ‘70s and recommenced in the aftermath of Tiananmen, probably from about 1992, 1993—it has always been engagement, but hedge. In other words, there was engagement with China across all the instrumentalities of the global economy and global political and economic governance, and across the wide panoply of the multilateral system.

But there was always a conditionality and there was always a condition, which is, countries like the United States were not about to deplete their military, even after it won the Cold War against the Soviet Union. And America did not deplete its military. America’s military remained formidable. There are ebbs and flows in the debate, but the capabilities remained world class and dominant. And so in fairness to those who prosecuted that strategy of engagement, it always was engagement plus hedge.

And so what’s happened as China’s leadership direction has itself changed over the last six or seven years under Xi Jinping, is the hedge component of engagement has reared its head and said, “Well, I’m glad we did so.” Because we still, as the United States, have the world’s most powerful military. We still, as the United States, are the world’s technology leaders—including in artificial intelligence, despite all the hype coming out of Beijing. We are the world’s leaders when it comes to the core driver of the future of artificial intelligence, which is the production of computer chips, of semiconductors and other fundamental advances in computing. And you still are the world’s leaders in the military. And so therefore America is not in a hugely disadvantageous position. And on top of all the above, you still retain the global reserve currency called the U.S. dollar.

Each of these is under challenge by China. China is pursuing a highly systematic strategy to close the gap in each of these domains. But I think there is a degree of excessive pessimism when you look at America through the Washington lens, that nothing is happening. Well, a lot is. You go to Silicon Valley, you go to the Pacific Fleet, you go to the New York Stock Exchange, and you look at the seminal debates in Hong Kong at the moment, about whether Hong Kong will remain linked between the Hong Kong dollar and the U.S. dollar.

All this speaks still to the continued strength of America’s position, but it really does depend on whether you choose to husband these strengths for the future, and whether you choose to direct them in the future towards continued American global leadership of what we call the rules-based liberal international order.

WPR: Mr. Rudd, thank you so much for being so generous with your time and your insights.

Kevin Rudd: Good to be with you.

The post WPR Trend Lines: China Under Xi Jinping appeared first on Kevin Rudd.

Cryptogram The Third Edition of Ross Anderson’s Security Engineering

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

Cryptogram Ranking National Cyber Power

Harvard Kennedy School’s Belfer Center published the “National Cyber Power Index 2020: Methodology and Analytical Considerations.” The rankings: 1. US, 2. China, 3. UK, 4. Russia, 5. Netherlands, 6. France, 7. Germany, 8. Canada, 9. Japan, 10. Australia, 11. Israel. More countries are in the document.

We could — and should — argue about the criteria and the methodology, but it’s good that someone is starting this conversation.

Executive Summary: The Belfer National Cyber Power Index (NCPI) measures 30 countries’ cyber capabilities in the context of seven national objectives, using 32 intent indicators and 27 capability indicators with evidence collected from publicly available data.

In contrast to existing cyber related indices, we believe there is no single measure of cyber power. Cyber Power is made up of multiple components and should be considered in the context of a country’s national objectives. We take an all-of-country approach to measuring cyber power. By considering “all-of-country” we include all aspects under the control of a government where possible. Within the NCPI we measure government strategies, capabilities for defense and offense, resource allocation, the private sector, workforce, and innovation. Our assessment is both a measurement of proven power and potential, where the final score assumes that the government of that country can wield these capabilities effectively.

The NCPI has identified seven national objectives that countries pursue using cyber means. The seven objectives are:

  1. Surveilling and Monitoring Domestic Groups;
  2. Strengthening and Enhancing National Cyber Defenses;
  3. Controlling and Manipulating the Information Environment;
  4. Foreign Intelligence Collection for National Security;
  5. Commercial Gain or Enhancing Domestic Industry Growth;
  6. Destroying or Disabling an Adversary’s Infrastructure and Capabilities; and,
  7. Defining International Cyber Norms and Technical Standards.

In contrast to the broadly held view that cyber power means destroying or disabling an adversary’s infrastructure (commonly referred to as offensive cyber operations), offense is only one of these seven objectives countries pursue using cyber means.

LongNowStunning New Universe Fly-Through Really Puts Things Into Perspective

A stunning new video lets viewers tour the universe at superluminal speed. Miguel Aragon of Johns Hopkins, Mark Subbarao of the Adler Planetarium, and Alex Szalay of Johns Hopkins reconstructed the layout of 400,000 galaxies based on information from the Sloan Digital Sky Survey (SDSS) Data Release 7:

“Vast as this slice of the universe seems, its most distant reach is to redshift 0.1, corresponding to roughly 1.3 billion light years from Earth. SDSS Data Release 9 from the Baryon Oscillation Spectroscopic Survey (BOSS), led by Berkeley Lab scientists, includes spectroscopic data for well over half a million galaxies at redshifts up to 0.8 — roughly 7 billion light years distant — and over a hundred thousand quasars to redshift 3.0 and beyond.”

Worse Than FailureError'd: This Could Break the Bank!

"Sure, free for the first six months is great, but what exactly does happen when I hit month seven?" Stuart L. wrote.


"In order to add an app on the App Store Connect dashboard, you need to 'Register a new bundle ID in Certificates, Identifiers & Profiles'," writes Quentin, "Open the link, you have a nice 'register undefined' and cannot type anything in the identifier input field!"


"I was taught to keep money amounts as pennies rather than fractional dollars, but I guess I'm an old-fashioned guy!" writes Paul F.


Anthony C. wrote, "I was looking for headphones on Walmart.com and well, I guess they figured I'd like to look at something else for a change?"


"Build an office chair using only a spork, a napkin, and a coffee stirrer? Sounds like a job for McGuyver!"


"Translation from Swedish, 'We assume that most people who watch Just Chatting probably also like Just Chatting.' Yes, I bet it's true!," Bill W. writes.




Planet Linux AustraliaDavid Rowe: Playing with PAPR

The average power of a FreeDV signal is surprisingly hard to measure, as the parallel carriers produce a waveform that has many peaks and troughs as the various carriers come in and out of phase with each other. Peter, VK3RV, has been working on some interesting experiments to measure FreeDV power using calorimeters. His work got me thinking about FreeDV power, and in particular ways to improve the Peak to Average Power Ratio (PAPR).

I’ve messed with a simple clipper for FreeDV 700C in the past, but decided to take a more scientific approach and use some simulations to measure the effect of clipping on FreeDV PAPR and BER. As usual, asking a few questions blew up into a several-week-long project. There were the usual bugs and strange, too-good-to-be-true initial results until I started to get numbers that felt sensible. I’ve tested some of the ideas over the air (blowing up an attenuator along the way), and learnt a lot about PAPR and related subjects like Peak Envelope Power (PEP).

The goal of this work is to explore the effect of a clipper on the average power and ultimately the BER of a received FreeDV signal, given a transmitter with a fixed peak output power.

Clipping to reduce PAPR

In normal operation we adjust our Tx drive so the peaks just trigger the ALC. This sets the average power at Ppeak − PAPR dB; for example Pav = 100W PEP − 10dB = 10W average.

The idea of the clipper is to chop the tops off the FreeDV waveform so the PAPR is decreased. We can then increase the Tx drive and get a higher average power. For example, if PAPR is reduced from 10 to 4dB, we get Pav = 100W PEP − 4dB = 40W. That’s 4x the average power output of the 10dB PAPR case – Woohoo!
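To make the arithmetic concrete, here is a tiny helper (my own sketch, not part of the FreeDV code) that converts a fixed PEP and a waveform PAPR into average power:

def average_power_watts(pep_watts, papr_db):
    # Average power for a transmitter with a fixed peak (PEP) output,
    # given the waveform's peak-to-average power ratio in dB.
    return pep_watts / (10 ** (papr_db / 10.0))

print(average_power_watts(100, 10))  # 10.0 W  (the 10dB PAPR case)
print(average_power_watts(100, 4))   # ~39.8 W (roughly the 40W quoted above)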

In the example below the 16 carrier waveform was clipped and the PAPR reduced from 10.5 to 4.5dB. The filtering applied after the clipper smooths out the transitions (and limits the bandwidth to something reasonable).

However it gets complicated. Clipping actually reduces the average power, as we’ve removed the high energy parts of the waveform. It also distorts the signal. Here is a scatter diagram of the signal before and after clipping:


The effect looks like additive noise. Hmmm, and what happens on multipath channels, does the modem perform the same as for AWGN with clipped signals? Another question – how much clipping should we apply?

So I set about writing a simulation (papr_test.m) and doing some experiments to increase my understanding of clippers, PAPR, and OFDM modem performance using typical FreeDV waveforms. I started out trying a few different compression methods such as different compander curves, but found that clipping plus a bandpass filter gives about the same result. So for simplicity I settled on clipping. Throughout this post many graphs are presented in terms of Eb/No – for the purpose of comparison just consider this the same thing as SNR. If the Eb/No goes up by 1dB, so does the SNR.

Here’s a plot of PAPR versus the number of carriers, showing PAPR getting worse with the number of carriers used:

Random data was used for each symbol. As the number of carriers increases, you start to get phases in carriers cancelling due to random alignment, reducing the big peaks. Behaviour with real world data may be different; if there are instances where the phases of all carriers are aligned, there may be larger peaks.
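The trend is easy to reproduce with a few lines of numpy (a simplified sketch of the idea behind papr_test.m, not the actual Octave code):

import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Peak and average power over many OFDM symbols of random QPSK data, with an
# oversampled IFFT so we don't miss peaks that fall between samples.
for n_carriers in (2, 4, 8, 16, 32):
    peaks, means = [], []
    for _ in range(200):
        symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_carriers)
        spectrum = np.zeros(8 * n_carriers, dtype=complex)
        spectrum[1:n_carriers + 1] = symbols
        p = np.abs(np.fft.ifft(spectrum)) ** 2
        peaks.append(p.max())
        means.append(p.mean())
    print(n_carriers, "carriers:",
          round(10 * np.log10(max(peaks) / np.mean(means)), 1), "dB")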

To define the amount of clipping I used an estimate of the PDF and CDF:

The PDF (or histogram) shows how likely a certain level is, and the CDF is the cumulative PDF. High-level samples are quite unlikely. The CDF shows us what proportion of samples are above and below a certain level. This CDF shows us that 80% of the samples have a level of less than 4, so only 20% of the samples are above 4. So a clip level of 0.8 means the clipper hard-limits at a level of 4, which affects the top 20% of the samples. A clip level of 0.6 would mean samples with a level of 2.7 and above are clipped.
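Expressed in code, the clipper looks something like this (a minimal sketch under my own assumptions; in the real system a bandpass filter follows the clipper to constrain the spectral spread):

import numpy as np

def clip_waveform(x, clip_level=0.8):
    # Hard-limit the magnitude of complex samples at the clip_level point of
    # the CDF, e.g. clip_level=0.8 clips the top 20% of samples.
    mag = np.abs(x)
    threshold = np.quantile(mag, clip_level)
    scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    return x * scale  # phase is preserved, only the magnitude is limited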

Effect of clipping on BER

Here are a bunch of curves that show the effect of clipping on AWGN and multipath channels (roughly CCIR poor). A 16-carrier signal was used – typical of FreeDV waveforms. The clipping level and resulting PAPR are shown in the legend. I also threw in a Tx diversity curve – sending each symbol twice on double the carriers. This is the approach used in FreeDV 700C and tends to help a lot on multipath channels.

As we clip the signal more and more, the BER performance gets worse (Eb/No x-axis) – but the PAPR is reduced, so we can increase the average power, which improves the BER. I’ve tried to show the combined effect on the (peak Eb/No x-axis) curves, which scale each curve according to its PAPR requirements. This shows the peak power required for a given BER. Lower is better.




Takeaways:

  1. The 0.8 and 0.6 clip levels work best on the peak Eb/No scale, i.e. when we combine the hit on BER performance (bad) with the PAPR improvement (good).
  2. There is about 4dB improvement across a range of operating points. This is pretty significant – similar to the gains we get from Tx diversity or a good FEC code.
  3. AWGN and multipath improvements are similar – good. Sometimes you get an algorithm that works well on AWGN but falls in a heap on multipath channels, which are typically much tougher to push bits through.
  4. I also tried 8-carrier waveforms, which produced results about 1dB better; I guess fewer carriers have a lower PAPR to start with.
  5. Non-linear techniques like clipping spread the energy in frequency.
  6. Filtering to constrain the frequency spread brings the PAPR up again. We can trade off PAPR against bandwidth: lower PAPR, more bandwidth.
  7. Non-linear techniques will mess with QAM more, so we may hit a wall at high data rates.

Testing on a Real PA

All these simulations are great, but how do they compare with operation on a real HF radio? I designed an experiment to find out.

First, some definitions.

The same FreeDV OFDM signal is represented in different ways as it winds its way through the FreeDV system:

  1. Complex valued samples are used for much of the internal signal processing.
  2. Real valued samples at the interfaces, e.g. for getting samples in and out of a sound card and standard HF radio.
  3. Analog baseband signals, e.g. voltage inside your radio.
  4. Analog RF signals, e.g. at the output of your PA, and input to your receiver terminals.
  5. An electromagnetic wave.

It’s the same signal, as we can convert freely between the representations with no loss of fidelity, but its representation can change the way measures like PAPR work. This caused me some confusion – for example, the PAPR of the real signal is about 3dB higher than that of the complex valued version! I’m still a bit fuzzy on this one, but have satisfied myself that the PAPR of the complex signal is the same as the PAPR of the RF signal – which is what we really care about.
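Here is a quick numerical check of that 3dB difference (my own sketch, not from the post): build a complex envelope, mix it up to a real carrier, and compare PAPRs.

import numpy as np

rng = np.random.default_rng(2)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# A 16 carrier complex baseband signal, crudely oversampled (zero-order hold)
# so the carrier phase is sampled densely within each envelope sample.
spectrum = np.zeros(256, dtype=complex)
spectrum[1:17] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 16)
envelope = np.repeat(np.fft.ifft(spectrum), 64)

# Mix up to an RF carrier and take the real part, as a radio would.
t = np.arange(envelope.size)
rf = np.real(envelope * np.exp(2j * np.pi * 0.2371 * t))

print(round(papr_db(envelope), 1))  # PAPR of the complex envelope
print(round(papr_db(rf), 1))        # about 3dB higher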

Another definition that I had to (re)study was Peak Envelope Power (PEP) – which is the peak power averaged over one or more carrier cycles. This is the RF equivalent to our “peak” in PAPR. When driven by any baseband input signal, it’s the maximum RF power of the radio, averaged over one or more carrier cycles. Signals such as speech and FreeDV waveforms will have occasional peaks that hit the PEP. A baseband sine wave driving the radio would generate a RF signal that sits at the PEP power continuously.

Here is the experimental setup:

The idea is to play canned files through the radio, and measure the average Tx power. It took me several attempts before my experiment gave sensible results. A key improvement was to make the peak power of each sampled signal the same. This means I don’t have to keep messing with the audio drive levels to ensure I have the same peak power. The samples are 16 bits, so I normalised each file such that the peak was at +/- 10000.

Here is the RF power sampler:

It works pretty well on signals from my FT-817 and IC-7200, and will help prevent any more damage to RF test equipment. I used my RF sampler after my first attempt using an SMA barrel attenuator resulted in its destruction when I accidentally put 5W into it! Suddenly it went from 30dB to 42dB of attenuation. Oops.

For all the experiments I am tuned to 7.175 MHz and have the FT-817 on its lowest power level of 0.5W.

For my first experiment I played a 1000 Hz sine wave into the system and measured the average power. I like to start with simple signals – something known that lets me check all the fiddly RF kit is actually working. After a few hours of messing about, I did indeed see 27dBm (0.5W) on my spec-an. So, for a signal with 0dB PAPR, we measure average power = PEP. Check.

In my next experiment, I measured the effect of ALC on Tx power. With the FT-817 on its lowest power setting (0.5W), I increased the drive until just before the ALC bars came on. Here is the relationship I found between the number of ALC bars and output power:

Bars   Tx Power (dBm)
0      26.7
1      26.4
2      26.7
3      27.0

So the ALC really does clamp the power at the peak value.

On to more complex FreeDV signals.

Measuring the average power of OFDM/parallel tone signals proved much harder on the spec-an. The power bounces around over a period of several seconds as the OFDM waveform evolves, which can derail many power measurement techniques. The time constant, or measurement window, is important – we want to capture the total power over a few seconds and average the value.

After several attempts and lots of head scratching I settled on the following spec-an settings:

  1. 10s sweep time, so the RBW filter is averaging a lot of time-varying power at each point in the sweep.
  2. 100kHz span.
  3. RBW/VBW of 10 kHz so we capture all of the 1kHz wide OFDM signal in the RBW filter peak when averaging.
  4. Power averaging over 5 samples.

The two-tone signal was included to help me debug my spec-an settings, as it has a known (3dB) PAPR.

Here is a table showing the results for several test signals, all of which have the same peak power:

Sample          Description                          PAPR theory/sim (dB)   PAPR measured (dB)
sine1000        sine wave at 1000 Hz                 0                      0
sine_800_1200   two tones at 800 and 1200 Hz         3                      4
vanilla         700D test frames, unclipped          7.1                    7
clip0.8         700D test frames, clipped at 0.8     3.4                    4
ve9qrp          700D with real speech payload data   11                     10.5

Click on a file name to listen to a 5 second sample of the signal. The lower PAPR (higher average power) signals sound louder – I guess our ears work on average power too! I kept the drive constant, and the PEP/peak just happened to hit 26dBm. It’s not critical, as long as the drive (and hence peak level) is the same across all waveforms tested.

Note the two-tone “control” is 1dB off (4dB measured on a known 3dB PAPR signal). I’m not happy about that; it suggests a set-up issue or a limitation of my spec-an (e.g. the way it averages power).

However the other signals line up OK to the simulated values, within about +/- 0.5dB, which suggests I’m on the right track with my simulations.

The modulated 700D test frame signals were generated by the Octave ofdm_tx.m script, which reports the PAPR of the complex signal. The same test frame repeats continuously, which makes BER measurements convenient, but is slightly unrealistic. Its PAPR was lower than that of the ve9qrp signal, which has real speech payload data – perhaps because the more random, real-world payload data leads to occasional frames where the phases of the carriers align, leading to large peaks.

Another source of discrepancy is the non-flat frequency response of the baseband audio/crystal filter path the signal has to flow through before it emerges as RF.

The zero-span spec-an setting plots power over time, and is very useful for visualising PAPR. The first plot shows the power of our 1000 Hz sine signal (yellow) and the two-tone test signal (purple):

You can see how mixing just two signals modulates the power over time, the effect on PAPR, and how the average power is reduced. Next we have the ve9qrp signal (yellow), and our clip 0.8 signal (purple):

It’s clear the clipped signal has a much higher average power. Note the random way the waveform power peaks and dips as the various carriers come into phase. Note also the very few high power peaks in the ve9qrp signal – in this sample we don’t have any that hit +26dBm, as they are fairly rare.

I found eye-balling the zero-span plots gave me similar values to non-zero span results in the table above, a good cross check.

Takeaways:

  1. Clipping is indeed improving our measured average power, but there are some discrepancies between the measured PAPR values and those estimated from theory/simulation.
  2. Using an SDR to receive the signal and measure PAPR with my own maths might be easier than fiddling with the spec-an and guessing at its internal algorithms.
  3. PAPR is worse for real world signals (e.g. ve9qrp) than for my canned test frames, due to relatively rare alignments of the carrier phases. These might only happen once every few seconds, but they significantly raise the PAPR and hurt our average power. The occasional peaks might be triggering the ALC, pushing the average power down every time they occur. As they are rare, these peaks can be clipped with no impact on perceived speech quality. This is why I like the CDF/PDF method of setting thresholds: it lets us discard rare (low probability) outliers that might be hurting our average power.

Conclusions and Further work

The simulations suggest we can improve FreeDV by 4dB using the right clipper/filter combination. Initial tests over a real PA show we can indeed reduce PAPR in line with our simulations.

This project has led me down an interesting rabbit hole that has kept me busy for a few weeks! Just in case I haven’t had enough, some ideas for further work:

  1. Align these clipping levels and filtering with FreeDV 700D (and possibly 2020). There is existing clipper and filter code, but the thresholds were set by educated guess several years ago for 700C.
  2. Currently each FreeDV waveform is scaled to have the same average power. This is the signal fed via the sound card to your Tx. Should the levels of each FreeDV waveform be adjusted to be the same peak value instead?
  3. Design an experiment to prove BER performance at a given SNR is improved by 4dB as suggested by these simulations. Currently all we have measured is the average power and PAPR – we haven’t actually verified the expected 4dB increase in performance (suggested by the BER simulations above) which is the real goal.
  4. Try the experiments on a SDR Tx – they tend to get results closer to theory due to no crystal filters/baseband audio filtering.
  5. Try the experiments on a 100WPEP Tx – I have ordered a dummy load to do that relatively safely.
  6. Explore the effect of ALC on FreeDV signals and why we set the signals to “just tickle” the ALC. This is something I don’t really understand, but have just assumed is good practice based on other people’s experiences with parallel tone/OFDM modems and on-air FreeDV use. I can see how ALC would compress the amplitude of the OFDM waveform – which this blog post suggests might be a good thing! Perhaps it does so in an uncontrolled manner – as the curves above show, the amount of compression is pretty important. “Just tickling the ALC” guarantees us a linear PA – so we can handle any needed compression/clipping carefully in the DSP.
  7. Explore other ways of reducing PAPR.

Peeling away the layers of a complex problem is very satisfying. It always takes me several goes; improvements come as the bugs fall out one by one. Writing these blog posts often makes me sit back and say “huh?”, as I discover things that don’t make sense when I write them up. I guess that’s the review process in action.

Links

Design for an RF sampler like the one I built; mine has 46dB of loss.

Peak to Average Power Ratio for OFDM – Nice discussion of PAPR for OFDM signals from DSPlog.

Worse Than FailureCodeSOD: Put a Dent in Your Logfiles

Valencia made a few contributions to a large C++ project run by Harvey. Specifically, there were some pass-by-value uses of a large data structure, and changing those to pass-by-reference fixed a number of performance problems, especially on certain compilers.

“It’s a simple typo,” Valencia thought. “Anyone could have done that.” But they kept digging…

The original code-base was indented with spaces, but Harvey just used tabs. That was a mild annoyance, but Harvey used a lot of tabs, as his code style was “nest as many blocks as deeply as possible”. In addition to loads of magic numbers that should have been enums, Harvey also took the stance of “never use an int type when you can store your number as a double”.

For example, what if you have a char and you want to turn it into a string? Do you just use std::string’s fill constructor, which builds a string from a count and a char? Not if you’re Harvey!

std::string ToString(char c)
{
    std::stringstream ss;
    std::string out = "";
    ss << c;
    ss >> out;
    return out;
}
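
For reference, the whole thing collapses to the fill constructor mentioned above – a minimal sketch, with the include it needs:

#include <string>

std::string ToString(char c)
{
    // std::string(count, ch) builds a string of `count` copies of
    // `ch` -- here, exactly one. No stream machinery required.
    return std::string(1, c);
}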

What if you wanted to cache some data in memory? A map would be a good place to start. How many times do you want to access a single key while updating a cache entry? How does “four times” work for you? It works for Harvey!

void WriteCache(std::string key, std::string value)
{
    Setting setting = mvCache["cache_"+key];
    if (!setting.initialized)
    {
        setting.initialized=true;
        setting.data = "";
        mvCache.insert(std::map<std::string,Cache>::value_type("cache_"+key,setting));
        mvCache["cache_"+key]=setting;
    }
    setting.data = value;
    mvCache["cache_"+key]=setting;
}
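
For comparison, a hedged sketch of the same write with a single map access – operator[] default-constructs a missing entry, so one lookup both creates and updates it (this assumes the article’s Setting type default-constructs to a sane empty state):

void WriteCache(const std::string& key, const std::string& value)
{
    // One lookup: inserts a default-constructed Setting on first
    // use, and returns a reference to the existing entry thereafter.
    Setting& setting = mvCache["cache_" + key];
    setting.initialized = true;
    setting.data = value;
}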

And I don’t know exactly what they are trying to communicate with the mv prefix, but people have invented all sorts of horrible ways to abuse Hungarian notation. Fortunately, Valencia clarifies: “Harvey used an incorrect Hungarian notation prefix while they were at it.”

That’s the easy stuff. Ugly, bad code, sure, but nothing that leaves you staring, stunned into speechlessness.

Let’s say you added a lot of logging messages, and you wanted to control how many logging messages appeared. You’ve heard of “logging levels”, and that gives you an inspiration for how to solve this problem:

bool LogLess(int iMaxLevel)
{
     int verboseLevel = rand() % 1000;
     if (verboseLevel < iMaxLevel) return true;
     return false;
}

//how it's used:
if(LogLess(500))
   log.debug("I appear half of the time");

Normally, I’d point out something about how they don’t need to return true or return false when they could just return the boolean expression, but what’d be the point? They’ve created probabilistic log levels. It’s certainly one way to solve the “too many log messages” problem: just randomly throw some of them away.
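
For contrast, a conventional level check is deterministic – a message at or below the configured verbosity always appears, and anything above it never does. A minimal sketch, where g_logLevel is an illustrative name (not from the original code) and the usage mirrors the article's own snippet:

int g_logLevel = 500;   // configured verbosity

bool LogLess(int iMaxLevel)
{
    // Deterministic: same inputs, same answer, every time.
    return iMaxLevel <= g_logLevel;
}

// how it's used:
if (LogLess(500))
   log.debug("I appear every time");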

Valencia gives us a happy ending:

Needless to say, this has since been rewritten… the end result builds faster, uses less memory and is several orders of magnitude faster.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Kevin RuddNew Israel Fund: Australia and the Middle East

INTERVIEW VIDEO
NIF AUSTRALIA
WITH LIAM GETREU
9 SEPTEMBER 2020

The post New Israel Fund: Australia and the Middle East appeared first on Kevin Rudd.

Rondam RamblingsThis is what the apocalypse looks like

This is a photo of our house taken at noon today: [photo] This is not the raw image. I took this with an iPhone, whose auto-exposure made the image look much brighter than it actually is. I've adjusted the brightness and color balance to match the actual appearance as much as I can. Even so, this image doesn't do justice to the reality. For one thing, the sky is much too blue. The

Cryptogram How the FIN7 Cybercrime Gang Operates

The Grugq has written an excellent essay on how the Russian cybercriminal gang FIN7 operates. An excerpt:

The secret of FIN7’s success is their operational art of cyber crime. They managed their resources and operations effectively, allowing them to successfully attack and exploit hundreds of victim organizations. FIN7 was not the most elite hacker group, but they developed a number of fascinating innovations. Looking at the process triangle (people, process, technology), their technology wasn’t sophisticated, but their people management and business processes were.

Their business… is crime! And every business needs business goals, so I wrote a mock FIN7 mission statement:

Our mission is to proactively leverage existing long-term, high-impact growth strategies so that we may deliver the kind of results on the bottom line that our investors expect and deserve.

How does FIN7 actualize this vision? This is CrimeOps:

  • Repeatable business process
  • CrimeBosses manage workers, projects, data and money.
  • CrimeBosses don’t manage technical innovation. They use incremental improvement to TTP to remain effective, but no more
  • Frontline workers don’t need to innovate (because the process is repeatable)

Cryptogram Privacy Analysis of Ambient Light Sensors

Interesting privacy analysis of the Ambient Light Sensor API. And a blog post. Especially note the “Lessons Learned” section.

Cryptogram Interesting Attack on the EMV Smartcard Payment Standard

It’s complicated, but it’s basically a man-in-the-middle attack that involves two smartphones. The first phone reads the actual smartcard, and then forwards the required information to a second phone. That second phone actually conducts the transaction on the POS terminal. That second phone is able to convince the POS terminal to conduct the transaction without requiring the normally required PIN.

From a news article:

The researchers were able to demonstrate that it is possible to exploit the vulnerability in practice, although it is a fairly complex process. They first developed an Android app and installed it on two NFC-enabled mobile phones. This allowed the two devices to read data from the credit card chip and exchange information with payment terminals. Incidentally, the researchers did not have to bypass any special security features in the Android operating system to install the app.

To obtain unauthorized funds from a third-party credit card, the first mobile phone is used to scan the necessary data from the credit card and transfer it to the second phone. The second phone is then used to simultaneously debit the amount at the checkout, as many cardholders do nowadays. As the app declares that the customer is the authorized user of the credit card, the vendor does not realize that the transaction is fraudulent. The crucial factor is that the app outsmarts the card’s security system. Although the amount is over the limit and requires PIN verification, no code is requested.

The paper: “The EMV Standard: Break, Fix, Verify.”

Abstract: EMV is the international protocol standard for smartcard payment and is used in over 9 billion cards worldwide. Despite the standard’s advertised security, various issues have been previously uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification, running over 2,000 pages.

We formalize a comprehensive symbolic model of EMV in Tamarin, a state-of-the-art protocol verifier. Our model is the first that supports a fine-grained analysis of all relevant security guarantees that EMV is intended to offer. We use our model to automatically identify flaws that lead to two critical attacks: one that defrauds the cardholder and another that defrauds the merchant. First, criminals can use a victim’s Visa contactless card for high-value purchases, without knowledge of the card’s PIN. We built a proof-of-concept Android application and successfully demonstrated this attack on real-world payment terminals. Second, criminals can trick the terminal into accepting an unauthentic offline transaction, which the issuing bank should later decline, after the criminal has walked away with the goods. This attack is possible for implementations following the standard, although we did not test it on actual terminals for ethical reasons. Finally, we propose and verify improvements to the standard that prevent these attacks, as well as any other attacks that violate the considered security properties. The proposed improvements can be easily implemented in the terminals and do not affect the cards in circulation.

Kevin RuddABC: Journalists in China and Rio Tinto

E&OE TRANSCRIPT
TV INTERVIEW
ABC AFTERNOON BRIEFING
9 SEPTEMBER 2020

Topics: Mike Smith, Bill Birtles, Cheng Lei, Rio Tinto and Queensland’s health border

Patricia Karvelas
Kevin Rudd, welcome.

Kevin Rudd
Good to be on the program, Patricia.

Patricia Karvelas
Does the treatment of the ABC and the AFR China correspondents suggest that the relationship between Australia and China has entered a new, more dangerous phase?

Kevin Rudd
I believe it’s been trending that way for quite some time. And without being in the business of, you know, apportioning responsibility for that, the bottom line is the trajectory is all negative. So this is just one further step in that direction and deeply disturbing if your fundamental interest here is press freedom, and your interest is the proper protection of foreign correspondents in an authoritarian country like China.

Patricia Karvelas
Is there any doubt in your mind that these two correspondents were caught up in politics?

Kevin Rudd
No, listen, I actually know these two guys reasonably well. They’re highly professional journalists and when I’ve been in China myself, I’ve spent time with them. And these are journalists just doing the professional duties of their task. To infer that these folks were somehow involved in some, you know, nefarious plot against the Chinese government is just laughable. Absolutely laughable. If that was the case then, you know, you would have every foreign correspondent in Beijing up on the witness stand at the moment. It’s just nonsensical.

Patricia Karvelas
Okay. But these are Australians that have been targeted. What do you read into that? Should we be particularly concerned about the fact that they are Australians — obviously we are concerned about that — but does that demonstrate that the relationship with Australia is particularly troubled?

Kevin Rudd
Look at the end of the day, at this stage, we don’t know what has actually driven this particular, you know, roundup of these two Australian journalists. We know that it’s directly related to the Cheng Lei case and we don’t know what underpins that. And she’s an Australian citizen, and she needs to have her rights properly protected by the Australian Government, and I’m confident that DFAT is doing everything it can on that score. But in the broader bilateral relationship, I think you’re right. When Beijing looks at the world at the moment, it sees its number one problem as being the United States and the collapse in the US-China relationship which is now in its worst state in 50 years. And secondly, in terms of its most adverse relationships abroad, it would then place Australia, and I simply base that on what I read in each day’s official commentary in the Chinese media.

Patricia Karvelas
So is this retaliation? Is that how we should see this?

Kevin Rudd
As I said, we don’t know the full details here, other than that the Chinese themselves have confirmed that it’s related to the Cheng Lei case. But what underpins that in terms of her Australian citizenship, in terms of any other matters that she was engaged in, we don’t know. But certainly the overall frame of the Australia-China relationship is a factor here, and including the accusations contained in today’s Global Times alleging that Australian intelligence officials engaged with Chinese correspondents in their homes here in Australia on the 26th of June.

Patricia Karvelas
Bill Birtles says he was questioned about the case of the detained Chinese-Australian broadcaster you’ve mentioned, Cheng Lei. Do you see these incidents as related? What does that demonstrate? And how concerned are you about Ms Cheng Lei?

Kevin Rudd
Well, she’s an Australian citizen and I’ve met her as well, in fact, at a conference I was attending in Beijing, from memory, last November. And as with any Australian, whoever they work for, and despite the fact that she worked for China’s international media arm CGTN, we’ve got a responsibility to do everything we can for her wellbeing as well. But I think if we stand back from the individual cases, there are two big facts we should focus on. One we’ve touched on, which is the spiralling nature of the Australia-China relationship, and I’ve not seen anything like it in my 35-years-plus of working on Australia-China relations. But the second one is the changing nature of Chinese domestic politics. China is increasingly cracking down on all forms of dissent, both real and imagined, and as a result what we’ve seen in Chinese domestic politics is the assertion of greater and greater powers by China’s police, security and intelligence apparatus. And the traditional constraints exercised, for example, by the Chinese Foreign Ministry, not just in this case but in others, are being pushed to one side.

Patricia Karvelas
Chinese authorities have confirmed that Ms Cheng is suspected of endangering national security. What does that suggest about how she’ll be dealt with?

Kevin Rudd
Well, the Chinese domestic national security law, quite apart from the one which we focused on recently which has been legislated for Hong Kong, is draconian, and the terms used in it are wide-reaching, and provide, you know, maximum leverage for the prosecutors to go after an individual. Regrettably, the precedent in Chinese judicial practice is that once a formal investigation has been launched, as I understand has happened in Cheng Lei’s case, then the prospects of securing a good outcome are radically reduced. I’ve seen this in so many cases over the years. That should not prevent the Australian Government from doing everything it possibly can, but I am concerned about where this case has already reached given that she was only arrested in the middle of August.

Patricia Karvelas
Do you see her case as an example of what’s being called hostage diplomacy?

Kevin Rudd
I don’t believe so, but I obviously am constrained by what we don’t know. What I have been concerned about and have been intimately — engaged is the wrong term — but in discussions with various people about is the case of the two Canadians who have now been incarcerated in China for well over a year in what was plainly a retaliatory action by the Chinese government over the detention of Madam Meng by the Canadian government at the request of the United States, Madam Meng being the daughter of the owner and chief executive of Huawei, China’s major 5G telecommunications company. So China has acted in this way already in relation to Canada. I presume that was the basis upon which the Australian government issued its own travel advisory in early July about the risks faced by Australians traveling in China. But on these circumstances concerning Cheng Lei, I’d be speculating rather than providing hard analysis, Patricia.

Patricia Karvelas
You say in 35 years of watching you haven’t seen relations this low or deteriorate quite like they have. You’ve talked about responsibility here, who is responsible? Is it ultimately China and the Chinese regime that’s making this relationship untenable?

Kevin Rudd
Well, after the last Australian election, I went and saw Mr Morrison and was very blunt with him: this is a difficult relationship. It was difficult when I was in office, and it’s difficult for him. So I’m not about to pretend that it’s easy managing a relationship with an authoritarian state which is now the world’s second-largest economy, is likely to become the largest during the course of the next 10 years, and whose foreign policy is becoming increasingly assertive. There’s a high degree of difficulty here. What I have been critical of in the Australian public media is the fact that every time an issue arises in the Australia-China relationship, I’ve seen a knee-jerk reaction by some in the Australian Government and various ministers to pull out the megaphone within 30 minutes and to take what is a problem and, frankly, play it into Australian domestic politics. So China is difficult to deal with, but I think the current Australian government from time to time has been less than skilled in its handling. I’ve got to say, however, the professionalism of DFAT, including Ambassador Graham Fletcher’s handling of this most recent case in Beijing and that of the Consul-General in Shanghai, has been first class, and they should be congratulated for their efforts.

Patricia Karvelas
And how should the Morrison government deal with these issues? Clearly, they’ve been pretty key here too, as have the media companies involved, both the ABC, our own Managing Director, and the Australian Financial Review. How do you think the Morrison government should be behaving now?

Kevin Rudd
Look on this question of Australian media access to China, which is important as a public policy objective in its own right, and broader Western media access to China, bearing in mind we’ve had something like 17 American journalists booted out in one form or another over the last year or two. What I would suggest is that the Australian Government and the media companies keep their powder dry for a bit. What China has done in my judgment is unacceptable in the treatment of these two individuals, these two Australian journalists, but there’s a structural interest here which is to try and find an opportunity for the re-establishment of an Australian media presence in China. So until we have greater clarity in terms of the direction of Chinese overall policy on these questions, I mean, my counsel, if I was in government at the time, would be to keep our powder dry. If possible, rebuild this element of the relationship. And if not take whatever actions are then necessary. But it’s far better to be cautious about your response to these questions than to automatically jump into the domestic political trenches and try and score domestic political points in Australia by beating your chest and showing how hairy-chested you are at the same time.

Patricia Karvelas
How will the fact that there are no correspondents from Australian media left in China affect how this incredibly important subject is covered?

Kevin Rudd
Well, Patricia, to state the bleeding obvious it’s not going to help. Both Birtles and the correspondent from the Australian Financial Review, Mike Smith, are first-class journalists and they write well, and so their coverage has been important in terms of the continuing awareness of Australians not just of the Australia-China relationship, but frankly, its fundamentals which is: what’s going on in China domestically? Ultimately, what we see at this end is driven by Chinese domestic politics and the Chinese domestic economy. China’s foreign policy is ultimately a product of those factors. And the more you have effective analysis of those things, the better we are in Australia and around the world in framing our own response to this phenomenally complex country which is now becoming increasingly assertive, requiring an increasingly sophisticated response from governments like Australia.

Patricia Karvelas
Just a couple of other issues. It would be odd for me not to ask is it reasonable for Queensland to continue keeping its border with New South Wales closed given how well authorities there have kept the virus under control?

Kevin Rudd
Look, I think Annastacia Palaszczuk has played this pretty well from day one, and she, like the other state premiers, has been highly attentive to the professional advice of the chief medical officer. I mean, pardon me for being old fashioned about these things, Patricia, but I always think you should listen to the experts. We do believe, at least on our side of politics, in something called objective science. And if the experts are saying ‘be cautious about this’ then so you should be. And the other thing to say is, what would I do if I was in her position right now? Well, you could have listened to the leader of the LNP in Queensland, Ms Frecklington, and opened the borders to the south many, many months ago, and run all sorts of risks, including people coming in from Victoria as well. Annastacia Palaszczuk resisted those demands. I think she’s been proven to be right as a result. So if I was in her shoes today, I think I’d be doing exactly what she’s doing, which is listening to what the chief medical officer is advising her to do.

Patricia Karvelas
Just finally, I spoke to Noel Pearson a little while ago about, of course, this issue around Rio Tinto and its conduct around the destruction of the Juukan Gorge caves. And of course, you know, you as prime minister delivered the apology and have, I know, a long interest in Indigenous affairs. What’s your assessment of the way the company has behaved here? And what is the sort of lasting legacy here that you’re concerned about?

Kevin Rudd
For Rio Tinto — which will soon be known in Australia as Rio TNT, I think — for Rio Tinto, it has blown up its own reputation as anything approximating a responsible corporate citizen in Australia. What we know already from the parliamentary inquiry is that, far from this being some sort of shock and surprise or accident, senior management within Rio Tinto were in fact already taking legal advice and PR advice about how to handle a reaction once the detonation occurred. So for the company, they should be hauled over the coals. If there’s a fining regime in place, it should be deployed. The executives responsible for this decision should no longer be executives. If I was a shareholder of Rio Tinto… This is just appalling for Indigenous people. Look, you know, something as old as 40,000 years. This is as old as the ancient caves of Altamira and Lascaux in Europe. And to simply allow someone to walk in with a stick of jelly and bang, you’re gone? I mean, this will be devastating for the Indigenous people who in this generation are responsible for the custodianship of the land, which includes these ancient sites. So for our Indigenous brothers and sisters, this has been an appalling development. For the company, I think their reputation now is mud.

Patricia Karvelas
Kevin Rudd, many thanks for joining us this afternoon.

Kevin Rudd
Good to be on the program.

The post ABC: Journalists in China and Rio Tinto appeared first on Kevin Rudd.

Worse Than FailureWeb Server Installation

Once upon a time, there lived a man named Eric. Eric was a programmer working for the online development team of a company called The Company. The Company produced Media; their headquarters were located on The Continent where Eric happily resided. Life was simple. Straightforward. Uncomplicated. Until one fateful day, The Company decided to outsource their infrastructure to The Service Provider on Another Continent for a series of complicated reasons that ultimately benefited The Budget.

Part of Eric's job was to set up web servers for clients so that they could migrate their websites to The Platform. Previously, Eric would have provisioned the hardware himself. Under the new rules, however, he had to request that The Service Provider do the heavy lifting instead.

On Day 0 of our story, Eric received a server request from Isaac, a representative of The Client. On Day 1, Eric asked for the specifications for said server, which were delivered on Day 2. Day 2 being just before a long weekend, it was Day 6 before the specs were delivered to The Service Provider. The contact at The Service Provider, Thomas, asked if there was a deadline for this migration. Eric replied with the hard cutover date almost two months hence.

This, of course, would prove to be a fatal mistake. The following story is true; only the names have been changed to protect the guilty. (You might want some required listening for this ... )

Day 6

  • Thomas delivers the specifications to a coworker, Ayush, without requesting a GUI.
  • Ayush declares that the servers will be ready in a week.

Day 7

  • Eric informs The Client that the servers will be delivered by Day 16, so installations could get started by Day 21 at the latest.
  • Ayush asks if The Company wants a GUI.

Day 8

  • Eric replies no.

Day 9

  • Another representative of The Service Provider, Vijay, informs Eric that the file systems were not configured according to Eric's request.
  • Eric replies with a request to configure the file systems according to the specification.
  • Vijay replies with a request for a virtual meeting.
  • Ayush tells Vijay to configure the system according to the specification.

Day 16

  • The initial delivery date comes and goes without further word. Eric's emails are met with tumbleweeds. He informs The Client that they should be ready to install by Day 26.

Day 19

  • Ayush asks if any ports other than 22 are needed.
  • Eric asks if the servers are ready to be delivered.
  • Ayush replies that if port 22 needs to be opened, that will require approval from Eric's boss, Jack.

Day 20

  • Ayush delivers the server names to Eric as an FYI.

Day 22

  • Thomas asks Eric if there's been any progress, then asks Ayush to schedule a meeting to discuss between the three of them.

Day 23

  • Eric asks for the login credentials to the aforementioned server, as they were never provided.
  • Vijay replies with the root credentials in a plaintext email.
  • Eric logs in and asks for some network configuration changes to allow admin access from The Client's network.
  • Mehul, yet another person at The Service Provider, asks for the configuration change request to be delivered via Excel spreadsheet.
  • Eric tells The Client that Day 26 is unlikely, but they should probably be ready by end of Day 28, still well before the hard deadline of Day 60.

Day 28

  • The Client reminds Eric that they're decommissioning the old datacenter on Day 60 and would very much like to have their website moved by then.
  • Eric tells Mehul that the Excel spreadsheet requires information he doesn't have. Could he make the changes?
  • Thomas asks Mehul and Ayush if things are progressing. Mehul replies that he doesn't have the source IP (which was already sent). Thomas asks whom they're waiting for. Mehul replies and claims that Eric requested access from the public Internet.
  • Mehul escalates to Jack.
  • Thomas reminds Ayush and Mehul that if their work is pending some data, they should work toward getting that obstacle solved.

Day 29

  • Eric, reading the exchange from the evening before, begins to question his sanity as he forwards the original email back over, along with all the data they requested.

Day 30

  • Mehul replies that access has been granted.

Day 33

  • Eric discovers he can't access the machine from inside The Client's network, and requests opening access again.
  • Mehul suggests trying from the Internet, claiming that the connection is blocked by The Client's firewall.
  • Eric replies that The Client's datacenter cannot access the Internet, and that the firewall is configured properly.
  • Jack adds more explicit instructions for Mehul as to exactly how to investigate the network problem.

Day 35

  • Mehul asks Eric to try again.

Day 36

  • It still doesn't work.
  • Mehul replies with instructions to use specific private IPs. Eric responds that he is doing just that.
  • Ayush asks if the problem is fixed.
  • Eric reminds Thomas that time is running out.
  • Thomas replies that the firewall setting changes must have been stepped on by changes on The Service Provider's side, and he is escalating the issue.

Day 37

  • Mehul instructs Eric to try again.

Day 40

  • It still doesn't work.

Day 41

  • Mehul asks Eric to try again, as he has personally verified that it works from the Internet.
  • Eric reminds Mehul that it needs to work from The Client's datacenter—specifically, for the guy doing the migration at The Client.

Day 42

  • Eric confirms that the connection does indeed work from Internet, and that The Client can now proceed with their work.
  • Mehul asks if Eric needs access through The Company network.
  • Eric replies that the connection from The Company network works fine now.

Day 47

  • Ayush requests a meeting with Eric about support handover to operations.

Day 48

  • Eric asks what support is this referring to.
  • James (The Company, person #3) replies that it's about general infrastructure support.

Day 51

  • Eric notifies Ayush and Mehul that server network configurations were incorrect, and that after fixing the configuration and rebooting the server, The Client can no longer log in to the server because the password no longer works.
  • Ayush instructs Vijay to "setup the repository ASAP." Nobody knows what repository he's talking about.
  • Vijay responds that "licenses are not updated for The Company servers." Nobody knows what licenses he is talking about.
  • Vijay sends original root credentials in a plaintext email again.

Day 54

  • Thomas reminds Ayush and Mehul that the servers need to be moved by day 60.
  • Eric reminds Thomas that the deadline was extended to the end of the month (day 75) the previous week.
  • Eric replies to Vijay that the original credentials sent no longer work.
  • Vijay asks Eric to try again.
  • Mehul asks for the details of the unreachable servers, which were mentioned in the previous email.
  • Eric sends a summary of current status (can't access from The Company's network again, server passwords not working) to Thomas, Ayush, Mehul and others.
  • Vijay replies, "Can we discuss on this."
  • Eric replies that he's always reachable by Skype or email.
  • Mehul says that access to private IPs is not under his control. "Looping John and Jared," but no such people were added to the recipient list. Mehul repeats that from The Company's network, private IPs should be used.
  • Thomas tells Eric that the issue has been escalated again on The Service Provider's side.
  • Thomas complains to Roger (The Service Provider, person #5), Theodore (The Service Provider, person #6) and Matthew (The Service Provider, person #7) that the process isn't working.

Day 55

  • Theodore asks Peter (The Service Provider, person #8), Mehul, and Vinod (The Service Provider, person #9) what is going on.
  • Peter responds that websites should be implemented using Netscaler, and asks no one in particular if they could fill an Excel template.
  • Theodore asks who should be filling out the template.
  • Eric asks Thomas if he still thinks the sites can be in production by the latest deadline, Day 75, and if he should install the server on AWS instead.
  • Thomas asks Theodore if configuring the network really takes two weeks, and tells the team to try harder.

Day 56

  • Theodore replies that configuring the network doesn't take two weeks, but getting the required information for that often does. Also that there are resourcing issues related to such configurations.
  • Thomas suggests a meeting to fill the template.
  • Thomas asks if there's any progress.

Day 57

  • Ayush replies that if The Company provides the web service name, The Service Provider can fill out the rest.
  • Eric delivers a list of site domains and required ports.
  • Thomas forwards the list to Peter.
  • Tyler (The Company, person #4) informs Eric that any AWS servers should be installed by Another Service Provider.
  • Eric explains that the idea was that he would install the server on The Company's own AWS account.
  • Paul (The Company, person #5) informs Eric that all AWS server installations are to be done by Another Service Provider, and that they'll have time to do it ... two months down the road.
  • Kane (The Company, person #6) asks for a faster solution, as they've been waiting for nearly two months already.
  • Eric sets up the server on The Company's AWS account before lunch and delivers it to The Client.

Day 58

  • Peter replies that he needs a list of fully qualified domain names instead of just the site names.
  • Eric delivers a list of current blockers to Thomas, Theodore, Ayush and Jagan (The Service Provider, person #10).
  • Ayush instructs Vijay and the security team to check network configuration.
  • Thomas reminds Theodore, Ayush and Jagan to solve the issues, and reminds them that the original deadline for this was a month ago.
  • Theodore informs everyone that the servers' network configuration wasn't compatible with the firewall's network configuration, and that Vijay and Ayush are working on it.

Day 61

  • Peter asks Thomas and Ayush if they can get the configuration completed tomorrow.
  • Thomas asks Theodore, Ayush, and Jagan if the issues are solved.

Day 62

  • Ayush tells Eric that they've made configuration changes, and asks if he can now connect.

Day 63

  • Eric replies to Ayush that he still has trouble connecting to some of the servers from The Company's network.
  • Eric delivers network configuration details to Peter.
  • Ayush tells Vijay and Jai (The Service Provider, person #11) to reset passwords on servers so Eric can log in, and asks for support from Theodore with network configurations.
  • Matthew replies that Theodore is on his way to The Company.
  • Vijay resets the password and sends it to Ayush and Jai.
  • Ayush sends the password to Eric via plaintext email.
  • Theodore asks Eric and Ayush if the problems are resolved.
  • Ayush replies that connection from The Company's network does not work, but that the root password was emailed.

Day 64

  • Tyler sends an email to everyone and cancels the migration.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram US Space Cybersecurity Directive

The Trump Administration just published “Space Policy Directive – 5”: “Cybersecurity Principles for Space Systems.” It’s pretty general:

Principles. (a) Space systems and their supporting infrastructure, including software, should be developed and operated using risk-based, cybersecurity-informed engineering. Space systems should be developed to continuously monitor, anticipate, and adapt to mitigate evolving malicious cyber activities that could manipulate, deny, degrade, disrupt, destroy, surveil, or eavesdrop on space system operations. Space system configurations should be resourced and actively managed to achieve and maintain an effective and resilient cyber survivability posture throughout the space system lifecycle.

(b) Space system owners and operators should develop and implement cybersecurity plans for their space systems that incorporate capabilities to ensure operators or automated control center systems can retain or recover positive control of space vehicles. These plans should also ensure the ability to verify the integrity, confidentiality, and availability of critical functions and the missions, services, and data they enable and provide.

These unclassified directives are typically so general that it’s hard to tell whether they actually matter.

News article.

Krebs on SecurityMicrosoft Patch Tuesday, Sept. 2020 Edition

Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users.

The majority of the most dangerous or “critical” bugs deal with issues in Microsoft’s various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120.

Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server.

“That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” said Dustin Childs, of Trend Micro’s Zero Day Initiative. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon. This should be your top priority.”

Also not great for companies to have around is CVE-2020-1210, which is a remote code execution flaw in supported versions of Microsoft Sharepoint document management software that bad guys could attack by uploading a file to a vulnerable Sharepoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another Sharepoint problem that’s been exploited for cybercriminal gains since April 2019.

Microsoft fixed at least five other serious bugs in Sharepoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon Sharepoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future.

Todd Schell at Ivanti reminds us that Patch Tuesday isn’t just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws that are rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it’s time to update. Completely closing out Chrome and restarting it should apply the pending updates.

Once again, there are no security updates available today for Adobe’s Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cryptogram More on NIST’s Post-Quantum Cryptography

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Cory DoctorowMy first-ever Kickstarter: the audiobook for Attack Surface, the third Little Brother book

I have a favor to ask of you. I don’t often ask readers for stuff, but this is maybe the most important ask of my career. It’s a Kickstarter – I know, ‘another crowdfunder?’ – but it’s:

a) Really cool;

b) Potentially transformative for publishing; and

c) Anti-monopolistic.

Here’s the tldr: Attack Surface – AKA Little Brother 3 – is coming out in 5 weeks. I retained audio rights and produced an amazing edition that Audible refuses to carry. You can pre-order the audiobook, ebook (and previous volumes), DRM- and EULA-free.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

That’s the summary, but the details matter. First: the book itself. ATTACK SURFACE is a standalone Little Brother book about Masha, the young woman from the start and end of the other two books; unlike Marcus, who fights surveillance tech, Masha builds it.

Attack Surface is the story of how Masha has a long-overdue moral reckoning with the way that her work has hurt people, something she finally grapples with when she comes home to San Francisco.

Masha learns her childhood best friend is leading a BLM-style uprising – and she’s being targeted by the same cyberweapons that Masha built to hunt Iraqi insurgents and post-Soviet democracy movements.

I wrote Little Brother in 2006, it came out in 2008, and people tell me it’s “prescient” because the digital human rights issues it grapples with – high-tech authoritarianism and high-tech resistance – are so present in our current world.

But it’s not so much prescient as observant. I wrote Little Brother during the Bush administration’s vicious, relentless, tech-driven war on human rights. Little Brother was a bet that these would not get better on their own.

And it was a bet that tales of seizing the means of computation would inspire people to take up digital arms of their own. It worked. Hundreds of cryptographers, security experts, cyberlawyers, etc have told me that Little Brother started them on their paths.

ATTACK SURFACE – a technothriller about racial injustice, police brutality, high-tech turnkey totalitarianism, mass protests and mass surveillance – was written between May 2016 and Nov 2018, before the current uprisings and the tech worker walkouts.

https://twitter.com/search?q=%20(%23dailywords)%20(from%3Adoctorow)%20%22crypto%20wars%22&src=typed_query&f=live

But just as with Little Brother, the seeds of the current situation were all around us in 2016, and if Little Brother inspired a cohort of digital activists, I hope Attack Surface will give a much-needed push to a group of techies (currently) on the wrong side of history.

As I learned from Little Brother, there is something powerful about technologically rigorous thrillers about struggles for justice – stories that marry excitement, praxis and ethics. Of all my career achievements, the people I’ve reached this way matter the most.

Speaking of careers and ethics. As you probably know, I hate DRM with the heat of 10000 suns: it is a security/privacy nightmare, a monopolist’s best friend, and a gross insult to human rights. As you may also know, Audible will not carry any audiobooks unless they have DRM.

Audible is Amazon’s audiobook division, a monopolist with a total stranglehold on the audiobook market. Audiobooks currently account for almost as much revenue as hardcovers, and if you don’t sell on Audible, you sacrifice about 95% of that income.

That’s a decision I’ve made, and it means that publishers are no longer willing to pay for my audiobook rights (who can blame them?). According to my agent, living my principles this way has cost me enough to have paid off my mortgage and maybe funded my retirement.

I’ve tried a lot of tactics to get around Audible; selling through the indies (libro.fm, downpour.com, etc), through Google Play, and through my own shop (craphound.com/shop).

I appreciate the support there but it’s a tiny fraction of what I’m giving up – both in terms of dollars and reach – by refusing to lock my books (and my readers) (that’s you) to Amazon’s platform for all eternity with Audible DRM.

Which brings me to this audiobook.

Look, this is a great audiobook. I hired Amber Benson (a brilliant writer and actor who played Tara on Buffy), Skyboat Media and director Cassandra de Cuir, and Wryneck Studios, and we produced a 15h long, unabridged masterpiece.

It’s done. It’s wild. I can’t stop listening to it. It drops on Oct 13, with the print/ebook edition.

It’ll be on sale in all audiobook stores (except Audible) on the 13th, for $24.95.

But! You can get it for a mere $20 via my first Kickstarter.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

What’s more, you can pre-order the ebook – and also buy the previous ebooks and audiobooks (read by Wil Wheaton and Kirby Heyborne) – all DRM free, all free of license “agreements.”

The deal is: “You bought it, you own it, don’t violate copyright law and we’re good.”

And here’s the groundbreaking part. For this Kickstarter, I’m the retailer. If you pre-order the ebook from my KS, I get the 30% that would otherwise go to Jeff Bezos – and I get the 25% that is the standard ebook royalty.

This is a first-of-its-kind experiment in letting authors, agents, readers and a major publisher deal directly with one another in a transaction that completely sidesteps the monopolists who have profited so handsomely during this crisis.

Which is where you come in: if you help me pre-sell a ton of ebooks and audiobooks through this crowdfunder, it will show publishing that readers are willing to buy their ebooks and audiobooks without enriching a monopolist, even if it means an extra click or two.

So, to recap:

Attack Surface is the third Little Brother book

It aims to radicalize a generation of tech workers while entertaining its audience as a cracking, technologically rigorous thriller

The audiobook is amazing, read by the fantastic Amber Benson

If you pre-order through the Kickstarter:

You get a cheaper price than you’ll get anywhere else

You get a DRM- and EULA-free purchase

You’ll fight monopolies and support authorship

If you’ve ever enjoyed my work and wondered how you could pay me back: this is it. This is the thing. Do this, and you will help me artistically, professionally, politically, and (ofc) financially.

Thank you!

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

PS: Tell your friends!

Cory DoctorowAttack Surface Kickstarter Promo Excerpt!

This week’s podcast is a generous excerpt – 3 hours! – of the audiobook for Attack Surface, the third Little Brother book, which is available for pre-order today on my very first Kickstarter.

This Kickstarter is one of the most important moments in my professional career, an experiment to see if I can viably publish audiobooks without caving in to Amazon’s monopolistic requirement that all Audible books be sold with DRM that locks them to Amazon’s corporate platform… forever. If you’ve ever wanted to thank me for this podcast or my other work, there has never been a better way than to order the audiobook (or ebook) (or both!).

Attack Surface is a standalone novel, meaning you can enjoy it without reading Little Brother or its sequel, Homeland. Please give this extended preview a listen and, if you enjoy it, back the Kickstarter and (this is very important): TELL YOUR FRIENDS.

Thank you, sincerely.

MP3

Kevin RuddSMH: Scott Morrison is Yearning for a Donald Trump victory

Published in The Sydney Morning Herald on 08 September 2020

The PM will be praying for a Republican win in the US to back up his inaction on climate and the Paris Agreement.

A year out from Barack Obama’s election in 2008, John Howard made a stunning admission that he thought Americans should be praying for a Republican victory. Ideologically this was unremarkable. But the fact Howard said so publicly was because he knew just how uncomfortable an Obama victory would be for him given his refusal to withdraw our troops from Iraq.

Fast forward more than a decade, and Scott Morrison – even in the era of Donald Trump – will also be yearning desperately for a Republican victory come November. But this time it is the conservative recalcitrance on a very different issue that risks Australia being isolated on the world stage: climate change.

And as the next summer approaches, Australians will be reminded afresh of how climate change, and its impact on our country and economy, has not gone away.

Former vice-president Joe Biden has put at the centre of his campaign a historic plan to fight climate change both at home and abroad. On his first day in office, he has promised to return the US to the Paris Agreement. And he recently unveiled an unprecedented $2 trillion green investment plan, including the complete decarbonisation of the domestic electricity system by 2035.

By contrast, Morrison remains hell-bent on Australia doing its best to disrupt global momentum to tackle the climate crisis and burying our head in the sand when it comes to embracing the new economic opportunities that come with effective climate change action.

As a result, if Biden is elected this November, we will be on track for a collision course with our American ally in a number of areas.

First, Morrison remains recklessly determined to carry over so-called “credits” from the overachievement of our 2020 Kyoto target to help meet Australia’s already lacklustre 2030 target under the new Paris regime.

No other government in the world is digging their heels in like this. None. It is nothing more than an accounting trick to allow Australia to do less. Perhaps the greatest irony is that this “overachievement” was also in large part because of the mitigation actions of our government.

That aside, these carbon credits also do nothing for the atmosphere. At worst, using them beyond 2020 could be considered illegal and only opens the back door for other countries to also do less by following Morrison’s lead.

This will come to a head at the next UN climate talks in Glasgow next year. While Australia has thus far been able to dig in against objections by most of the rest of the world, a Biden victory would only strengthen the hand of the UK hosts to simply ride over the top of any further Australian intransigence. Morrison would be foolhardy to believe that Boris Johnson’s government will burn its political capital at home and abroad to defend the indefensible Australian position.

Second, unlike 114 countries around the world, Morrison remains hell-bent on ignoring the central promise of Paris: that all governments increase their 2030 targets by the time they get to Glasgow. That’s because even if all those commitments were fully implemented, it would only give the planet one-third of what is necessary to keep average temperature increases within 1.5 degrees by 2100, as the Paris Agreement requires. This is why governments agreed to increase their ambition every five years as technologies improved, costs lowered and political momentum built.

In 2014, the Liberal government explained our existing Paris target on the basis that it was the same as what the Americans were doing. In reality, the Obama administration planned to achieve the same cut of 26 to 28 per cent on 2005 emissions by 2025 – not 2030 as we pledged and sought to disguise.

So based on the logic that what America does is this government’s benchmark for its global climate change commitments, if the US is prepared to increase its Paris target (as it will under Biden), so too should we. Biden himself has not just committed the US to the goal of net zero emissions by 2050, but has undertaken to embed it in legislation as countries such as Britain and New Zealand have done, and to rally others to do the same.

Unsurprisingly, despite the decisions of 121 countries around the world, Morrison also refuses to even identify a timeline for achieving the Paris Agreement’s long-term goal to reach net zero emissions. As the science tells us, this needs to be by 2050 to have any shot of protecting the world’s most vulnerable populations – including in the Pacific – and saving Australia from a rolling apocalypse of weather-related disasters that will wreak havoc on our economy.

For our part, the government insists that it won’t “set a target without a plan”. But governments exist to do the hard work. And politically, it goes against the breadth of domestic support for a net zero by 2050 goal, including from the peak business, industry and union groups, the top bodies for energy and agriculture (two sectors that together account for almost half of our emissions), as well as our national airline, two largest mining companies, every state and territory government, and even a majority of conservative voters.

As Tuvalu’s former prime minister Enele Sopoaga reminded us recently, the fact that Morrison himself looked Pacific island leaders in the eye last year and promised to develop such a long-term plan – a promise he reiterated at the G20 – also shows we risk being a country that does not do what we say. For those in the Pacific, this just rubs salt into the wound of Morrison’s decision to blindly follow Trump’s lead in halting payments to the Green Climate Fund (something Biden would also reverse), requiring them to navigate a bureaucratic maze of individual aid programs as a result.

Finally, Biden has undertaken to also align trade and climate policy by imposing carbon tariffs against those countries that fail to do their fair share in global greenhouse gas reductions. The EU is in the process of embracing the same approach. So if Morrison doesn’t act, he’s going to put our entire export sector at risk of punitive tariffs because the Liberals have so consistently failed to take climate change seriously.

Under Trump, Morrison has been able to get one giant leave pass for doing nothing on climate. But under Biden, he’ll be seen as nothing more than the climate change free-loader that he is. As he will by the rest of the world. And our economy will be punished as a result.

The post SMH: Scott Morrison is Yearning for a Donald Trump victory appeared first on Kevin Rudd.

LongNowTime-Binding and The Music History Survey

Musicologist Phil Ford, co-host of the Weird Studies podcast, makes an eloquent argument for the preservation of the “Chants to Minimalism” Western Music History survey—the standard academic curriculum for musicology students, akin to the “fish, frogs, lizards, birds” evolutionary spiral taught in bio classes—in an age of exponential change and an increased emphasis on “relevance” over the remembrance of canonical works:

Perhaps paradoxically, the rate of cultural change increases in proportional measure to the increase in cultural memory. Writing and its successor media of prosthetic memory enact a contradiction: the easy preservation of cultural memory enables us to break with the past, to unbind time. At its furthest extremes, this is manifested in the familiar and dismal spectacle of fascist and communist regimes, impelled by intellectual notions permitted by the intensified time-binding of literacy, imagining utopias that will ‘wipe the slate clean’ and trying to force people to live in a world entirely divorced from the bound time of social/cultural tradition.

See, for instance, Mao Zedong’s crusade to destroy Tibetan Buddhism by the erasure of context that held the Dalai Lama’s social role in place for fourteen generations. How is the culture to find a fifteenth Dalai Lama if no child can identify the relics of a prior Dalai Lama? Ironically this speaks to larger questions of the agency of landscapes and materials, and how it isn’t just our records as we understand them that help scaffold our identity; but that we are in some sense colonial organisms inextricable from, made by, our environments — whether built or wild. As recounted in countless change-of-station stories, monarchs and other leaders, like whirlpools or sea anemones, dissolve when pulled out of the currents that support them.

That said, the current isn’t just stability but change. Both novelty and structure are required to bind time. By pushing to extremes, modernity self-undermines, imperils its own basis for existence; and those cultures that slam on the brakes and dig into conservative tradition risk self-suffocation, or being ripped asunder by the friction of collision with the moving edge of history:

Modernity is (among other things) the condition in which time-binding is threatened by its own exponential expansion, and yet where it’s not clear exactly how we are to slow its growth.  Very modern people are reflexively opposed to anything that would slow down the acceleration: for them, the essence of the human is change. Reactionaries are reflexively opposed to anything that will speed up acceleration: for them, the essence of the human is continuity. Both are right!  Each side, given the opportunity to realize its imagined utopia of change or continuity, would make a world no sensible person would be caught dead in.

Ultimately, therefore, a conservative-yet-innovative balance must be found in the embrace of both new information technologies and their use for preservation of historic repertoires. When on a rocket into space, a look back at the whole Earth is essential to remember where we come from:

The best argument for keeping Sederunt in the classroom is that it is one of the nearly-infinite forms of music that the human mind has contrived, and the memory of those forms — time-binding — is crucial not only to the craft of musicians but to our continued sense of what it is to be a human being.

This isn’t just future-shocked reactionary work but a necessary integrative practice that enables us to reach beyond:

To tell the story of who we are is to engage in the scholar’s highest mission. It is the gift that shamans give their tribe.

Worse Than FailureCodeSOD: Sleep on It

If you're fetching data from a remote source, "retry until a timeout is hit" is a pretty standard pattern. And with that in mind, this C++ code from Auburus doesn't look like much of a WTF.

bool receiveData(uint8_t** data, std::chrono::milliseconds timeToWait) {
  start = now();
  while ((now() - start) < timeToWait) {
    if (/* successfully receive data */) {
      return true;
    }
    std::this_thread::sleep_for(100ms);
  }
  return false;
}

Track the start time. While the difference between the current time and the start is less than our timeout, try and get the data. If you don't, sleep for 100ms, then retry.

This all seems pretty reasonable, at first glance. We could come up with better ways, certainly, but that code isn't quite a WTF.

This code is:

// The ONLY call to that function
receiveData(&dataPtr, 100ms);

By calling this with a 100ms timeout, and because we hard-coded in a 100ms sleep, we've guaranteed that we will never retry: the one failed attempt is followed by a 100ms sleep, after which the elapsed time already equals the timeout and the loop exits. That may or may not be intentional, and that's what really bugs me about this code. Maybe they meant to do that (because they originally retried, and found it caused other bugs?). Maybe they didn't. But they didn't document it, either in the code or as a commit comment, so we'll never know.
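
If retrying is actually what's wanted, the usual fix is to keep the poll interval well under the deadline, so several attempts fit before time runs out. Here's a minimal sketch of that shape, in Python for brevity; the function and parameter names are illustrative, not from Auburus's codebase:

import time

def receive_data(fetch, timeout_s=0.1, poll_s=0.01):
    # Compute an absolute deadline once, then poll until we pass it.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fetch():  # attempt to receive the data
            return True
        # Sleep a fraction of the timeout, and never past the deadline.
        time.sleep(min(poll_s, max(0.0, deadline - time.monotonic())))
    return False

With a 100ms deadline and a 10ms poll, the loop gets roughly ten attempts instead of exactly one.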

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

LongNowStudy Group for Progress Launches with Discount for Long Now Members

Long Now Member Jason Crawford, founder of The Roots of Progress, is starting up a weekly learning group on progress with a steep discount for Long Now Members:

The Study Group for Progress is a weekly discussion + Q&A on the history, economics and philosophy of progress. Long Now members can get 50% off registration using the link below.


Each week will feature a special guest for Q&A. Confirmed speakers so far include top economists and historians such as Robert J. Gordon (Northwestern, The Rise & Fall of American Growth), Margaret Jacob (UCLA), Richard Nelson (Columbia), Patrick Collison, and Anton Howes. Readings from each author will be given out ahead of time. Participants will also receive a set of readings originally created for the online learning program Progress Studies for Young Scholars: a summary of the history of technology, including advances in materials and manufacturing, agriculture, energy, transportation, communication, and disease.


The group will meet weekly on Sundays, 4:00–6:30pm Pacific, from September 13 through December 13 (recordings will be available privately afterwards). See the full announcement here and register for 50% off with this link.

Worse Than FailureCodeSOD: Classic WTF: Covering All Cases… And Then Some

It's Labor Day in the US, where we celebrate the labor movement and people who, y'know, do actual work. So let's flip back to an old story, which does a lot of extra work. Original -- Remy

Ben Murphy found a developer who liked to cover all of his bases ... then cover the dugout ... then the bench. If you think this method to convert input (from 33 to 0.33) is a bit superfluous, you should see the data validation.

Static Function ConvertPercent(v_value As Double)
  If v_value > 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value < 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = -1 Then
    ConvertPercent = v_value / 100
  Else 
    ConvertPercent = v_value
  End If 
End Function
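
Every reachable branch divides by 100, so the whole ladder collapses to a single expression; the final Else (return the value unchanged) can only ever fire for NaN, which fails every comparison. A sketch of the equivalent, in Python rather than VB:

def convert_percent(value):
    # What every reachable branch of the original actually does.
    return value / 100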


The original article (from 2004!) featured Alex asking for a logo. Instead, let me remind you to submit your WTF. Our stories come from our readers. If nothing else, it's a great chance to anonymously vent about work.
[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Cryptogram Schneier.com is Moving

I’m switching my website software from Movable Type to WordPress, and moving to a new host.

The migration is expected to last from approximately 3 AM EST Monday until 4 PM EST Tuesday. The site will still be visible during that time, but comments will be disabled. (This is to prevent any new comments from disappearing in the move.)

This is not a site redesign, so you shouldn’t notice many differences. Even the commenting system is pretty much the same, though you’ll be able to use Markdown instead of HTML if you want to.

The conversion to WordPress was done by Automattic, who did an amazing job of getting all of the site’s customizations and complexities — this website is 17 years old — to work on a new platform. Automattic is also providing the new hosting on their Pressable service. I’m not sure I could have done it without them.

Hopefully everything will work smoothly.

,

Cryptogram Friday Squid Blogging: Morning Squid

Asa ika means “morning squid” in Japanese.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: We Must Explore the Null!

"Beyond the realm of Null, past the Black Stump, lies the mythical FILE_NOT_FOUND," writes Chris A.


"I know, Zwift, I should have started paying you 50 years ago," Jeff wrote, "But hey, thanks for still giving leeches like me a free ride!"


Drake C. writes, "Must...not...click...forbidden...newsletter!"


"I'm having a hard time picking between these 'Exclusives'. It's a shame they're both scheduled for the same time," wrote Rutger.


"Wait, so is this beer zero raised to hex FF? At that price I'll take 0x02!" wrote Tony B.


Kevin F. writes, "Some weed killers say they're powerful. This one backs up that claim!"


[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram Hacking AI-Graded Tests

The company Edgenuity sells AI systems for grading tests. Turns out that they just search for keywords without doing any actual semantic analysis.
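
If that description is accurate, the "grading" is roughly the following shape. This is a guess at the technique being described, not Edgenuity's actual code:

def grade(answer, keywords):
    # One point per expected keyword present; no semantics involved.
    found = sum(1 for kw in keywords if kw.lower() in answer.lower())
    return found / len(keywords)

# Any answer containing the magic words scores perfectly, meaningful or not:
grade("mitochondria powerhouse cell", ["mitochondria", "powerhouse", "cell"])  # 1.0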

,

Worse Than FailureCodeSOD: Learning the Hard Way

If you want millions in VC funding, mumble the words “machine learning” and “disruption”, and venture capitalists will blunder out of the woods to throw money at your startup.

At its core, ML is really about brute-forcing a statistical model. And today’s code from Norine could have possibly been avoided by applying a little more brute force to the programmer responsible.

This particular ML environment, like many, uses Python to wrap lower-level objects: the ease of Python, coupled with the speed of native/GPU-accelerated code. It has a collection of Model datatypes, and at runtime it needs to decide which concrete Model type to instantiate. If you come from an OO background in pretty much any other language, you’re thinking about factory patterns and abstract classes, but that’s not terribly Pythonic. Not that this developer’s solution is Pythonic either.

def choose_model(data, env):
  ModelBase = getattr(import_module(env.modelpath), env.modelname)
  
  class Model(ModelBase):
    def __init__(self, data, env):
      if env.data_save is None:
        if env.counter == 0:
          self.data = data
        else:
          raise ValueError("data unavailable with counter > 0")
      
      else:
        with open(env.data_save, "r") as df:
          self.data = json.load(df)
      ModelBase.__init__(self, **self.data)
  
  return Model(data, env)

This is an example of metaprogramming. We use import_module to dynamically load a module at runtime (potentially smart: modules may take some time to load, so we shouldn’t load one we don’t know we’re going to use). Then, with getattr, we extract the definition of a class with whatever name is stored in env.modelname.
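
In isolation, that lookup is just the dynamic spelling of an ordinary import. A quick illustration, with hypothetical module and class names:

from importlib import import_module

# Roughly equivalent to: from models.linear import LinearModel
ModelBase = getattr(import_module("models.linear"), "LinearModel")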

This is the model class we want to instantiate. But instead of actually instantiating it, we instead create a new derived class, and slap a bunch of logic and file loading into it.

Then we instantiate and return an instance of this dynamically defined derived class.

There are so many things that make me cringe. First, I hate putting file access in the constructor. That’s maybe more personal preference, but I hate constructors which can possibly throw exceptions. See also the raise ValueError, where we explicitly throw exceptions. That’s just me being picky, though, and it’s not like this constructor will ever get called from anywhere else.

More concretely bad, these kinds of dynamically defined classes can have some… unusual effects in Python. For example, in Python 2 (which this is), each call to choose_model will tag the returned instance with the same type, regardless of which base class it used. Since this method might potentially be using a different base class depending on the env passed in, that’s asking for confusion. You can route around these problems, but they’re not doing that here.

But far, far more annoying is that the super-class constructor, ModelBase.__init__, isn’t called until the end.

You’ll note that our child class manipulates self.data, and while it’s not pictured here, our base model classes? They also use a property called data, but for a different purpose. So our child class inits a child class property, specifically to build a dictionary of key/value pairs, which it then passes as kwargs, or keyword arguments (the ** operator) to the base class constructor… which then overwrites the self.data our child class was using.
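
Boiled down to a toy example (hypothetical classes, purely to show the overwrite):

class Base(object):
    def __init__(self, **kwargs):
        # The base class also uses "data", for its own purposes.
        self.data = sorted(kwargs)

class Child(Base):
    def __init__(self):
        self.data = {"alpha": 1, "beta": 2}  # the child's working dict...
        Base.__init__(self, **self.data)     # ...gets clobbered right here

print(Child().data)  # ['alpha', 'beta'] -- the child's dict is gone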

So why do any of that?

Norine changed the code to this simpler, more reliable version, which doesn’t need any metaprogramming or dynamically defined classes:

def choose_model(data, env):
  Model = getattr(import_module(env.modelpath), env.modelname)
  
  if env.data_save is not None:
    with open(env.data_save, "r") as df:
      data = json.load(df)
  elif env.counter != 0:
    raise ValueError('if env.counter > 0 then must use data_save parameter')

  return Model(**data)

Norine adds:

I’m thinking of holding on to the original, and showing it to interviewees like a Rorschach test. What do you see in this? The fragility of a plugin system? The perils of metaprogramming? The hollowness of an overwritten argument? Do you see someone with more cleverness than sense? Or someone intelligent but bored? Or perhaps you see, in the way the superclass init is called, TRWTF: a Python 2 library written within the last 3 years.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram 2017 Tesla Hack

Interesting story of a class break against the entire Tesla fleet.

Krebs on SecurityThe Joys of Owning an ‘OG’ Email Account

When you own a short email address at a popular email provider, you are bound to get gobs of spam, and more than a few alerts about random people trying to seize control over the account. If your account name is short and desirable enough, this kind of activity can make the account less reliable for day-to-day communications because it tends to bury emails you do want to receive. But there is also a puzzling side to all this noise: Random people tend to use your account as if it were theirs, and often for some fairly sensitive services online.

About 16 years ago — back when you actually had to be invited by an existing Google Mail user in order to open a new Gmail account — I was able to get hold of a very short email address on the service that hadn’t yet been reserved. Naming the address here would only invite more spam and account hijack attempts, but let’s just say the account name has something to do with computer hacking.

Because it’s a relatively short username, it is what’s known as an “OG” or “original gangster” account. These account names tend to be highly prized among certain communities, who busy themselves with trying to hack them for personal use or resale. Hence, the constant account takeover requests.

What is endlessly fascinating is how many people think it’s a good idea to sign up for important accounts online using my email address. Naturally, my account has been signed up involuntarily for nearly every dating and porn website there is. That is to be expected, I suppose.

But what still blows me away is the number of financial and other sensitive accounts I could access if I were of a devious mind. This particular email address has accounts that I never asked for at H&R Block, Turbotax, TaxAct, iTunes, LastPass, Dashlane, MyPCBackup, and Credit Karma, to name just a few. I’ve lost count of the number of active bank, ISP and web hosting accounts I can tap into.

I’m perpetually amazed by how many other Gmail users and people on similarly-sized webmail providers have opted to pick my account as a backup address if they should ever lose access to their inbox. Almost certainly, these users just lazily picked my account name at random when asked for a backup email — apparently without fully realizing the potential ramifications of doing so. At last check, my account is listed as the backup for more than three dozen Yahoo, Microsoft and other Gmail accounts and their associated file-sharing services.

If for some reason I ever needed to order pet food or medications online, my phantom accounts at Chewy, Coupaw and Petco have me covered. If any of my Weber grill parts ever fail, I’m set for life on that front. The Weber emails I periodically receive remind me of a piece I wrote many years ago for The Washington Post, about companies sending email from [companynamehere]@donotreply.com, without considering that someone might own that domain. Someone did, and the results were often hilarious.

It’s probably a good thing I’m not massively into computer games, because the online gaming (and gambling) profiles tied to my old Gmail account are innumerable.

For several years until recently, I was receiving the monthly statements intended for an older gentleman in India who had the bright idea of using my Gmail account to manage his substantial retirement holdings. Thankfully, after reaching out to him he finally removed my address from his profile, although he never responded to questions about how this might have happened.

On balance, I’ve learned it’s better just not to ask. On multiple occasions, I’d spend a few minutes trying to figure out if the email addresses using my Gmail as a backup were created by real people or just spam bots of some sort. And then I’d send a polite note to those that fell into the former camp, explaining why this was a bad idea and ask what motivated them to do so.

Perhaps because my Gmail account name includes a hacking term, the few responses I’ve received have been less than cheerful. Despite my including detailed instructions on how to undo what she’d done, one woman in Florida screamed in an ALL CAPS reply that I was trying to phish her and that her husband was a police officer who would soon hunt me down. Alas, I still get notifications anytime she logs into her Yahoo account.

Probably for the same reason the Florida lady assumed I was a malicious hacker, my account constantly gets requests from random people who wish to hire me to hack into someone else’s account. I never respond to those either, although I’ll admit that sometimes when I’m procrastinating over something the temptation arises.

Losing access to your inbox can open you up to a cascading nightmare of other problems. Having a backup email address tied to your inbox is a good idea, but obviously only if you also control that backup address.

More importantly, make sure you’re availing yourself of the most secure form of multi-factor authentication offered by the provider. These may range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

Maybe you’ve put off enabling multi-factor authentication for your important accounts, and if that describes you, please take a moment to visit twofactorauth.org and see whether you can harden your various accounts.

As I noted in June’s story, Turn on MFA Before Crooks Do It For You, people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Are you in possession of an OG email account? Feel free to sound off in the comments below about some of the more gonzo stuff that winds up in your inbox.

,

Cryptogram Insider Attack on the Carnegie Library

Greg Priore, the person in charge of the rare book room at the Carnegie Library, stole from it for almost two decades before getting caught.

It’s a perennial problem: trusted insiders have to be trusted.