Planet Russell


Planet Debian: Holger Levsen: 20210928-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, klein aber fein ("small but fine")

So the Debian Reunion Hamburg 2021 has now been going on for just under 48 hours, and it appears people are having fun, enjoying discussions between fellow Debian people and getting some stuff done as well. I guess I'll write some more about it once the event is over...

Sharing android screens...

For now I just want to share one little gem I learned about yesterday on the hallway track:

$ sudo apt install scrcpy
$ scrcpy

And voila, once again I can type on my phone with a proper keyboard and copy and paste URLs between the two devices. One can even watch videos on the big screen with it :)
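Beyond plain mirroring, scrcpy takes a number of flags. A few I find handy are sketched below; exact option names can vary slightly between scrcpy releases, and all of them assume a device is connected over USB or Wi-Fi with adb debugging enabled:

```shell
# Mirror at a reduced resolution, which is lighter on the link
scrcpy --max-size 1024

# Keep the device awake but blank its physical screen while mirroring
scrcpy --stay-awake --turn-screen-off

# Record the session to a file while mirroring
scrcpy --record session.mp4
```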

Kevin Rudd: Nikkei Asia: China should now outline how it will reduce domestic carbon emissions

Article by Kevin Rudd and Thom Woodroofe.

Kevin Rudd is a former prime minister of Australia and is the global president of the Asia Society. Thom Woodroofe is a former climate diplomat and a fellow at the Asia Society Policy Institute.

Xi Jinping’s pledge to the U.N. General Assembly last week to halt China’s construction of coal-fired power plants abroad through the Belt and Road Initiative has drawn a big line in the sand.

It is a welcome development signaling that China knows the future is paved by renewables. The key question now is when China will draw a similar line in the sand at home.

China represents around 27% of global emissions, more than the developed world combined. On current trajectories, China will also be the world’s largest historical emitter of greenhouse gases by 2050, making its actions central to whether the world can keep temperatures from rising above the Paris Agreement’s 1.5 degrees Celsius limit.

The largest infrastructure initiative in history and the jewel in the crown of Xi’s foreign policy, the BRI has funneled billions of dollars toward the construction of coal-fired power plants as far away as Eastern Europe and across Africa since its launch in 2013.

In a single sentence, Xi has wiped out $50 billion of planned investment that would have resulted in more than 40 new coal plants — more than the current operating fleet in Germany — in countries including Bangladesh, Indonesia, Vietnam and South Africa, and helped avoid at least 250 million tons of carbon emissions a year.

Over their operating life span, this would have been as much as a year of China's own emissions. In other words, this is a very big deal that will have a major impact on the global demand for coal.

Whether Xi’s pledge extends to the similar number of Chinese coal-fired plants around the world that are already under construction or in the final stages of planning will be an important signal to the international community that Beijing is serious. So too will be whether Chinese labor in these projects is restricted, and whether Beijing’s support for coal is replaced by genuinely green alternatives rather than by high-emitting options like natural gas.

Moves to restrict foreign direct investment, as well as commercial and state-owned enterprise finance in these BRI projects, would be another such signal. That is why the Bank of China’s announcement on Friday that it will largely halt investment in coal later this year is a welcome sign. China’s other three state-owned banks should now follow suit.

Beijing’s latest move is not entirely unexpected, confirming what China had already begun to operationalize over the last year after similar moratoriums by Japan and South Korea. Added to this was pressure from BRI recipient countries, many of which had in recent years begun to eschew, and in some cases reject, Beijing’s preference for adding coal-fired power capacity over renewables.

In China’s eyes, the time was right for a major policy reset on its own terms, one not done at the behest of the Americans. Adding urgency was the fact that massive new clean energy investments around the world driven by American finance risked unseating the political and strategic footholds Beijing had secured in many of these countries.

China also had to bring more to the table ahead of next month’s 26th U.N. Climate Change Conference of the Parties, or COP26, in Glasgow in order to avoid being painted as a villain, especially now that the easy international ride it had enjoyed under Donald Trump’s reckless climate approach was over.

Still, China has much more to do. Unlike other major emitters such as the U.S., China is yet to formally update its domestic climate targets first enshrined under the 2015 Paris Agreement.

And given that Xi’s latest announcement on BRI projects does not speak at all to China’s own efforts to reduce emissions at home, the international community will be keenly awaiting the release of China’s revised nationally determined contribution required under the Paris Agreement.

Currently only pledging to peak carbon emissions before 2030, Beijing must bring forward its plan to peak domestic emissions if China is to reach carbon neutrality by 2060. According to modeling by the Asia Society and Climate Analytics, this will need to be much closer to 2025.

Given the magnitude of Chinese emissions on a global scale, bringing forward that date by a year or two will simply not be enough and would undermine the credibility of Xi’s carbon neutrality pledge. Nor will committing to any such peak without a cap on emissions in the meantime, thus ensuring that emissions do not skyrocket between now and then.

For example, an annual Chinese cap of 10 billion tons of CO2 emissions would put China on track to soon cross the symbolically significant threshold of reducing coal for the first time ever to less than half of its domestic energy mix.

With close to half of China’s emissions — and 20% of all the world’s emissions — coming from coal, this would really change the game globally. A trajectory toward carbon neutrality by 2060 will also require China to completely remove coal from its domestic energy mix by 2040.

Until China is prepared to draw a similar line in the sand on the construction of new coal-fired power plants at home and convert the coal plants already under construction abroad to renewable alternatives, Xi’s latest announcement is unlikely to be met with the international fanfare Beijing might hope for.

Article published in Nikkei Asia on 27 September 2021, available here.



Worse Than Failure: CodeSOD: Golfing Over a Log

Indirection is an important part of programming. Wrapping even core language components in your own interfaces is sometimes justifiable, depending upon the use cases.

But like anything else, it can leave you scratching your head. Sam found this bit of indirection in a NodeJS application:

var g = {};
g.log = console.log;
g.print = g.util.print;
g.inspect = g.util.inspect;
g.l = g.log;
g.i = g.inspect;
g.ll = function(val) { g.l(g.i(val)); }

The intent, clearly, is to play a little code golf. g.ll(something) will dump a detailed report about an object to the console. I mean, that's the goal, anyway. Of course, that makes the whole thing less clear, but that's not the WTF.

The rather obvious problem is that this code just doesn't work. g.util doesn't exist, so quite a few of these lines throw errors. They clearly meant to reference the Node module util, which has inspect and print methods. They just slapped a g. on the front because they weren't thinking, or meant to capture it first, as in g.util = require('util') or similar.

This module is meant to provide a bunch of logging functionality, and it has many, many more lines. The only method ever used, from this snippet, is g.l, so if not for the fact that this errors out on the third line, most of the rest of the module would probably work.

Fortunately, despite being in the code base, and despite once having been referenced by other modules in the project, this module isn't actually used anywhere. Of course, it was still sitting there, still announcing itself as a logging module, and lying in wait for some poor programmer to think they were supposed to use it.

Sam has cleaned up the code and removed this module entirely. Who knows what else lurks in there, broken and seemingly unused?



Planet Debian: Wouter Verhelst: SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another.

This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy.

As I think SReview::Video is actually an easy to use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself.

The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all).

Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually.

I'll upload it to Debian soon, too.

Worse Than Failure: CodeSOD: Terned Around About Nullables

John H works with some industrial devices. After a recent upgrade at the facility, the new control software just felt like it was packed with WTFs. Fortunately, John was able to get at the C# source code for these devices, which lets us see some of the logic used…

public bool SetCrossConveyorDoor(CrossConveyorDoorInfo ccdi, bool setOpen)
{
    if (!ccdi.PowerBoxId.HasValue)
        return false;
    ulong? powerBoxId = ccdi.PowerBoxId;
    ulong pbid;
    ulong ccId;
    ulong rowId;
    ulong targetIdx;
    PBCrossConveyorConfiguration.ExtractIdsFromPowerboxId(powerBoxId.Value, out pbid, out ccId, out rowId, out targetIdx);
    TextWriter textWriter = Console.Out;
    object[] objArray1 = new object[8];
    objArray1[0] = (object) pbid;
    objArray1[1] = (object) ccId;
    objArray1[2] = (object) setOpen;
    object[] objArray2 = objArray1;
    powerBoxId = ccdi.PowerBoxId;
    ulong local = powerBoxId.Value;
    objArray2[3] = (object) local;
    objArray1[4] = (object) pbid;
    objArray1[5] = (object) ccId;
    objArray1[6] = (object) rowId;
    objArray1[7] = (object) targetIdx;
    object[] objArray3 = objArray1;
    textWriter.WriteLine(
        "Sending CCD command to pbid = {0}, ccdId = {1}, Open={2}, orig PowerBoxId: {3} - divided:{4}/{5}/{6}/{7}",
        objArray3);
    bool? nullable1 = this.CopyDeviceToRegisters((int) (ushort) ccId);
    if ((!nullable1.GetValueOrDefault() ? 1 : (!nullable1.HasValue ? 1 : 0)) != 0)
        return false;
    byte? nullable2 = this.ReadDeviceRegister(19, "CrossConvDoor");
    byte num = nullable2.HasValue ? nullable2.GetValueOrDefault() : (byte) 0;
    byte registerValue = setOpen
        ? (byte) ((int) num & -225 | 1 << (int) targetIdx)
        : (byte) ((int) num & -225 | 16);
    Console.Out.WriteLine("ccdid = {0} targetIdx = {1}, b={2:X2}", (object) ccId, (object) targetIdx, (object) registerValue);
    this.WriteDeviceRegister(19, registerValue, "CrossConvDoor");
    nullable1 = this.CopyRegistersToDevice();
    return nullable1.GetValueOrDefault() && nullable1.HasValue;
}

There's a bunch in here, but I'm going to start at the very bottom:

return nullable1.GetValueOrDefault() && nullable1.HasValue

GetValueOrDefault, as the name implies, returns the value of the object, or if that object is null, it returns a suitable default value. Now, for any reference type, that can still be null. But nullable1 is a boolean (defaults to false), and nullable2 is a byte (defaults to zero).

This line alone makes one suspect that the developer doesn't really understand how nullables work. And, as we read up the code, we see more evidence of this:

byte num = nullable2.HasValue ? nullable2.GetValueOrDefault() : (byte) 0;

Again, if nullable2 has a value, GetValueOrDefault will return that value, if it doesn't, it returns zero. So we've just taken a simple thing and made it less readable by surrounding it with a bunch of noise which doesn't change its behavior.

But, continuing reading backwards:

if ((!nullable1.GetValueOrDefault() ? 1 : (!nullable1.HasValue ? 1 : 0)) != 0) return false;

We've moved into nested ternaries inside an if. Which, if we try and parse through this one: if the nullable's value is false, 1 != 0, so we return false. If, on the other hand, the nullable's value is true, we check to see if it doesn't have a value, in which case we compare 1 != 0 and return false. Except the only way nullable1 could ever be true is if it has a value, so that means if nullable1 is true, we don't return false.

In other words, this is a really complicated way of saying:

if (!nullable1.GetValueOrDefault()) return false;

With all that out of the way, it brings us to the block of objArrays. The core purpose of this block is to populate what appears to be logging output. Now, the WriteLine method does take an object[] parameter to drive that formatting… but it's a param-array, which means you could invoke it as: Console.Out.WriteLine("…", pbid, ccId, setOpen…). I'm not 100% certain when params appeared in C#, and a cursory searching implies that it's always been a language feature. Still, I'll give the developer responsible the benefit of the doubt on just using the object[], because of how they used it.

They start with objArray1, and populate three fields. Then they create objArray2 which is just a reference to objArray1. They populate the fourth field through objArray2, then go back to using objArray1. Then they create objArray3 which is also just referencing objArray1, and send that to WriteLine.

Maybe the goal was some form of intentional obfuscation? Were they just… confused? It's impossible to guess.

So instead of guessing, I'll just share another snippet of code from the same program, which I think sums up my feelings:

private static void GenPwd(string[] args)
{
    if (args[1].Contains("!"))
        Console.Out.WriteLine("Use password without tilde (~) please.");
    …
}

Planet Debian: Russ Allbery: Review: The Problem with Work

Review: The Problem with Work, by Kathi Weeks

Publisher: Duke University Press
Copyright: 2011
ISBN: 0-8223-5112-9
Format: Kindle
Pages: 304

One of the assumptions baked deeply into US society (and many others) is that people are largely defined by the work they do, and that work is the primary focus of life. Even in Marxist analysis, which is otherwise critical of how work is economically organized, work itself reigns supreme. This has been part of the feminist critique of both capitalism and Marxism, namely that both devalue domestic labor that has traditionally been unpaid, but even that criticism is normally framed as expanding the definition of work to include more of human activity. A few exceptions aside, we shy away from fundamentally rethinking the centrality of work to human experience.

The Problem with Work begins as a critical analysis of that centrality of work and a history of some less-well-known movements against it. But, more valuably for me, it becomes a discussion of the types and merits of utopian thinking, including why convincing other people is not the only purpose for making a political demand.

The largest problem with this book will be obvious early on: the writing style ranges from unnecessarily complex to nearly unreadable. Here's an excerpt from the first chapter:

The lack of interest in representing the daily grind of work routines in various forms of popular culture is perhaps understandable, as is the tendency among cultural critics to focus on the animation and meaningfulness of commodities rather than the eclipse of laboring activity that Marx identifies as the source of their fetishization (Marx 1976, 164-65). The preference for a level of abstraction that tends not to register either the qualitative dimensions or the hierarchical relations of work can also account for its relative neglect in the field of mainstream economics. But the lack of attention to the lived experiences and political textures of work within political theory would seem to be another matter. Indeed, political theorists tend to be more interested in our lives as citizens and noncitizens, legal subjects and bearers of rights, consumers and spectators, religious devotees and family members, than in our daily lives as workers.

This is only a quarter of a paragraph, and the entire book is written like this.

I don't mind the occasional use of longer words for their precise meanings ("qualitative," "hierarchical") and can tolerate the academic habit of inserting mostly unnecessary citations. I have less patience with the meandering and complex sentences, excessive hedge words ("perhaps," "seem to be," "tend to be"), unnecessarily indirect phrasing ("can also account for" instead of "explains"), or obscure terms that are unnecessary to the sentence (what is "animation of commodities"?). And please have mercy and throw a reader some paragraph breaks.

The writing style means substantial unnecessary effort for the reader, which is why it took me six months to read this book. It stalled all of my non-work non-fiction reading and I'm not sure it was worth the effort. That's unfortunate, because there were several important ideas in here that were new to me.

The first was the overview of the "wages for housework" movement, which I had not previously heard of. It started from the common feminist position that traditional "women's work" is undervalued and advocated taking the next logical step of giving it equality with paid work by making it paid work. This was not successful, obviously, although the increasing prevalence of day care and cleaning services has made it partly true within certain economic classes in an odd and more capitalist way. While I, like Weeks, am dubious this was the right remedy, the observation that household work is essential to support capitalist activity but is unmeasured by GDP and often uncompensated both economically and socially has only become more accurate since the 1970s.

Weeks argues that the usefulness of this movement should not be judged by its lack of success in achieving its demands, which leads to the second interesting point: the role of utopian demands in reframing and expanding a discussion. I normally judge a political demand on its effectiveness at convincing others to grant that demand, by which standard many activist campaigns (such as wages for housework) are unsuccessful. Weeks points out that making a utopian demand changes the way the person making the demand perceives the world, and this can have value even if the demand will never be granted. For example, to demand wages for housework requires rethinking how work is defined, what activities are compensated by the economic system, how such wages would be paid, and the implications for domestic social structures, among other things. That, in turn, helps in questioning assumptions and understanding more about how existing society sustains itself.

Similarly, even if a utopian demand is never granted by society at large, forcing it to be rebutted can produce the same movement in thinking in others. In order to rebut a demand, one has to take it seriously and mount a defense of the premises that would allow one to rebut it. That can open a path to discussing and questioning those premises, which can have long-term persuasive power apart from the specific utopian demand. It's a similar concept to the Overton Window, but with more nuance: the idea isn't solely to move the perceived range of accepted discussion, but to force society to examine its assumptions and premises well enough to defend them, or possibly discover they're harder to defend than one might have thought.

Weeks applies this principle to universal basic income, as a utopian demand that questions the premise that work should be central to personal identity. I kept thinking of the Black Lives Matter movement and the demand to abolish the police, which (at least in popular discussion) is a more recent example than this book but follows many of the same principles. The demand itself is unlikely to be met, but to rebut it requires defending the existence and nature of the police. That in turn leads to questions about the effectiveness of policing, such as clearance rates (which are far lower than one might have assumed). Many more examples came to mind. I've had that experience of discovering problems with my assumptions I'd never considered when debating others, but had not previously linked it with the merits of making demands that may be politically infeasible.

The book closes with an interesting discussion of the types of utopias, starting from the closed utopia in the style of Thomas More in which the author sets up an ideal society. Weeks points out that this sort of utopia tends to collapse with the first impossibility or inconsistency the reader notices. The next step is utopias that acknowledge their own limitations and problems, which are more engaging (she cites Le Guin's The Dispossessed). More conditional than that is the utopian manifesto, which only addresses part of society. The least comprehensive and the most open is the utopian demand, such as wages for housework or universal basic income, which asks for a specific piece of utopia while intentionally leaving unspecified the rest of the society that could achieve it. The demand leaves room to maneuver; one can discuss possible improvements to society that would approach that utopian goal without committing to a single approach.

I wish this book were better-written and easier to read, since as it stands I can't recommend it. There were large sections that I read but didn't have the mental energy to fully decipher or retain, such as the extended discussion of Ernst Bloch and Friedrich Nietzsche in the context of utopias. But that way of thinking about utopian demands and their merits for both the people making them and for those rebutting them, even if they're not politically feasible, will stick with me.

Rating: 5 out of 10

Cory Doctorow: Breaking In (fixed)

Judith Merril introducing Doctor Who on TVOntario, some time in the 1970s.

This week on my podcast, I read my latest Locus column, Breaking In, on the futility of seeking career advice from established pros who haven’t had to submit over the transom in 20 years, where you should get career advice, and what more established writers can do for writers who are just starting out.



Kevin Rudd: Der Spiegel: A Cold War with China Is Probable and Not Just Possible

Interview Conducted by Bernhard Zand

The sparsely populated, prosperous and peaceful country of Australia doesn’t often find itself dominating the news cycle, but for the last several days, it has been the focus of governments in the United States, China and the European Union, the great powers in a tri-polar world order.

Last week, Canberra, Washington and London reached agreement on a military pact reminiscent of the era of nuclear standoffs. The alliance, known as AUKUS, foresees Australia being outfitted with nuclear-powered submarines from the U.S. and Britain. It is a reaction to China’s rise to becoming the dominant economic and military power in the Indo-Pacific region.

Australia, located in the Far East but politically part of the West, lies on the fault line of the largest conflict of our times, the growing rivalry between China and the U.S.

With its close economic ties to China as a supplier of raw materials and foodstuffs, Australia recognized earlier than other countries the opportunities presented by Beijing’s rise – and the risks. As early as the beginning of the last decade, the Australian government concluded that it needed to bolster its maritime power. The country tendered a multibillion-dollar contract for the construction of 12 conventionally powered submarines.

The deal, for which the German arms manufacturer ThyssenKrupp also submitted a bid, ultimately went to the Naval Group in France, with the first submarines scheduled for delivery in 2027. Officially, Canberra remained committed to the deal until just a few weeks ago, even as technical delays and spiraling costs threatened it with collapse. Then, last Thursday, Australia pulled the plug, announcing its alliance with Washington and London and backing out of the contract with the French.

The political consequences have been significant. Paris feels as though it has been hoodwinked by Australia and its NATO allies, the U.S. and Britain. France temporarily recalled its ambassadors from Washington and Canberra. In Brussels, meanwhile, the debate over Europe’s “strategic autonomy” has been reopened and new questions have arisen regarding the efficacy of NATO, which French President Emmanuel Macron already referred to back in 2019 as “brain dead.”

There is hardly a politician in existence who has a better handle on the background and the strategic consequences of this explosive arms deal than Kevin Rudd.

The 64-year-old served as prime minister of Australia from 2007 to 2010 before becoming foreign minister and then, in 2013, prime minister again for a brief stint. During his first and second tenures at the top, he was also the leader of the Australian Labor Party.

A sinologist by training, Rudd pursued a diplomatic career before entering politics, first in Stockholm and then in Beijing, where he closely followed the actions of the Politburo of the Chinese Communist Party.

Today, Rudd is president of the Asia Society, a non-governmental organization based in New York, which is focused on deepening ties between Asia and the West.

DER SPIEGEL: Mr. Rudd, the 20th century was ravaged by two world wars, both of which began in Europe. Might we be facing a massive confrontation in the Pacific in the 21st century?

Kevin Rudd: It is quite possible. It is not probable, but it is sufficiently possible to be dangerous. And that is why intelligent statesmen and women have to do two things. First, identify the most effective guardrails to maintain the course of U.S.-China relations, to prevent things from spinning out of control altogether. And second, find a joint strategic framework, which is mutually acceptable in Beijing and Washington, to prevent crisis, conflict and war.

DER SPIEGEL: Germany was on the front lines of the Cold War. Now, in the current confrontation between the U.S. and China, Australia is exposed. Is today’s China as formidable and serious an adversary as the Soviet Union was 60 years ago?

Rudd: If we degenerate into a Cold War – which at this stage is probable and not just possible – then China looms as a much more formidable strategic adversary for the United States than the Soviet Union ever was. At the level of strategic nuclear weapons, China has sufficient capability for a second strike. In the absence of nuclear confrontation, the balance of power militarily, but also economically and technologically, is much more of a problem for the United States in the pan-Asian theater than was the case in Europe.

DER SPIEGEL: Your country, the U.S. and Britain have now entered into a new military alliance, which will provide Australia with a fleet of nuclear-powered submarines. What are the strategic considerations behind this decision?

Rudd: On the question of moving from conventional to nuclear-powered submarines, I have yet to be persuaded by the strategic logic. First, there is a technical argument that has been advanced about the range, detectability and noise levels of conventional submarines versus nuclear powered submarines. This is a technical debate which has not been fully resolved. If it is resolved in favor of nuclear-powered submarines, however, then another question arises.


Rudd: We do not have a domestic civil nuclear industry, so how do we service these submarines? Which then leads to a third problem: If they have to be serviced in the United States and by the United States, does this lead us to a point where such a nuclear-powered submarine fleet becomes an operational unit of the U.S. Navy as opposed to belonging to a strategically sovereign and autonomous Royal Australian Navy? These questions haven’t been resolved yet in the Australian mind, which is why the alternative government from the Australian Labor Party, while providing in principle support for the decision, insists that these questions have to be resolved.

“The French have every right to believe that they have been misled.”

DER SPIEGEL: What are the risks?

Rudd: We already knew in 2009 that it was important from an Australian national security perspective to have a greater capability of securing the air and maritime approaches to the Australian continent. So I launched a new defense white paper as prime minister, which recommended the construction of a new fleet of 12 conventionally powered submarines, which would make the Australian conventional submarine fleet the second largest in East Asia. The sudden change to a nuclear-powered option comes fully eight years after the conservative government of Australia inherited that defense white paper, commissioned tenders for it to be filled – which were won by the French contractor Naval Group in 2016 – and then proceeded to cancel the contract in the middle of the night in 2021. The Australian government has yet to provide a convincing strategic rationale for that decision. Nor has it been frank about the unspecified cost of building nuclear-powered boats through some sort of Anglo-American duopoly.

DER SPIEGEL: Either way, France has lost the contract. Do you understand their indignation?

Rudd: Absolutely. Australians take pride in the fact that we are people of our word. Such a U-turn is alien to our character. We don’t do these things. Secondly, if you reach a technical decision to commission nuclear-powered boats as opposed to conventional boats, then you have a duty to tell the French that the project specifications have changed and to invite them to retender for the new project. The French are perfectly capable of building and servicing nuclear-powered submarines. That is why the French, in my judgment, have every right to believe that they have been misled.

DER SPIEGEL: The German company ThyssenKrupp also submitted an offer to build the conventional submarines. In retrospect, was it a blessing for the Germans that they didn’t win it?

Rudd: I regret to say that the current Australian government seems to exhibit what I would describe as a level of Anglophone romance which puzzles the rest of us in this country who are more internationalist in our world view.

DER SPIEGEL: Are you fundamentally in favor of Europe becoming involved militarily in the Indo-Pacific? Britain and France have warships in the region, and Germany has now joined them, with the frigate Bayern.

Rudd: These are obviously sovereign decisions in Berlin and Paris and London, and it depends on the aggregate naval capabilities of our European friends and partners. The more important question is that of developing a common strategy across the board – military, diplomatic, economic – to deal with the problematic aspects of China’s rise. Not all the aspects of China’s rise are problematic, but in a number of them, China is seeking to change the international status quo. The current Australian government’s torpedoing of the submarine contract with France actually renders the possibility of a common, global allied strategy for dealing with China’s rise more problematic and more difficult rather than less.

DER SPIEGEL: Australia, the United States, Japan, and India are members of a loose group of four nations concerned about China’s rise. Is this “Quad” the nucleus of an Indo-Pacific NATO?

Rudd: I think this is a false analogy. NATO has mutual defense obligations. That is not the case with Japan and Australia because we are part of separate bilateral security arrangements with Washington, not a multilateral arrangement. And India is not an ally because it has no formal alliance structure. I think it is unlikely for the foreseeable future that the Quad would evolve into a NATO-type arrangement. However, the Chinese take the Quad seriously because it is becoming a potent vehicle for coordinating a pan-regional strategy for dealing with China’s rise.

DER SPIEGEL: Australia and Germany have extremely close economic ties with China. Have our countries become too dependent on Beijing?

Rudd: Any modern economy does well to diversify. Under Xi Jinping, China’s economic strategy has become increasingly mercantilist. If you are the weaker party in dealing with a mercantilist power, then you will increasingly have terms dictated to you. Another point is this: China’s domestic economic policy is moving in a more statist and less market-oriented direction. We have to ask ourselves whether this will begin to impede China’s economic growth over time and whether China will be as robust in the future. All these are reasons for not pinning all global growth, all European and German export growth, on the future robustness of this one market.

DER SPIEGEL: Australia has been economically punished by China, in part because your government has called for an independent investigation into the origin of the coronavirus pandemic. What can other countries learn from Australia’s experience?

Rudd: The critical lesson in terms of China’s coercive international diplomacy is that it’s far better for countries to act together rather than to act independently and individually. If you look at Beijing’s punitive sanctions against South Korea, against Norway and now against Australia, the Chinese aphorism can be applied everywhere: “sha yi jing bai,” kill one to warn 100. Therefore, the principle for all of us who are open societies and open economies is that if one of us comes under coercive pressure, then it makes sense for us all to act together. And if you want a case study to see how that could be effective, look at the United States. When was the last time you saw the Chinese adopt major coercive action against the U.S.? They haven’t because the U.S. is too big.

“China cannot simply be put to one side and regarded as someone else’s problem.”

DER SPIEGEL: A few days ago, the European Union announced its strategy for the Indo-Pacific. Brussels plans to rely less on military means against China and more on closer cooperation with China’s neighbors – on secure and fair supply chains, and on economic and digital partnerships. What do you think of this approach?

Rudd: In the recent past, the logic in Brussels and many European capitals was pretty simple and went like this: First, China is a security problem for the United States and its Asian allies, but not us in Europe. Second, China presents an economic opportunity for us in Europe, which should be maximized. And third, China represents a human rights problem, which occasionally we’ll engage in with some appropriate forms of political theater. That was the logic, if I may summarize recent history in such a crude Australian haiku.


Rudd: But now, this has evolved. Europeans have experienced cyberattacks of their own. Germany in particular has experienced the consequences of Chinese industrial policy and the aggressive acquisition of German technology, as well as the strategic collaboration between China and Russia, which is now almost a de facto alliance. When I see this evolution reflected in the posture of the G-7, of NATO and of the European Union, it’s pointing in a certain direction. The Europeans have finally concluded that China represents a global challenge. The Asia-Pacific region has now evolved westwards, to the Indo-Pacific, through the Suez Canal and into the Mediterranean and Europe itself. China is a global phenomenon, both in terms of opportunities and challenges. There’s not a single country from Lithuania to New Zealand which is not being confronted with the reality of China. China cannot simply be put to one side and regarded as someone else’s problem.

“When it comes to China, Germany is not just another country.”

DER SPIEGEL: German Chancellor Angela Merkel has geared her China policy to Germany’s economic interests and has often been criticized for doing so. Do you agree with this criticism? And what advice would you give Merkel’s successor?

Rudd: I know Angela Merkel reasonably well; she was chancellor when I was prime minister. She is a deeply experienced political leader, respected around the world. And to be fair, the China that she encountered when she first became chancellor under Hu Jintao was quite a different China to the one which has evolved since the rise of Xi Jinping. In fact, the China of Xi’s first term was different to the China after the 19th Party Congress …

DER SPIEGEL: … when term limitations for his presidency were eliminated.

Rudd: Since then, I have detected some change in the German position. Germany could have vetoed the approaches adopted by the G-7, NATO and the EU. But it chose not to. So if there is some skepticism in the world about German foreign policy under Merkel, it is because Germany has been robust multilaterally in its response to China and much more accommodating bilaterally.

DER SPIEGEL: What does this mean for the next government?

Rudd: Our German friends need to know that the rest of the world observes German politics very closely. And there’s a reason why we do that: Of all Western countries outside the United States, China has the deepest respect for Germany. This has to do with the economic miracle after World War II, the depth of German manufacturing, and the remarkable living standards Germany has been able to generate while still maintaining a posture of environmental sustainability. So when it comes to China, Germany is not just another country. It is the one Western country, outside the United States, which the Chinese predominantly respect.

“Crisis management in 2021 may not be that much better than in July of 1914.”

DER SPIEGEL: After the recent announcement of AUKUS, the security pact between Australia, the UK and the U.S., former British Prime Minister Theresa May warned of the consequences of a military escalation, specifically in the Taiwan Strait. How do you rate this risk?

Rudd: I do not think either Beijing or Washington want a war over the Taiwan Strait as a matter of deliberate policy. Certainly not Beijing in this decade, since it is not yet ready to fight and is still in the middle of a reorganization of its military regions and its joint command structures. Another question is whether an accident could happen, similar to what happened in 1914 after the assassination of the Austrian archduke, which led to the outbreak of World War I.

DER SPIEGEL: What exactly do you have in mind?

Rudd: There are multiple possibilities. A collision of military aircraft or naval vessels, for example. Or some unilateral act by an incoming Taiwanese government – not the current one – taking a much more decisively independent view, could trigger a crisis.

DER SPIEGEL: How could such a crisis be prevented?

Rudd: Crisis management in 2021 may not be that much better than in July of 1914. Therefore, the danger is not war as a consequence of intentional policy action. It’s war as a consequence of miscalculation.

Originally published in Der Spiegel.

Photo: AP Andy Wong

The post Der Spiegel: A Cold War with China Is Probable and Not Just Possible appeared first on Kevin Rudd.


David BrinTransparency, talk of tradeoffs - and pseudonyms

Returning to the topic of transparency...

An article about “Our Transparent Future: No secret is safe in the digital era” - by Daniel C. Dennett and Deb Roy - suggests that transparency will throw us into a bitterly Darwinian era of “all against all.”  What a dismally simplistic, contemptuous and zero-sum view of humanity! That we cannot innovate ways to get positive sum outcomes.   

Oh, I confess things look dark, with some nations, such as China, using ‘social credit’ to sic citizens against each other, tattling and informing and doing Big Brother’s work for him. That ancient, zero sum pattern was more crudely followed in almost every past oligarchy, theocracy or kingdom or Sovietsky, where local gossips and bullies were employed by the inheritance brats up-top, to catch neighbors who offended obedient conformity. 

Indeed, a return to that sort of pyramid of power, with non-reciprocal transparency that never shines up at elites – is what humans could very well implement, because our ancestors did that sort of oppression very well. In fact, we are all descended from the harems of those SOBs.

In contrast, this notion of transparency-driven chaos and feral reciprocal predation is just nonsense.  In a full oligarchy, people would thereupon flee to shelter under the New Lords… or else…


…or else, in a democracy we might actually innovate ways to achieve outcomes that are positive sum, based on the enlightenment notion of accountability for all. Not just average folk or even elites, but for  those who would abuse transparency to bully or predate.  If we catch the gossips and voyeurs in the act and that kind of behavior is deemed to be major badness, then the way out is encapsulated in the old SF expression "MYOB!" or "Mind Your Own Business!"

Yeah, yeah, Bill Maher, sure we have wandered away from that ideal at both ends of the political spectrum, amid a tsunami of sanctimony addiction. But the escape path is still there, waiting and ready for us.

It’s what I talked about in The Transparent Society… and a positive possibility that seems to occur to no one, especially not the well-meaning paladins of freedom who wring their hands and offer us articles like this. 

== Talk of Tradeoffs ==

Ever since I wrote The Transparent Society (1997) and even my novel, Earth (1990) I’ve found it frustrating how few of today’s paladins of freedom/privacy and accountability – like good folks at the ACLU and Electronic Frontier Foundation (EFF) – (and I urge you all to join!) – truly get the essence of the vital fight they are in. Yes, it will be a desperate struggle to prevent tyrannies from taking over across the globe and using powers of pervasive surveillance against us, to re-impose 6000 years of dullard/stupid/suicidal rule-by-oligarchy.

I share that worry!  But in their myopic talk of “tradeoffs,” these allies in the struggle to save the Enlightenment Experiment (and thus our planet and species) neglect all too often to ponder the possibility of win-wins… or positive sum outcomes.

There are so many examples of that failure, like short-sightedly trying to “ban” facial recognition systems, an utterly futile and almost-blind pursuit that will only be counter-productive. 

But I want to dial in on one myopia, in particular. I cannot name more than four of these activists who have grasped a key element in the argument over anonymity - today's Internet curse which destroys accountability, letting the worst trolls and despotic provocateurs run wild. 

Nearly all of the privacy paladins dismiss pseudonymity as just another term for the same thing. In fact, it is not; pseudonymity has some rather powerful win-win, positive sum possibilities. 

Picture this. Websites that are sick of un-accountable behavior might ban anonymity! Ban it... but allow entry to vetted pseudonyms. 

You get one by renting it from a trusted fiduciary that is already in the business of vouching for credentials... e.g. your bank or credit union, or else services set up just for this purpose (let competition commence!)

The pseudonym you rent carries forward with it your credibility ratings in any number of varied categories, including those scored by the site you intend to enter. If you misbehave, the site and/or its members can ding you, holding you accountable, and those dings travel back to the fiduciary you rented the pseudonym from, who will lower your credibility scores accordingly. ...

... with no one actually knowing your true name!  Nevertheless, there is accountability.  If you are a persistent troll, good luck finding a fiduciary who will rent you a pseudonym that will gain you entry anywhere but places where trolls hang out. Yet, still, no one on the internet has to know you are a dog.
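The reputation flow described above can be captured in a toy model. Everything here (the class names, the scoring rules, the threshold) is a hypothetical illustration of the idea, not a real protocol:

```python
# Toy model of the pseudonym-rental scheme: a fiduciary issues pseudonyms
# whose credibility scores travel back to the issuer, while sites never
# learn the real identity behind a handle. All names are hypothetical.
import secrets


class Fiduciary:
    """Issues pseudonyms and tracks the credibility attached to each."""

    def __init__(self):
        self.owners = {}   # pseudonym -> real identity (never shown to sites)
        self.scores = {}   # pseudonym -> credibility score

    def rent_pseudonym(self, real_identity):
        handle = "anon-" + secrets.token_hex(4)
        self.owners[handle] = real_identity
        self.scores[handle] = 100
        return handle

    def ding(self, handle, penalty):
        # A site reports misbehavior; the penalty follows the pseudonym home.
        self.scores[handle] = max(0, self.scores[handle] - penalty)

    def credibility(self, handle):
        return self.scores[handle]


class Site:
    """Admits only pseudonyms whose fiduciary-held score clears a threshold."""

    def __init__(self, fiduciary, min_score=50):
        self.fiduciary = fiduciary
        self.min_score = min_score

    def admit(self, handle):
        return self.fiduciary.credibility(handle) >= self.min_score
```

A troll who accumulates enough dings drops below every reputable site's threshold, yet no site ever learns the name behind the handle; only the fiduciary holds that mapping.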

I have presented this concept to several banks and/or Credit Unions and it is percolating. A version was even in my novel Earth.

Alas, the very concept of positive sum, win-win outcomes seems foreign to the dolorous worrywarts who fret all across the idea realm of transparency/accountability/privacy discussions. 

Still, you can see the concept discussed here: The Brinternet: A Conversation with three top legal scholars

== Surveillance Networks ==

Scream the alarms! “Ring video doorbells, Amazon’s signature home security product, pose a serious threat to a free and democratic society. Not only is Ring’s surveillance network spreading rapidly, it is extending the reach of law enforcement into private property and expanding the surveillance of everyday life,” reports Lauren Bridges in this article from The Guardian.

In fact, Ring owners retain sovereign rights and cooperation with police is their own prerogative, until a search warrant (under probable cause) is served.  While the article itself is hysterical drivel, there is a good that these screams often achieve… simply making people aware. And without such awareness, no corrective precautions are possible. I just wish they provoked more actual thinking.

See this tiny camera disguised in a furniture screw! Seriously. You will not not-be-seen. Fortunately, hiding from being-seen is not the essence of either freedom or privacy. 

Again, that essence is accountability! Your ability to detect and apply it to anyone who might oppress or harm you. Including the rich and powerful. 

We will all be seen. Stop imagining that evasion is an option and turn to making it an advantage. Because if we can see potential abusers and busybodies...

...we just might be empowered to shout: ...MYOB!

Kevin RuddThe Australian: Church has a vital role, but a limited one

By Kevin Rudd.

We still hear calls to “keep religion out of politics”, echoed presently by our Prime Minister, who professes a deep Pentecostal faith and is content to be photographed in worship at election time, but refuses to discuss publicly how his concept of Christian ethics informs his politics. It needn’t be like that.

The Gospel is both a spiritual Gospel and a social Gospel, and if it is a social Gospel then it is in part a political Gospel, because politics is the means by which society chooses to express its collective power. The Gospel is in part an exhortation to social action. It doesn’t provide a mathematical formula to answer all the great questions of our age. But it does offer a starting point to debate those questions within an informed Christian ethical framework which always preferences social justice, the poor, and the powerless. And that includes protecting the creation itself.

Greg Craven rightly highlights four solid principles of Catholic social teaching: the dignity of the human, the common good, subsidiarity, and solidarity. These are proud principles. One does not have to be Catholic or committed to a distinctive Christian theology to commit to them. It is, however, too harsh to conclude, as Craven does, that “Australian politics in the last 30 years has been more likely to be informed by a kind of disconnected pragmatism than by a framework of principles”.

It’s one thing to enunciate time-honoured principles, but it is another to have them inform public policy and administration. I agree with Craven that “part of the genius of Catholic social teaching is its ability to hold its principles in creative tension” and this can be done without diminishing their potency. Whether liberal, social democrat, or conservative, “we would all like to see more justice, more equality, more liberty, more efficiency in Australian society.” But how is this to be done?

My sense is that Craven sees the teaching of Pope John Paul II in encyclicals such as Sollicitudo rei socialis and Centesimus annus, which in turn were built on the insights of Pope Leo XIII in his 1891 encyclical Rerum novarum, as being the epitome of Catholic social teaching.

The successful approach in modern politics is to commit to dialogue, taking the science seriously, acting on the evidence, and providing the opportunity for all those affected by prospective policies to have a place at the table of political deliberation. It is no longer a matter of popes or bishops from the sidelines laying down immutable principles and univocal responses as to how those principles are to be applied. It is critical the foundations of the faith and ethical imperatives to which they give rise are articulated clearly.

What then is to be done? Of course, the pope proposes the need for education and spirituality. But he dedicates an entire chapter of his encyclical to lines of approach and action. Dialogue is central to every one of them: dialogue in the international community, dialogue in national politics, dialogue and transparency in decision-making, dialogue between religion and science, and politics and economy in dialogue with human fulfilment. This is where Catholic social teaching provides ongoing assistance for those of us committed to taking on the big political challenges confronting the planet and every nation. Fostering dialogue across national borders, across ideological lines, and across disciplines is the key – while still, for those of us from a Christian tradition, anchored in the deep ethical principles of the faith.

Drawing us back to the principles of Catholic social teaching, the pope is able to call decision-makers to have due regard for the common good and not just the interests of their constituents, and to weigh the interests of future generations and not just those who exercise power and voice at the moment. The inability of politicians on all sides to deliver optimal outcomes on issues such as climate change and inequality warrants the sort of papal corrective which we find in Laudato si’.

Provided popes and their advisers remain engaged and troubled by the challenges of the age, whatever they may be, always participating in humble dialogue with experts and decision-makers, the principles of Catholic social teaching will continue to provide a framework for deliberation and action. But whenever popes and their advisers pontificate about solutions and answers comprehensible only to faithful Catholics, forgoing the dialogue with experts and decision-makers or those beyond the church, their teachings will be sterile, dry, and irrelevant to the tasks at hand. At the national level, bishops need to play their part in hosting and fostering such dialogue. But there has not been much of that in Australia these past 30 years. That might be a contributing factor to the malaise in our politics identified by Craven.

Article Published on 25 September 2020.

This article is an extract from Kevin Rudd’s essay in Greg Craven’s book Shadow of the Cross, available here.

The post The Australian: Church has a vital role, but a limited one appeared first on Kevin Rudd.

Planet DebianJunichi Uekawa: Wrote a HTML ping-like something.

Wrote an HTML ping-like something. It uses fetch to fetch a page and measures the time until the 404 response comes back to JavaScript. Here's my http ping. The challenge was writing code to calculate the standard deviation in multiple languages and making sure the results matched, d'oh.
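The idea can be sketched in a few lines, assuming a browser or Node 18+ environment where fetch and performance are globals; the function names and URL handling here are illustrative, not the actual implementation:

```javascript
// Sample standard deviation: the part that's tricky to match across languages.
function stddev(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance =
    xs.reduce((a, x) => a + (x - mean) ** 2, 0) / (xs.length - 1);
  return Math.sqrt(variance);
}

// Time one fetch round trip in milliseconds; a 404 (or any error) still
// yields a timing, since we only care about when the response returns.
async function httpPing(url) {
  const t0 = performance.now();
  await fetch(url, { cache: "no-store" }).catch(() => {});
  return performance.now() - t0;
}

// Ping n times and report the mean and standard deviation of the timings.
async function report(url, n = 5) {
  const times = [];
  for (let i = 0; i < n; i++) times.push(await httpPing(url));
  const mean = times.reduce((a, b) => a + b, 0) / times.length;
  return { mean, stddev: stddev(times) };
}
```

Whether to use the sample (n − 1) or population (n) divisor is exactly the kind of detail that makes cross-language results disagree.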


Cryptogram Friday Squid Blogging: Person in Squid Suit Takes Dog for a Walk

No, I don’t understand it, either.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram I Am Not Satoshi Nakamoto

This isn’t the first time I’ve received an e-mail like this:

Hey! I’ve done my research and looked at a lot of facts and old forgotten archives. I know that you are Satoshi, I do not want to tell anyone about this. I just wanted to say that you created weapons of mass destruction where niches remained poor and the rich got richer! When bitcoin first appeared, I was small, and alas, my family lost everything on this, you won’t find an apple in the winter garden, people only need strength and money. Sorry for the English, I am from Russia, I can write with errors. You are an amazingly intelligent person, very intelligent, but the road to hell is paved with good intentions. Once I dreamed of a better life for myself and my children, but this will never come …

I like the bit about “old forgotten archives,” by which I assume he’s referring to the sci.crypt Usenet group and the Cypherpunks mailing list. (I posted to the latter a lot, and the former rarely.)

For the record, I am not Satoshi Nakamoto. I suppose I could have invented the bitcoin protocols, but I wouldn’t have done it in secret. I would have drafted a paper, showed it to a lot of smart people, and improved it based on their comments. And then I would have published it under my own name. Maybe I would have realized how dumb the whole idea is. I doubt I would have predicted that it would become so popular and contribute materially to global climate change. In any case, I did nothing of the sort.

Read the paper. It doesn’t even sound like me.

Of course, this will convince no one who doesn’t already believe. Such is the nature of conspiracy theories.

Cryptogram Tracking Stolen Cryptocurrencies

Good article about the current state of cryptocurrency forensics.

Cryptogram The Proliferation of Zero-days

The MIT Technology Review is reporting that 2021 is a blockbuster year for zero-day exploits:

One contributing factor in the higher rate of reported zero-days is the rapid global proliferation of hacking tools.

Powerful groups are all pouring heaps of cash into zero-days to use for themselves — and they’re reaping the rewards.

At the top of the food chain are the government-sponsored hackers. China alone is suspected to be responsible for nine zero-days this year, says Jared Semrau, a director of vulnerability and exploitation at the American cybersecurity firm FireEye Mandiant. The US and its allies clearly possess some of the most sophisticated hacking capabilities, and there is rising talk of using those tools more aggressively.


Few who want zero-days have the capabilities of Beijing and Washington. Most countries seeking powerful exploits don’t have the talent or infrastructure to develop them domestically, and so they purchase them instead.


It’s easier than ever to buy zero-days from the growing exploit industry. What was once prohibitively expensive and high-end is now more widely accessible.


And cybercriminals, too, have used zero-day attacks to make money in recent years, finding flaws in software that allow them to run valuable ransomware schemes.

“Financially motivated actors are more sophisticated than ever,” Semrau says. “One-third of the zero-days we’ve tracked recently can be traced directly back to financially motivated actors. So they’re playing a significant role in this increase which I don’t think many people are giving credit for.”


No one we spoke to believes that the total number of zero-day attacks more than doubled in such a short period of time — just the number that have been caught. That suggests defenders are becoming better at catching hackers in the act.

You can look at the data, such as Google’s zero-day spreadsheet, which tracks nearly a decade of significant hacks that were caught in the wild.

One change the trend may reflect is that there’s more money available for defense, not least from larger bug bounties and rewards put forward by tech companies for the discovery of new zero-day vulnerabilities. But there are also better tools.

Worse Than FailureError'd: ;pam ;pam ;pam ;pam

One of this week's entries is the type that drives me buggy. Guess which one.

Regular contributor Pascal splains this shopping saga: "Amazon now requires anti-virus software to have an EPA Registration number."



Survey subject Stephen Crocker poses his own research question. "Do they mean click 'Continue' to continue or click 'Continue' to next?" We may never know.



Cartomanic Mike S. thought he'd found a strange new land but it's just the country formerly known as B*****m, rebranding.
"Usually I keep the live downlink TV from the International Space Station, and generally familiar with most of the countries it goes over but this is a new one by me."



An anonymous email address starting with r2d2 bleeped "This website was clearly written specifically for self-loathing bots." Yes, Marvin, we see you. Come in.



For our final number, singer Peter G. sounds off. "Great, just great. I ordered a graphic equaliser and instead they've sent me an amp amp amp amp amp."




Planet DebianDirk Eddelbuettel: digest 0.6.28 on CRAN: Small Enhancements

Release 0.6.28 of the digest package arrived at CRAN earlier today, and has already been uploaded to Debian as well.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is mature and widely used, as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.
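The hash-key caching idea generalizes beyond R; here is a loose Python analogue using hashlib. The function names are my own illustration, not digest's API:

```python
# Loose Python analogue of content-hash-based caching: serialize an object,
# hash the bytes, and use the hex digest as a cache key. Function names are
# hypothetical; this only mirrors the idea, not the digest package's API.
import hashlib
import pickle


def obj_digest(obj):
    # Hash an arbitrary (picklable) object via its serialized bytes.
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()


_cache = {}


def cached_call(fn, arg):
    # Recompute only when the argument's digest hasn't been seen before.
    key = (fn.__name__, obj_digest(arg))
    if key not in _cache:
        _cache[key] = fn(arg)
    return _cache[key]
```

Equal objects hash equally, so the digest doubles as both an equality check and a compact cache key.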

This release comes eleven months after the previous releases and rounds out a number of corners. Continuous Integration was updated using r-ci. Several contribututors help with a small fix applied to avoid unaligned reads, a rewording for a help page as well as windows path encoding for in the vectorised use case.

My CRANberries provides the usual summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Krebs on SecurityIndictment, Lawsuits Revive Trump-Alfa Bank Story

In October 2016, media outlets reported that data collected by some of the world’s most renowned cybersecurity experts had identified frequent and unexplained communications between an email server used by the Trump Organization and Alfa Bank, one of Russia’s largest financial institutions. Those publications set off speculation about a possible secret back-channel of communications, as well as a series of lawsuits and investigations that culminated last week with the indictment of the same former federal cybercrime prosecutor who brought the data to the attention of the FBI five years ago.

The first page of Alfa Bank’s 2020 complaint.

Since 2018, access to an exhaustive report commissioned by the U.S. Senate Armed Services Committee on data that prompted those experts to seek out the FBI has been limited to a handful of Senate committee leaders, Alfa Bank, and special prosecutors appointed to look into the origins of the FBI investigation on alleged ties between Trump and Russia.

That report is now public, ironically thanks to a pair of lawsuits filed by Alfa Bank, which doesn’t directly dispute the information collected by the researchers. Rather, it claims that the data they found was the result of “highly sophisticated cyberattacks against it in 2016 and 2017” intended “to fabricate apparent communications” between Alfa Bank and the Trump Organization.

The data at issue refers to communications traversing the Domain Name System (DNS), a global database that maps computer-friendly numeric Internet addresses to more human-friendly domain names. Whenever an Internet user gets online to visit a website or send an email, the user’s device sends a query through the Domain Name System.

Many different entities capture and record this DNS data as it traverses the public Internet, allowing researchers to go back later and see which Internet addresses resolved to what domain names, when, and for how long. Sometimes the metadata generated by these lookups can be used to identify or infer persistent network connections between different Internet hosts.

The DNS strangeness was first identified in 2016 by a group of security experts who told reporters they were alarmed at the hacking of the Democratic National Committee, and grew concerned that the same attackers might also target Republican leaders and institutions.

Scrutinizing the Trump Organization’s online footprint, the researchers determined that for several months during the spring and summer of 2016, Internet servers at Alfa Bank in Russia, Spectrum Health in Michigan, and Heartland Payment Systems in New Jersey accounted for nearly all of the several thousand DNS lookups for a specific Trump Organization email server.

This chart from a court filing Sept. 14, 2021 shows the top sources of traffic to the Trump Organization email server over a four month period in the spring and summer of 2016. DNS lookups from Alfa Bank constituted the majority of those requests.
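The tally behind a chart like that is straightforward to reproduce from passive-DNS records; a sketch, using a hypothetical (source, queried-domain) log format and placeholder labels:

```python
# Aggregate passive-DNS lookup records by source network for one target
# domain. The record format and all labels here are hypothetical
# placeholders, not the actual court-filing data.
from collections import Counter

lookups = [
    ("alfa-bank.ru", "trump-org-mail"),
    ("alfa-bank.ru", "trump-org-mail"),
    ("spectrum-health.org", "trump-org-mail"),
    ("alfa-bank.ru", "trump-org-mail"),
    ("example-isp.net", "other-domain"),
]

# Count only lookups of the target domain, keyed by source network.
by_source = Counter(src for src, domain in lookups if domain == "trump-org-mail")
```

Sorting the counter with most_common() gives exactly the "top sources of traffic" ranking the filing presents.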

The researchers said they couldn’t be sure what kind of communications between those servers had caused the DNS lookups, but concluded that the data would be extremely difficult to fabricate.

As recounted in this 2018 New Yorker story, New York Times journalist Eric Lichtblau met with FBI officials in late September 2016 to discuss the researchers’ findings. The bureau asked him to hold the story because publishing might disrupt an ongoing investigation. On Sept. 21, 2016, Lichtblau reportedly shared the DNS data with B.G.R., a Washington lobbying firm that worked with Alfa Bank.

Lichtblau’s reporting on the DNS findings ended up buried in an October 31, 2016 story titled “Investigating Donald Trump, F.B.I. Sees No Clear Link to Russia,” which stated that the FBI “ultimately concluded that there could be an innocuous explanation, like marketing email or spam,” that might explain the unusual DNS connections.

But that same day, Slate’s Franklin Foer published a story based on his interactions with the researchers. Foer noted that roughly two days after Lichtblau shared the DNS data with B.G.R., the Trump Organization email server domain vanished from the Internet — its domain effectively decoupled from its Internet address.

Foer wrote that The Times hadn’t yet been in touch with the Trump campaign about the DNS data when the Trump email domain suddenly went offline. Odder still, four days later the Trump Organization created a new host name, and the very first DNS lookup to that new domain came from servers at Alfa Bank.

The researchers concluded that the new domain enabled communication to the very same server via a different route.

“When a new host name is created, the first communication with it is never random,” Foer wrote. “To reach the server after the resetting of the host name, the sender of the first inbound mail has to first learn of the name somehow. It’s simply impossible to randomly reach a renamed server.”

“That party had to have some kind of outbound message through SMS, phone, or some noninternet channel they used to communicate [the new configuration],” DNS expert Paul Vixie told Foer. “The first attempt to look up the revised host name came from Alfa Bank. If this was a public server, we would have seen other traces. The only look-ups came from this particular source.”


Both the Trump Organization and Alfa Bank have denied using or establishing any sort of secret channel of communications, and have offered differing explanations as to how the data gathered by the experts could have been faked or misinterpreted.

In a follow-up story by Foer, the Trump Organization suggested that the DNS lookups might be the result of spam or email advertising various Trump properties, and said a Florida-based marketing firm called Cendyn registered and managed the email server in question.

But Cendyn told CNN that its contract to provide email marketing services to the Trump Organization ended in March 2016 — weeks before the DNS lookups chronicled by the researchers started appearing. Cendyn told CNN that a different client had been communicating with Alfa Bank using Cendyn communications applications — a claim that Alfa Bank denied.

Alfa Bank subsequently hired computer forensics firms Mandiant and Stroz Friedberg to examine the DNS data presented by the researchers. Both companies concluded there was no evidence of email communications between Alfa Bank and the Trump Organization. However, both firms also acknowledged that Alfa Bank didn’t share any DNS data for the relevant four-month time period identified by the researchers.

Another theory for the DNS weirdness outlined in Mandiant’s report is that Alfa Bank’s servers performed the repeated DNS lookups for the Trump Organization server because its internal Trend Micro antivirus product routinely scanned domains in emails for signs of malicious activity — and that incoming marketing emails promoting Trump properties could have explained the traffic.

The researchers maintained this did not explain similar and repeated DNS lookups made to the Trump Organization email server by Spectrum Health, which is closely tied to the DeVos family (Betsy DeVos would later be appointed Secretary of Education by President Trump).


In June 2020, Alfa Bank filed two “John Doe” lawsuits, one in Pennsylvania and another in Florida. Their stated purpose was to identify the anonymous hackers behind the “highly sophisticated cyberattacks” that they claim were responsible for the mysterious DNS lookups.

Alfa Bank has so far subpoenaed at least 49 people or entities — including all of the security experts quoted in the 2016 media stories referenced above, and others who’d merely offered their perspectives on the matter via social media. At least 15 of those individuals or entities have since been deposed. Alfa Bank’s most recent subpoena was issued Aug. 26, 2021.

L. Jean Camp, a professor at the Indiana University School of Informatics and Computing, was among the first to publish some of the DNS data collected by the research group. In 2017, Alfa Bank sent Camp a series of threatening letters suggesting she was “a central figure” in what the company would later claim was “malicious cyber activity targeting its computer network.” The letters and responses from her attorneys are published on her website.

Camp’s attorneys and Indiana University have managed to keep her from being deposed by both Alfa Bank and John H. Durham, the special counsel appointed by the Trump administration to look into the origins of the Russia investigation (although Camp said Alfa Bank was able to obtain certain emails through the school’s public records request policy).

“If MIT had had the commitment to academic freedom that Indiana University has shown throughout this entire process, Aaron Swartz would still be alive,” Camp said.

Camp said she’s bothered that the Alfa Bank and Trump special counsel investigations have cast the researchers in such a sinister light, when many of those subpoenaed have spent a lifetime trying to make the Internet more secure.

“Not including me, they’ve subpoenaed some people who are significant, consistent and important contributors to the security of American networks against the very attacks coming from Russia,” Camp said. “I think they’re using law enforcement to attack network security, and to determine the ways in which their previous attacks have been and are being detected.”

Nicholas Weaver, a lecturer at the computer science department at University of California, Berkeley, told KrebsOnSecurity he complied with the subpoena requests for specific emails he’d sent to colleagues about the DNS data, noting that Alfa Bank could have otherwise obtained them through the school’s public records policy.

Weaver said Alfa Bank’s lawsuit has nothing to do with uncovering the truth about the DNS data, but rather with intimidating and silencing researchers who’ve spoken out about it.

“It’s clearly abusive, so I’m willing to call it out for what it is, which is a John Doe lawsuit for a fishing expedition,” Weaver said.


Among those subpoenaed and deposed by Alfa Bank was Daniel J. Jones, a former investigator for the FBI and the U.S. Senate who is perhaps best known for his role in leading the investigation into the U.S. Central Intelligence Agency’s use of torture in the wake of the Sept. 11 attacks.

Jones runs The Democracy Integrity Project (TDIP), a nonprofit in Washington, D.C. whose stated mission includes efforts to research, investigate and help mitigate foreign interference in elections in the United States and its allies overseas. In 2018, U.S. Senate investigators asked TDIP to produce and share a detailed analysis of the DNS data, which it did without payment. That lengthy report was never publicly released by the committee nor anyone else.

That is, until Sept. 14, 2021, when Jones and TDIP filed their own lawsuit against Alfa Bank. According to Jones’ complaint, Alfa Bank had entered into a confidentiality agreement regarding certain sensitive and personal information Jones was compelled to provide as part of complying with the subpoena.

Yet on Aug. 20, Alfa Bank attorneys sent written notice that it was challenging portions of the confidentiality agreement. Jones’ complaint asserts that Alfa Bank intends to publicly file portions of these confidential exhibits, an outcome that could jeopardize his safety.

This would not be the first time testimony Jones provided under a confidentiality agreement ended up in the public eye. TDIP’s complaint notes that before Jones met with FBI officials in 2017 to discuss Russian disinformation campaigns, he was assured by two FBI agents that his identity would be protected from exposure and that any information he provided to the FBI would not be associated with him.

Nevertheless, in 2018 the House Permanent Select Committee on Intelligence released a redacted report on Russian active measures. The report blacked out Jones’ name, but a series of footnotes in the report named his employer and included links to his organization’s website. Jones’ complaint spends several pages detailing the thousands of death threats he received after that report was published online.


As part of his lawsuit against Alfa Bank, Jones published 40 pages from the 600+ page report he submitted to the U.S. Senate in 2018. From reviewing its table of contents, the remainder of the unpublished report appears to delve deeply into details about Alfa Bank’s history, its owners, and their connections to the Kremlin.

The report notes that unlike other domains the Trump Organization used to send mass marketing emails, the domain at issue — — was configured in such a way that would have prevented it from effectively sending marketing or bulk emails. Or at least prevented most of the missives sent through the domain from ever making it past spam filters.

Nor was the domain configured like other Trump Organization domains that demonstrably did send commercial email, Jones’ analysis found. Also, the domain was never once flagged as sending spam by any of the 57 different spam block lists published online at the time.

“If large amounts of marketing emails were emanating from, it’s likely that some receivers of those emails would have marked them as spam,” Jones’ 2018 report reasons. “Spam is nothing new on the internet, and mass mailings create easily observed phenomena, such as a wide dispersion of backscatter queries from spam filters. No such evidence is found in the logs.”

However, Jones’ report did find that was configured to accept incoming email. Jones cites testing conducted by one of the researchers who found the rejected messages with an automated reply saying the server couldn’t accept messages from that particular sender.

“This test reveals that either the server was configured to reject email from everyone, or that the server was configured to accept only emails from specific senders,” TDIP wrote.

The report also puts a finer point on the circumstances surrounding the disappearance of that Trump Organization email domain just two days after The New York Times shared the DNS data with Alfa Bank’s representatives.

“After the record was deleted for on Sept. 23, 2016, Alfa Bank and Spectrum Health continued to conduct DNS lookups for,” reads the report. “In the case of Alfa Bank, this behavior persisted until late Friday night on Sept. 23, 2016 (Moscow time). At that point, Alfa Bank ceased its DNS lookups of”

Less than ten minutes later, a server assigned to Alfa Bank was the first source in the DNS data-set examined (37 million DNS records from January 1, 2016 to January 15, 2017) to conduct a DNS look-up for the server name ‘’ The answer received was — the same IP address used for that was deleted in the days after The New York Times inquired with Alfa Bank about the unusual server connections.

“No servers associated with Alfa Bank ever conducted a DNS lookup for again, and the next DNS look-up for did not occur until October 5, 2016,” the report continues. “Three of these five look-ups from October 2016 originated from Russia.”

A copy of the complaint filed by Jones against Alfa Bank is available here (PDF).


The person who first brought the DNS data to the attention of the FBI in Sept. 2016 was Michael Sussmann, a 57-year-old cybersecurity lawyer and former computer crimes prosecutor who represented the Democratic National Committee and Hillary Clinton’s presidential campaign.

Last week, the special counsel Durham indicted Sussmann on charges of making a false statement to the FBI. The New York Times reports the accusation focuses on a meeting Sussmann had Sept. 19, 2016 with James A. Baker, the FBI’s top lawyer at the time. Sussmann had reportedly met with Baker to discuss the DNS data uncovered by the researchers.

“The indictment says Mr. Sussmann falsely told the F.B.I. lawyer that he had no clients, but he was really representing both a technology executive and the Hillary Clinton campaign,” The Times wrote.

Sussmann has pleaded not guilty to the charges.


The Sussmann indictment refers to the various researchers who contacted him in 2016 by placeholder names, such as Tech Executive-1, Researcher-1 and Researcher-2. The tone of the indictment reads as if it were describing a vast web of nefarious or illegal activities, although it doesn’t attempt to address the veracity of any specific concerns raised by the researchers. Here is one example:

“From in or about July 2016 through at least in or about February 2017, however, Originator-1, Researcher-1, and Researcher-2 also exploited Internet Company-1’s data and other data to assist Tech Executive-1 in his efforts to conduct research concerning Trump’s potential ties to Russia.”

Quoting from emails between Tech Executive-1 and the researchers, the indictment makes clear that Mr. Durham has subpoenaed many of the same researchers who’ve been subpoenaed and/or deposed in the concurrent John Doe lawsuits from Russia’s Alfa Bank.

To date, Alfa Bank has yet to name a single defendant in its lawsuits. In the meantime, the Sussmann indictment is being dissected by many users on social media who have been closely following the Trump administration’s inquiry into the Russia investigation. The majority of these social media posts appear to be crowdsourcing an effort to pinpoint the real-life identities behind the placeholder names in the indictment.

At one level, it doesn’t matter which explanation of the DNS data you believe: There is a very real possibility that the way this entire inquiry has been handled could negatively affect the FBI’s ability to collect crucial and sensitive investigative tips for years to come.

After all, who in their right mind is going to volunteer confidential information to the FBI if they fear there’s even the slightest chance that future shifting political winds could end up seeing them prosecuted, threatened with physical violence or death on social media, and/or exposed to expensive legal fees and depositions from private companies as a result?

Such a perception could give rise to a sort of “chilling effect,” discouraging honest, well-meaning people from speaking up when they suspect or know about a potential threat to national security or sovereignty.

This would be a less-than-ideal outcome in the context of today’s top cyber threat for most organizations: Ransomware. With few exceptions, the U.S. government has watched helplessly as organized cybercrime gangs — many of whose members hail from Russia or from former Soviet nations that are friendly to Moscow — have extorted billions of dollars from victims, and disrupted or ruined countless businesses.

To help shift the playing field against ransomware actors, the Justice Department and other federal law enforcement agencies have been trying to encourage more ransomware victims to come forward and share sensitive details about their attacks. The U.S. government has even offered up to $10 million for information leading to the arrest and conviction of cybercriminals involved in ransomware.

But given the way the government has essentially shot all of the messengers with its handling of the Sussmann case, who could blame those with useful and valid tips if they opted to stay silent?

Cryptogram ROT8000

ROT8000 is the Unicode equivalent of ROT13. What’s clever about it is that normal English looks like Chinese, and not like ciphertext (to a typical Westerner, that is).
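The core idea fits in a few lines. Below is a deliberately naive Python sketch: it rotates every 16-bit code point by half the BMP, which makes the transform self-inverse, just like ROT13. The actual rot8000 implementations rotate only through printable characters, skipping control characters and surrogates, so their output will differ from this sketch.

```python
def rot8000_naive(text: str) -> str:
    # Naive sketch of the ROT8000 idea: rotate each BMP code point
    # by 0x8000 (half of the 16-bit space). Applying it twice is the
    # identity, since 2 * 0x8000 wraps around to zero.
    # NOTE: real rot8000 libraries rotate only through *printable*
    # characters; this is just the core concept.
    return "".join(
        chr((ord(c) + 0x8000) % 0x10000) if ord(c) <= 0xFFFF else c
        for c in text
    )

plain = "Attack at dawn"
cipher = rot8000_naive(plain)     # ASCII lands in the CJK ideograph range
assert rot8000_naive(cipher) == plain
```

Because ASCII code points land in the CJK Unified Ideographs block after rotation, ordinary English comes out looking like Chinese text rather than obvious ciphertext.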

Kevin RuddABC Radio National Breakfast: Kevin Rudd on Scott Morrison’s handling of nuclear subs deal


23 September 2021 – ABC Radio National

Scott Morrison
We understand the disappointment, and that is the way you manage difficult issues. It’s a difficult decision. It’s a very difficult decision. And of course, we had to weigh up what would be the obvious disappointment to France. But at the end of the day, as a government, we have to do what is right for Australia and serve Australia’s national security interests. And I will always choose Australia’s national security interests first.

Fran Kelly
That’s the Prime Minister speaking from Washington just a short time ago. Well, former prime minister Kevin Rudd has weighed into this whole issue. He’s written an opinion piece for the French newspaper Le Monde, in which he describes the decision to tear up the contract quote, “as a foreign policy debacle”. Kevin Rudd welcome again to breakfast.

Kevin Rudd
Good to be with you Fran.

Fran Kelly
Kevin Rudd, it’s one thing for a former prime minister to criticize Australian policy here at home. It’s another thing to do it in the pages of a newspaper abroad. Is it disloyal for a former PM to go public and take their criticisms of their own country overseas like this? Did you consider this?

Kevin Rudd
Absolutely, you will see that Fran, from the first paragraph of my opinion piece in Le Monde, a day or so ago, which says, it is not usual to write such things in a foreign newspaper as an opinion piece. But this is a matter of such an order of magnitude, given the depth of Australia’s long term political and strategic relationship with France, more broadly with Europe, the impact of this decision in South-East Asia and now forcing President Biden into, frankly, a humiliating apology to the French in the joint statement issued between Macron, the French president and himself following their bilateral discussion yesterday. This has got foreign policy debacle written all over it. That’s why I’ve weighed into this debate. Because I believe as someone who is responsible back in 2012, for negotiating with the French, the joint strategic framework between Australia and France, that it was important to engage in the debate in the way in which I have.

Fran Kelly
Why though did you feel it was your duty to express your deep regret at the way this decision was handled by the Morrison government, you know, and to do it so directly to the French people?

Kevin Rudd
Because there has been an enormous investment by Australia, not just under my government, but also under Turnbull’s government, in building a broad strategic relationship with the French. The French are members of the UN Security Council. They are members of the G7, members of the G20, where Australia is also a member. Together with Germany, they drive the future of the European Union, as well as therefore the future of Australian trade interests in Brussels. And therefore, it’s important for the wider French public to know that there are reservations in Australia, both from myself and frankly, from former prime minister Turnbull, about the way in which this matter has been handled. It has been a debacle. I return to what I said about the joint statement issued by Biden and Macron. What Morrison has done by insisting on secrecy in the way in which this notification to the French was going to occur has been driven, in my judgment, by his domestic political interests in Australia. If there was a bona fide reason for changing the project design from conventional submarines to nuclear-powered submarines, the normal way you would treat an ally, the basic requirement, is to bring in the French ambassador, speak to the French President, speak to the French contractors, Naval, and if you’re going to go to nuclear-powered vessels, then to retender the process, and invite the French, the British and the Americans to participate. That’s the way in which a professional government would handle this. Not the rolling amateur hour stuff we’ve seen from Morrison.

Fran Kelly
You were very strong in the piece about, you didn’t mention the word amateur hour, but that’s the description, basically, if you read between the lines you write the Morrison government, quote, “failed to adhere to basic diplomatic protocols by not telling the French until the very last moment”. This was tantamount, quote “to deceptive and misleading conduct”. So the Prime Minister says, you know, confidentiality, secrecy needed to be adhered to, in order to make this occur. What do you believe the Prime Minister should have done once it was decided that the newest US nuclear submarines were in Australia’s best strategic interests?

Kevin Rudd
Well, it’s interesting you used the phrase in order to make this occur, that secrecy was necessary. If that was the view, for example, for deep strategic reasons between ourselves and, say, the United States, why did President Biden co-author with Emmanuel Macron, the President of France today, a statement which says, and I quote, “The two leaders agreed that the situation would have benefited from open consultation among allies on matters of strategic interest to France and our European partners. President Biden conveyed his ongoing commitment in that regard”, unquote. That’s Biden disagreeing now with the secrecy which I assume Morrison requested of the Americans in the first place. So why were they secret? I assume that where this has come from is not a deep strategic debate about the future nature of Australia’s submarine fleet, though that will partly influence it. The secrecy factor has proceeded from what really drives Morrison here, which is a domestic political agenda shift given pandemic impacts on his government’s re-electability and the desperate need to have a massive agenda shift to national security, with him looking hairy-chested on China with new nuclear boats, and the Australian Labor Party in his hopes and wildest dreams looking like a bunch of pacifists. That’s what this was about. But Biden has now blown the whistle on it by publicly apologizing to the French for the way in which this was handled, leaving our bloke, Morrison, out there like a shag on a rock.

Fran Kelly
Well our bloke Morrison, as you describe him, says the French had been well aware of concerns that Australia had with the submarines for some time now in terms of their suitability, cost and timing. He said he’d had conversations along those lines with the President himself. And he explained that when the submarine deal was signed in 2016, the US was unprepared to share its nuclear technology; that’s changed now because of the threats facing our region. I mean, first off, do you accept, even if you don’t like the way it was handled, do you accept this was a decision taken in Australia’s national security interest?

Kevin Rudd
I certainly believe the national security community in Canberra would be examining and re-examining the nature of the boats that we need given our strategic circumstances. What I can say, however, is that the substance of the recommendation about the nature of the vessels, their relative stealth, their ability not to be detected by any other navy, their requirement for regular snorkelling, the signatures of the individual boats, are all matters of rolling technical analysis; I accept that. What I do not accept is the sudden dramatic attempted political wow factor of this particular announcement, when the only explanation for secrecy about the unilateral cancellation of the French contract is that Morrison was seeking a wow factor in relation to Australian domestic politics, and possibly a broader wow factor in the international community. There comes a point, however, when a wow factor becomes an oops factor, and if you read Biden’s statement this morning, wow has really become oops, from an American perspective.

Fran Kelly
Is it possible, though, that the French reaction, the reaction from President Macron, is also, you know, being exaggerated in a domestic political sense because of an impending election? I mean, is it really, I suppose what I’m really asking is, is it really likely in your view that such a deep and long-standing relationship as there is between Australia and the French will be permanently damaged by an action like this? You yourself referenced the 50,000 Australian sons buried in French soil from the First World War. Won’t that continue to mean something, in fact something very significant, to the French?

Kevin Rudd
The damage caused by this unilateral decision to cancel this project, this $90 billion project, will be long-standing and will last certainly as long as this incompetent Australian government lasts. The bottom line is that a decision to change the nature of the project specification is one thing; botching the diplomatic and political handling of it with the French, a long-trusted strategic partner and friend, is something which creates its own set of additional problems. That’s where we find ourselves at present. For the long term, obviously, the French will be playing their own domestic politics on this, I understand that fully. But read the text, and speak with the French government, about the significance which Macron and the French Armed Forces attached to this $90 billion project for French industry in partnership with the Australian Submarine Corporation in Adelaide, plus the fact that, from Macron’s own statement, it underpinned the entire French engagement with, and support for, a wider Indo-Pacific strategy in dealing with China’s rise. Frankly, what I would have wanted to argue in the cabinet room when Morrison came up with his bright idea about how to handle a change in the boat specification with the French was simply to say, “understand that the French now have every possibility of working against our wider strategic interests, not just in Brussels, but in a broader sense of alliance solidarity in dealing with China’s rise”. That’s where the cost to Australia has yet to be fully calculated.

Fran Kelly
Okay, what about the cost in the relationship with the United States, which you’ve referenced several times already? US President Joe Biden spoke with French President Emmanuel Macron overnight. He’s acknowledged, Biden has acknowledged, quote, “there could have been greater consultation”; the White House press secretary says the president, quote, “holds himself responsible”. You clearly hold Scott Morrison responsible. But do you think Joe Biden also might somewhere, privately, be holding Scott Morrison responsible?

Kevin Rudd
I would judge that what has happened here is that somehow the Americans at some level got suckered into what was supposed to be a wow factor for Scott Morrison’s interests in Australian domestic politics, and therefore the normal approval processes for major decisions of this nature in the US administration somehow were not deployed. Where were the NATO departments? Where were the European departments? Where were those concerned with nuclear non-proliferation? Where were those who would have asked this basic question, Fran: can these boats be built in time for Australia? Or is Australia going to end up being left strategically naked, given the massive new build times for nuclear boats, assuming they can be delivered and/or serviced in Australia? So it seems that within the US administration, it was simply not handled properly because Morrison, it seems, insisted on all this secrecy.

Fran Kelly
Just one final question. Scott Morrison calls AUKUS a forever partnership; Paul Keating calls it a backward step to a quote “jaded and faded Anglosphere”, and he’s criticized Labor for what he describes as complicity in agreeing to the subs deal, which will quote “neuter Australia’s right to strategic autonomy”. Did Anthony Albanese make the wrong call or the right call, in your view, in backing in the nuclear subs?

Kevin Rudd
Well I certainly have read what Paul Keating has said, but my overall position is simply this, both Albo, Anthony Albanese plus Shadow Foreign Minister Penny Wong have made absolutely the right call, because they’ve provided highly conditional support for this project proceeding. Impact on nonproliferation, impact on Australia’s ability to service these boats, as well as posing questions in the public debate about the future operational sovereignty which Australia would have over the submarine fleet in the future. These are the right national interest questions to raise and conditions to attach for an Australian Labor government to move in full support of this project. So I think they’ve acted appropriately and conditionally.

Fran Kelly
Kevin Rudd, thank you very much for joining us again on breakfast.

Kevin Rudd
Good to be with you.


The post ABC Radio National Breakfast: Kevin Rudd on Scott Morrison’s handling of nuclear subs deal appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Dash of SQL

As developers, we often have to engage with management who doesn't have a clue what it is we do, or how. Even if that manager was technical once, their technical background is frequently out of date, and their spirit has been sapped by the endless meetings and politics that being a manager entails. And it's often these managers who have some degree of control over where our career is going to progress, so we need to make them happy.

Which means… <clickbait-voice>LEVEL UP YOUR CAREER WITH THIS ONE SIMPLE TRICK!</clickbait-voice>. You need to make managers happy, and if there's one thing that makes managers happy, it's dashboards. Take something complicated and multivariate, and boil it down to a simple system. Traffic lights are always a favorite: green is good, red is bad, yellow is also bad.

It sounds stupid, because it is, but one of the applications that got me the most accolades was a dashboard application. It was an absolute trainwreck of code that slurped data from a dozen different silos and munged it together via a process the customer was always tweaking, and turned the complicated mathematics of how much wastage there was in an industrial process into a simple traffic light icon. Upper managers used it and loved it, because that little glowing green light gave them all the security they needed, and when one of those lights went yellow or worse, red, they could swoop in and do management until the light turned green again.

Well, Kaspar also supports a dashboard application. It also slurps giant piles of data from a variety of sources, and it tries to turn some key metrics into simple letter grades- "A" through "E".

This particular query is about 400 lines of subqueries connected via LEFT JOIN. The whole thing is messy in the way that only giant SQL queries that are trying to restructure and reshape data in extreme ways can be. That's not truly a WTF, but several of these subqueries do something… special.

(select Rating_mangler = case WHEN VALUE = '' THEN '' WHEN a_id in (SELECT FROM actor, f_cache, f_math WHERE AND IN ('L43A0', 'L43A1', 'L43A2A3', 'L33OEKO') AND AND filter = '' AND treaarsregle=1 AND e_count_value=0) THEN '' WHEN VALUE < '1.9999999999' THEN 'E' WHEN VALUE >= '2' and VALUE< '2.99999999' THEN 'D' WHEN VALUE >='3' and VALUE < '3.999999999' THEN 'C' WHEN VALUE >='4' and VALUE < '4.999999999' THEN 'B' ELSE 'A' END, a_id From f_cache, f_math where and in ('L_V_mangler_p') and filter = '' and treAarsregle=1 and pricetype=2 and e_count_hp=0) as Rating_mangler

Specifically, I want to highlight the chain of WHEN clauses in that case. We're translating ranges into letter grades, but those ranges are stored as text. We're doing range queries on text: WHEN VALUE >= '2' and VALUE< '2.99999999' THEN 'D'.

Now, this has some interesting effects. First, if the VALUE is "20", that's a "D". A value of "100" is going to be an "E". And since it's text, "WTF" is also going to be an "A".

We can hope that input validation at least keeps most of those values out. But this pattern repeats. There are other subqueries in this whole chain. Like:

(select Rating_Arbejdsulykker = case WHEN VALUE = '' THEN '' WHEN VALUE < '1.9999999999' THEN 'E' WHEN VALUE >= '2' and VALUE< '2.99999999' THEN 'D' WHEN VALUE >='3' and VALUE < '3.999999999' THEN 'C' WHEN VALUE >='4' and VALUE < '4.999999999' THEN 'B' ELSE 'A' END, a_id From f_cache, f_math where and in ('L_V_ulykker_p') and filter = '' and treAarsregle=1 and pricetype=2 and e_count_hp=0) as Rating_Arbejdsulykker

And yet again, but for bonus points, we do it using a totally different way of describing the range:

(select Rating_kundetilfredshed = case WHEN a_id in (SELECT FROM actor, f_cache, f_math WHERE AND IN ('L153', 'L153LOYAL') AND AND filter = '' AND treaarsregle=1 AND e_count_value=0) THEN '' WHEN VALUE = '' THEN '' WHEN VALUE = '1' THEN 'E' WHEN VALUE >= '1.000001' and VALUE<= '2.00001' THEN 'D' WHEN VALUE >='2.00001' and VALUE <= '3.00001' THEN 'C' WHEN VALUE >'3.00001' and VALUE <= '4.00001' THEN 'B' ELSE 'A' END, a_id From f_cache, f_math where and in ('L153_AVG') and filter = '' and treAarsregle=1 and pricetype=2 and e_count_hp=0) as Rating_kundetilfredshed

Unlike the others, this one would score values less than "1" as an "A". Which who knows, maybe values less than one are prevented by input validation. Of course, if they stored numbers as numbers then we could compare them as numbers, and all of this would work correctly without having to take it on faith that the data in the database is good.
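For contrast, this is roughly what the numeric version looks like. This is an illustrative Python sketch, not the original system; in SQL it would mean a numeric column type, or at minimum an explicit CAST before the range checks, and the handling of non-numeric input here is a hypothetical choice.

```python
def grade_as_number(value: str) -> str:
    # Compare as numbers, not text. Non-numeric input is surfaced
    # instead of silently earning an 'A'.
    if value == '':
        return ''
    try:
        v = float(value)
    except ValueError:
        return '?'   # flag bad data rather than grade it
    if v < 2:
        return 'E'
    if v < 3:
        return 'D'
    if v < 4:
        return 'C'
    if v < 5:
        return 'B'
    return 'A'

print(grade_as_number('3.5'))  # 'C' -- same as before for sane input
print(grade_as_number('20'))   # 'A' -- twenty is now off the top of the scale
print(grade_as_number('WTF'))  # '?' -- bad data no longer gets a top grade
```

With numeric comparison the ranges mean what they say, and garbage input becomes visible instead of being quietly graded.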

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianDirk Eddelbuettel: prrd 0.0.5: Incremental Mode

prrd facilitates the parallel running of reverse dependency checks when preparing R packages. It is used extensively for Rcpp, RcppArmadillo, RcppEigen, BH, and others.

prrd screenshot image

The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is no or little interdependency between them (besides maybe shared build dependencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session).

This release brings some new features I used of late when testing and re-testing reverse dependencies for Rcpp. Enqueuing jobs can now consider the most recent prior job queue file. This allows us to find new packages that were not part of the previous runs. We added a second toggle to also add those packages which failed in the previous run. Finally, the dequeue interface allows one to specify a date (rather than defaulting to the current date), which is useful for long-running jobs or restarts.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.5 (2021-09-22)

  • Some remaining http URLs were changed to https.

  • The dequeueJobs script has a new argument date to help specify a queue file.

  • The enqueueJobs script can now compute just a ‘delta’ of (new) packages relative to a given prior queuefile and run.

  • When running in ‘delta’ mode, previously failed packages can also be selected.

My CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianGunnar Wolf: New book out! «Mecanismos de privacidad y anonimato en redes, una visión transdisciplinaria»

Three years ago, I organized a fun and most interesting colloquium at Facultad de Ingeniería, UNAM about privacy and anonymity online.

I would have loved to share this earlier with the world, but… The university’s processes are quite slow (and, to be fair, I also took quite a bit of time to push things through). But today, I’m finally happy to share the result of that work with all of you. We managed to get 11 of the talks in the colloquium as articles. The back-cover text reads (in Spanish):

We live in an era where human to human interactions are more and more often mediated by technology. This, of course, means everything leaves a digital trail, a trail that can follow us relentlessly. Privacy is recognized, however, as a human right — although one that is under growing threats. Anonymity is the best tool to secure it. Throughout history, clear steps have been taken –legally, technically and technologically– to defend it. Various studies point out this is not only a known issue for the network's users, but that a large majority has searched for alternatives to protect their communications' privacy. This book stems from a colloquium held by *Laboratorio de Investigación y Desarrollo de Software Libre* (LIDSOL) of Facultad de Ingeniería, UNAM, towards the end of 2018, where we invited experts from disciplines as far apart as law and systems development, psychology and economics, to contribute with their experiences to a transdisciplinary vision.

If this interests you, you can get the book at our institutional repository.

Oh, and… What about the birds?

In Spanish (Mexican only?), we have a saying, «hay pájaros en el alambre», meaning watch your words, as uninvited people might be listening, like birds resting on the wires over which phone calls used to be made (back in the day when wiretapping was that easy). I found the design proposed by our editor ingenious and very fitting for our topic!

Planet DebianIan Jackson: Tricky compatibility issue - Rust's io::ErrorKind

This post is about some changes recently made to Rust's ErrorKind, which aims to categorise OS errors in a portable way.

Audiences for this post

  • The educated general reader interested in a case study involving error handling, stability, API design, and/or Rust.
  • Rust users who have tripped over these changes. If this is you, you can cut to the chase and skip to How to fix.

Background and context

Error handling principles

Handling different errors differently is often important (although, sadly, often neglected). For example, if a program tries to read its default configuration file, and gets a "file not found" error, it can proceed with its default configuration, knowing that the user hasn't provided a specific config.

If it gets some other error, it should probably complain and quit, printing the message from the error (and the filename). Otherwise, if the network fileserver is down (say), the program might erroneously run with the default configuration and do something entirely wrong.

Rust's portability aims

The Rust programming language tries to make it straightforward to write portable code. Portable error handling is always a bit tricky. One of Rust's facilities in this area is std::io::ErrorKind which is an enum which tries to categorise (and, sometimes, enumerate) OS errors. The idea is that a program can check the error kind, and handle the error accordingly.

That these ErrorKinds are part of the Rust standard library means that to get this right, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can check whether the error is ErrorKind::NotFound (or whatever).

Because ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an ErrorKind. For this purpose, Rust provides a special category ErrorKind::Other, which doesn't correspond to any particular OS error.
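As a sketch of that usage (parse_port is a made-up function for illustration, not from any real API), a library that isn't doing an OS call but must still surface an io::Error might synthesize one with the catch-all kind:

```rust
use std::io::{Error, ErrorKind};

// hypothetical example: a parser that performs no OS call, but whose callers
// expect io::Error, so it manufactures one with the Other kind
fn parse_port(s: &str) -> Result<u16, Error> {
    s.trim()
        .parse()
        .map_err(|e| Error::new(ErrorKind::Other, format!("bad port: {e}")))
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert_eq!(parse_port("not-a-port").unwrap_err().kind(), ErrorKind::Other);
}
```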

Rust's stability aims and approach

Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:

  1. Avoid making changes which would contradict the previously-published documentation of Rust's language and features.
  2. Tell you if you accidentally rely on properties which are not part of the published documentation.

By and large, this has been very successful. It means that if you write code now, and it compiles and runs cleanly, it is quite likely that it will continue to work properly in the future, even as the language and ecosystem evolves.

This blog post is about a case where Rust failed to do (2), above, and, sadly, it turned out that several people had accidentally relied on something the Rust project definitely intended to change. Furthermore, it was something which needed to change. And the new (corrected) way of using the API is not so obvious.

Rust enums, as relevant to io::ErrorKind

(Very briefly:)

When you have a value which is an io::ErrorKind, you can compare it with specific values:

    if error.kind() == ErrorKind::NotFound { ...
But in Rust it's more usual to write something like this (which you can read like a switch statement):
    match error.kind() {
      ErrorKind::NotFound => use_default_configuration(),
      _ => panic!("could not read config file {}: {}", &file, &error),
    }

Here _ means "anything else". Rust insists that match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the _, it wouldn't compile.

Rust enums can also be marked non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for ErrorKind, so the _ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new ErrorKinds appear, they won't stop your code compiling.

Improving the error categorisation

The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.

The trouble with Other and tests

Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to ErrorKind::Other - reusing the category for "not an OS error at all".

Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct.

But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other.
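The "diligent" check described above might look like this sketch; the error number 28 is ENOSPC on Linux, and real code would use the libc crate's constant plus the corresponding Windows error codes rather than a hard-coded number:

```rust
use std::io::Error;

// the diligent check: compare the raw OS error number rather than the kind.
// 28 is ENOSPC on Linux; a portable version needs libc::ENOSPC and the
// equivalent Windows codes as well.
fn is_disk_full(e: &Error) -> bool {
    e.raw_os_error() == Some(28)
}

fn main() {
    let e = Error::from_raw_os_error(28);
    assert!(is_disk_full(&e));
}
```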

Obvious but wrong. ErrorKind is non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors.

Unfortunately, the documentation note

Errors that are Other now may move to a different or a new ErrorKind variant in the future.
was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything.

The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.

Chosen solution: Uncategorized

The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as Other. They chose the following solution:

There is now a new ErrorKind::Uncategorized which is now used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized.

This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice because this bug wasn't previously detected, there is such code in the wild. That code now breaks (usually, in the form of failing test cases). Usually when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change.

The new ErrorKind::Uncategorized is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new ErrorKinds are added. That's the intended effect.

The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.

Alternatives considered and rejected by the Rust developers

Not adding more ErrorKinds

This was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.

Just adding ErrorKinds as had been done before

This would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now. So this option amounts to expecting this broken code to continue to be written and continuing to break it occasionally.

Somehow using Rust's Edition system

The Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually.

It's not clear how this could be used for ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling.

Also some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.

How to fix code broken by this change

Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.
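A minimal sketch of the fixed pattern (the handler strings are placeholders):

```rust
use std::io::{Error, ErrorKind};

// sketch of the fix: the fallback arm is `_`, never ErrorKind::Other
fn describe(e: &Error) -> &'static str {
    match e.kind() {
        ErrorKind::NotFound => "use default configuration",
        // also covers Other, Uncategorized, and any kinds added later
        _ => "report the error and quit",
    }
}

fn main() {
    assert_eq!(describe(&Error::from(ErrorKind::NotFound)), "use default configuration");
    assert_eq!(describe(&Error::new(ErrorKind::Other, "x")), "report the error and quit");
}
```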

How to fix thorough tests

The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable Uncategorized variant that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure".

What does "any of the recognised kinds" mean here ? It doesn't meany any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written".

IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right.

You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately - and fix any resulting test failures.
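A sketch of such a test helper; the list here is abbreviated, and the real one should be the full set of stable ErrorKinds pasted from the stdlib documentation at the time of writing:

```rust
use std::io::{Error, ErrorKind};

// Frozen copy of the stable ErrorKinds that existed when the tests were
// written (abbreviated here). Deliberately NOT kept in sync with the stdlib:
// when the lists diverge, this frozen one is the set the test judges against.
const KNOWN_KINDS: &[ErrorKind] = &[
    ErrorKind::NotFound,
    ErrorKind::PermissionDenied,
    ErrorKind::ConnectionRefused,
    ErrorKind::AlreadyExists,
    ErrorKind::InvalidInput,
    ErrorKind::Other,
];

// Fail the test if the OS reported an error we recognise, since the scenario
// expects an error that was uncategorised when the test was written.
fn assert_unrecognised(e: &Error) {
    assert!(!KNOWN_KINDS.contains(&e.kind()), "unexpected kind: {:?}", e.kind());
}

fn main() {
    // ENOSPC (28 on Linux) was not in the frozen list above
    assert_unrecognised(&Error::from_raw_os_error(28));
    // a NotFound error, by contrast, is in the frozen list
    assert!(KNOWN_KINDS.contains(&Error::from(ErrorKind::NotFound).kind()));
}
```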

Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural.


This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good.

It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.

edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips

comment count unavailable comments

Cryptogram FBI Had the REvil Decryption Key

The Washington Post reports that the FBI had a decryption key for the REvil ransomware, but didn’t pass it along to victims because it would have disrupted an ongoing operation.

The key was obtained through access to the servers of the Russia-based criminal gang behind the July attack. Deploying it immediately could have helped the victims, including schools and hospitals, avoid what analysts estimate was millions of dollars in recovery costs.

But the FBI held on to the key, with the agreement of other agencies, in part because it was planning to carry out an operation to disrupt the hackers, a group known as REvil, and the bureau did not want to tip them off. Also, a government assessment found the harm was not as severe as initially feared.

Fighting ransomware is filled with security trade-offs. This is one I had not previously considered.

Another news story.

Kevin RuddThe Guardian: Paris has a long memory – Scott Morrison’s cavalier treatment of France will hurt Australia

By Kevin Rudd.

Scott Morrison’s determination to put political spin over national security substance in welcoming a new era of nuclear submarines (now to be brought to you exclusively from the Anglosphere) has undermined one of our most enduring and important global relationships – namely the French Republic.

While the prime minister’s office would have been delighted with the television images from Washington and London to show the “fella from down under” mixing it with the big guys and being hairy chested about China, no one there seems to have given a passing thought to the cost to Australian interests that will come from Morrison’s cavalier treatment of France.

There are many reasons to question the wisdom of the government’s hurried decision to “go nuclear” on the eve of a federal election – including the accuracy of technical assumptions concerning the noise footprint of different vessels, their surfacing requirements, their levels of stealth, the ability of Australia in the absence of a domestic nuclear industry to build and service nuclear-powered boats, as well as the implications for full inter-operability with the nuclear fleets of the US and the UK for future combined operations in our region.

These have all been ventilated in the public debate as the government’s rolling incompetence on such a critical project over the last eight years has been put under the microscope. But so far there has been little discussion of the impact of France no longer being Australia’s trusted friend and supporter in critical institutions around the world.

Adjusting the needs of our submarine replacement program based on changing strategic circumstances or critical technical advice is one thing. But doing it without even the most basic of courtesies to the French is another thing altogether.

At the very least, and if for no other reason than to save the Australian taxpayer the billions of dollars already spent (not to mention the lengthy court case that may now ensue if Australia is sued for damages by the French Naval Group), Morrison could have invited France to bid for a new tender, or to continue to provide the hulls while the Americans provided the propulsion for the replacement nuclear-powered boats.

The French have been building nuclear-powered boats for decades.

If, as Morrison would like us to believe, his meeting with Joe Biden in Cornwall in June was widened to include Boris Johnson for the purpose of inking this deal, why did he not advise the French when he visited Paris just a few days later?

If it has only come about more recently, how could he have allowed Marise Payne and Peter Dutton to underline the importance of the submarine deal to the French just three weeks before the cancellation of the contract?

But, most egregiously, how could he have allowed the French to learn of this via media reports before a call from The Lodge?

For these reasons, it is understandable that France’s foreign minister, Jean-Yves Le Drian, described the move as “a stab in the back”. Had this happened to Australia, we would have reacted in the same way because we would have felt betrayed by a friend.

It might be easy to dismiss the French reaction as diplomatic theatre. But France has now withdrawn its ambassadors from Canberra and Washington.

This is the first time the French have withdrawn their ambassador from the US since they established relations amid the American revolutionary war. Even in the height of their disagreement with Washington over the Iraq war, they did not take this step. Nor did relations between Canberra and Paris sink this low when we took them to the international court of justice over their nuclear testing in the Pacific.

Paris has a long memory.

Now Morrison’s botched diplomacy has reverberated right across the Atlantic, fracturing relations between the US, the UK and France, and undermining western solidarity on the overall challenge of China’s rise. All because Morrison wanted to deliver a huge political agenda shift back in Australia where he is now lagging badly in the polls because of his other major botch job: vaccines, quarantine and the pandemic.

For a middle power like Australia, being so casually prepared to destroy our relationship with France runs the risk of real long-term consequences. As a G7 and G20 economy, a permanent member of the security council, a key member of Nato, one of the two key decision makers within the EU, and a Pacific power at that, France has a big global and regional footprint.

That’s why in 2012 as foreign minister, I negotiated a new joint strategic partnership with France which I signed with my French counterpart in Paris. That agreement covers collaboration across the breadth of foreign and defence policy, trade, investment, technology, international economic policy and climate. Malcolm Turnbull doubled down on that strategic partnership in 2017 before the final submarine deal was even done.

So what could ensue? First, the EU will make decisions after the Glasgow summit on climate change whether to impose “border adjustment” measures – tariffs – against those countries dragging the chain on their national contributions to greenhouse gas emissions.

That means a tax on Australian exports. And which way will Paris now go on that one?

Second, Australia has been frantically seeking to negotiate a free trade agreement with the EU like Canada’s. What are the prospects now of Paris accepting the demands of Australian farmers to have greater access to the European market given France’s historical support for the common agricultural policy?

Third, what about Australia’s interests in the UN and the G7, where France, through the global Francophone community, carries enormous influence and can therefore frustrate any future Australian multilateral initiative or Australian candidature?

Beyond all this, the horrifying message for our allies, friends and partners around the world is that our word now counts for nothing; that we shouldn’t be trusted; and that ultimately Australia refuses to move beyond the narrow cocoon of the Anglosphere in augmenting its foreign policy and national security interests – precisely at a time when fundamental shifts in the global and regional balance of power are unfolding beneath our feet.

Published 22 September 2021 in The Guardian.

Photograph: Stephen Yang/Reuters

The post The Guardian: Paris has a long memory – Scott Morrison’s cavalier treatment of France will hurt Australia appeared first on Kevin Rudd.

Kevin RuddLe Monde: Canberra’s decision on submarines deepens strategic tensions in Southeast Asia

Written by Kevin Rudd.

It is unusual for a former prime minister of a country to criticise the decisions of a successor prime minister in the opinion pages of a foreign newspaper. While I have long been fiercely critical of the current conservative government of Australia in our domestic political debate on the overall direction of our country’s foreign policy, in the years since I left office, I have rarely put pen to paper to ventilate such criticism abroad. But given the Australian government’s gross mishandling of its submarine replacement project with France, as well as the importance I attach to Canberra’s strategic relationship with Paris, I believe I have a responsibility as a former prime minister to make plain my own perspective on this most recent and extraordinary foreign policy debacle by the current Australian government.

I believe the Morrison Government’s decision is deeply flawed in a number of fundamental respects. It violates the spirit and letter of the Australia-France strategic framework of 2012 and later enhanced by prime minister Turnbull in 2017. It fails the basic contractual obligation of Australia to consult with the French Naval Group if Australia decided to radically change the tender specification from 12 conventional submarines to 8 nuclear-powered ones. It is wrong that Australia has not offered France the opportunity to re-tender (in part or in whole) for these nuclear boats, despite the fact that France has long-standing experience in making them. Beyond these basic breaches, Morrison also failed to adhere to basic diplomatic protocols in not officially notifying the French government of its unilateral decision prior to the public announcement of the cancellation of the contract. And finally, there is Canberra’s failure to comprehend the repercussions of this decision for France itself – and for broader international solidarity in framing a coordinated response to China’s rise.

Australia’s relationship with France has a long and intimate history. Nearly 50,000 of our sons lie buried in French soil in the defence of France and Belgium in the killing fields of the First World War. These were military theatres in which nearly a quarter of a million Australians had served. Indeed, in 1914, this represented fully 5% of our entire national population. We were also allies together in the Second World War against fascist Germany – including military campaigns against the Vichy in both the Pacific and in the Middle East. My own father, for example, fought with the Free French in the Syrian campaign of 1941. While bilateral relations became deeply strained over French nuclear testing in the South Pacific between the 1960’s and 1990’s, once Paris conducted its last test, relations rapidly normalised. Since then, Australia has welcomed France’s long-standing political presence in the Pacific in New Caledonia, French Polynesia and Wallis and Futuna as stabilising in the wider region. Just as we have valued France’s critical role in the EU, NATO, G7, G20, the UN – and the wider Francophone world.

For these reasons, as prime minister, and foreign minister of Australia, I sought to put our relations with France on a new institutional footing. The then French Foreign Minister, Alain Juppé, and I negotiated the first comprehensive bilateral strategic framework for the relationship which we signed together at the Quai D’Orsay in January 2012. This was entitled the “Joint Statement of Strategic Partnership between France and Australia” and covered the entire field: political, defence, security, economic, energy, transport, education, science, technology, environmental, climate change, development assistance and cultural cooperation. It also covered strategic collaboration in the Indo-Pacific region well before other countries (i.e. the United States) believed they had invented the term. This agreement followed an earlier treaty I had negotiated as prime minister with the European Union providing a parallel framework for future global collaboration with Brussels. It was part of a broader vision for Australia, as a member of the G20 and as a middle power with global responsibilities where our relationship with France would become more important in the future, not less.

The point is that the Australia-France submarine contract is not just a commercial agreement. It occurs within this wider official framework. Indeed, it became the ballast of the relationship we had envisaged together back in 2012. The problem for Morrison is that his unilateral decision of 17 September to cancel the submarine project violates both the spirit and, on one reading, the letter of our Joint Declaration. Against this background, French Foreign Minister Le Drian is right when he describes Morrison’s action as “a stab in the back”.

Second, while I am not privy to the detail of the contractual agreement between France’s Naval Group and the Australian Department of Defence, it strikes me as a basic protocol that if one of the contracting parties (in this case Australia) was to fundamentally change the project specifications (i.e. from conventional to nuclear-powered subs), it would first require that party to at least notify the other party. To do otherwise would be tantamount to deceptive and misleading conduct. But it seems that the Morrison Government failed to inform Naval in advance.

This brings us to the third error on the part of the Morrison Government. If Morrison had in fact changed course from conventional to nuclear-powered submarines for good technical reasons, then why wouldn’t he re-open competitive tenders for bids from France, the UK and the United States? All three have nuclear-powered boats. All three know how to manufacture them and maintain them. Instead, Morrison decided to limit bids to the Anglosphere alone. This makes no sense in terms of getting the best value for money for the Australian taxpayer. Nor is it fair to our French strategic partners.

I have already referred to Morrison’s failure to adhere to basic diplomatic protocols in the manner in which the French government was informed of his submarine about face. Such a failure is unacceptable between adversaries let alone between allies. But beyond this, it has been Morrison’s failure to understand the wider foreign policy repercussions of his decision that is perhaps the most appalling of all. It has affected European solidarity in forming and consolidating a common strategy for dealing with the impact of China’s global and regional rise. On the eve of the next Quad Summit in Washington, it has rekindled doubts among the other members of the Quad that there is now an inner group of the US and Australia (and now prospectively the UK) and an outer group of India and Japan – doubts already debated in Delhi following America’s unceremonious exit from Afghanistan which delivered a significant strategic win to India’s principal strategic adversary Pakistan. Third, Morrison’s decision has further polarised South East Asian strategic positions on China and the United States where China has already made considerable economic and foreign policy gains. And finally, it lends grist to the mill in China’s global propaganda apparatus that the public political theatre of the submarine announcement with the US and the UK is all about one single strategic objective: containment.

As a former prime minister, I deeply regret the way this decision has been handled by the current Australian government. The cavalier manner in which it has been done does not represent the views of the vast majority of Australians towards France. There may be important strategic or technical reasons to change course with the type of submarines that Australia now needs to build. But none of these justify the treatment of France in this way. These are major matters of state. And they will be deliberated on by the Australian people soberly during our upcoming national elections.

Article originally published in French in Le Monde on 22 September 2021.

Picture: Adam Taylor / PMO



The post Le Monde: Canberra’s decision on submarines deepens strategic tensions in Southeast Asia appeared first on Kevin Rudd.

Worse Than FailureSome Version of a Process

When you're a large company, like Oracle, you can force your customers to do things your way. "Because we said so," is something a company like that can get away with. Conversely, a small company is more restricted- you have to work hard to keep your customers happy.

When Doreen joined Initech, they were a small company with a long history and not too many customers. In the interests of keeping those customers happy, each customer got their own custom build of the software, with features tailored to their specific needs. So, Initrode was on "INITRODE.9.1", while the Soggy Beans coffee shop chain was on "SOGGY.5.2". Managing those versions was a pain, but it was Doreen's boss, Elliot, who ensured that pain escalated to anguish.

Elliot was the one who laid out their software development and source control processes. It was the Official Process™, and Elliot was the owner of the Official Process™. The Official Process™ was the most elegant solution Elliot could imagine: each version lived in its own independent Subversion repository. Changes were synced between those repositories manually. Releases were also manual, and rare. Automated testing was non-existent.

Upper management may not have understood the problems this created, but they knew that their organization was slow to release new features, and that customers were getting frustrated with poor response times to bugs and feature requests. So they went to the list of buzzwords and started pushing for "Agile" and "DevOps" and "Continuous Delivery".

Suddenly, Doreen and the other developers were given a voice. They pushed to adopt Git, over Subversion. "I've looked into this," Elliot said, "and it looks like Git uses GitHub and stores our code off-site. I don't trust things that are off-site. I want our code stored here!"

"No, you don't have to use GitHub," Doreen explained. "We can host our own server- I've been playing around with GitLab, which I think will fit our needs well."

Elliot grumbled and wandered off.

Doreen took a few hours to set up a GitLab instance, and migrate their many versions of the same code into something approaching a sane branching structure. It'd be a lot of work before the history actually made any sense, but it let her show off some of the benefits, such as automatically building and running the handful of unit tests she whipped up on commits to certain branches.

"That's fine," Elliot said, "but where's the code?"

"What… do you mean? It's right here."

"That's the code for Soggy Beans, where's the Initrode version?" Elliot demanded.

Doreen switched branches. "Right here."

"But where did the Soggy Beans version go?!" Elliot was getting angry.

"I… don't understand? It's stored in Git. We're just changing branches."

"I don't like this magical nonsense. I want to see our code in folders, as files, not this invisible shapeshifting stuff! I don't want our code where I can't see it!"

Doreen attempted to explain what branches were, about how Git stored files and tracked versions, but Elliot was already storming off to raise hell with the one upper manager who still listened to him. And a few days later, Elliot came back with a plan.

"So, since we're migrating to Git," Elliot explained to the team, "that poses a few challenges, in terms of the features it lacks. So I've written a script that will supplement it."

The script in question enumerated all the branches and tags in the repository, checked each one out in turn, then copied it to another folder. "Once you've run this, you can navigate to the correct folder and make your changes there. If you need to make changes that impact multiple customers, you can repeat those changes in each folder. Then you can run this second script, which will copy the changed folders back to the repository and commit it." This was also how code would be deployed: explode the repository out into folders, and copy the appropriate folder to the server.
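Elliot's actual script is lost to history, but as a rough sketch (all names here are invented, and this version uses git archive where his reportedly did a checkout per branch), the "explode" half of the Official Process™ might have looked something like this:

```shell
# Hypothetical reconstruction of the "explode" script: dump every
# branch's tree into its own folder so the code is visible "as files".
# Expects absolute paths for both arguments.
explode_repo() {
    repo=$1
    out=$2
    (
        cd "$repo" || return 1
        for branch in $(git for-each-ref --format='%(refname:short)' refs/heads); do
            mkdir -p "$out/$branch"
            # git archive exports the branch tip without touching the
            # working copy; Elliot's version checked each branch out in turn
            git archive "$branch" | tar -x -C "$out/$branch"
        done
    )
}
```

Going the other way (copying the edited folders back over the working tree and committing the result wholesale) is what produced commits that appeared to touch every file, since the copy step rewrote the entire tree regardless of what actually changed.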

At first, Doreen figured she could just ignore the script and do things the correct way. But there were a few problems with that. First, Elliot's script created commits that made it look like every file had been changed on every commit, making history meaningless. Second, it required you to be very precise about which branches/versions you were working on, and it was easy to make a mistake and commit changes from one branch into another, which was a mistake Elliot made frequently. He blamed Git for this, obviously.

But third, and most significantly: Elliot's script wasn't a suggestion. It was the Official Process™, and every developer was required to use it. Oh, you could try and "cheat", but your commits would be clean, clear, and comprehensible, which was a dead giveaway that you weren't following the Official Process™.

Doreen left the company a short time later. As far as anyone knows, Elliot still uses his Official Process™.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianNorbert Preining: TeX Live 2021 for Debian

The release of TeX Live 2021 is already half a year behind us, but due to waiting for the Debian/Bullseye release, we haven’t updated TeX Live in Debian for quite some time. But the waiting is over: today I uploaded the first packages of TeX Live 2021 to unstable.

All the changes listed in the upstream release blog apply also to the Debian packages.

I expect a few hiccups, but it is good to see it out of the door finally.



Planet DebianClint Adams: Outrage culture killed my dog

Why can't Debian have embarrassing flamewars like this thread?

Posted on 2021-09-21
Tags: barks

Kevin RuddWall Street Journal: What Explains Xi’s Pivot to the State?

Written by Kevin Rudd.

Something is happening in China that the West doesn’t understand. In recent months Beijing killed the country’s $120 billion private tutoring sector and slapped hefty fines on tech firms Tencent and Alibaba. Chinese executives have been summoned to the capital to “self-rectify their misconduct” and billionaires have begun donating to charitable causes in what President Xi Jinping calls “tertiary income redistribution.” China’s top six technology stocks have lost more than $1.1 trillion in value in the past six months as investors scramble to figure out what is going on.

Why would China, which has engaged in fierce economic competition with the West in recent years, suddenly turn on its own like this? While many in the U.S. and Europe may see this as a bewildering series of events, there is a common “red thread” linking all of it. Mr. Xi is executing an economic pivot to the party and the state based on three driving forces: ideology, demographics and decoupling.

Despite the market reforms of the past four decades, ideology still matters to the Chinese Communist Party. At the 19th Party Congress in 2017, Mr. Xi declared that China had entered into a “new era” and that the “principal contradiction” facing the party had changed. Marxist-Leninist language seems arcane to foreigners. A “contradiction” is the interaction between progressive forces pushing toward socialism and the resistance to that change. It is therefore the shifting definition of the party’s principal contradiction that ultimately determines the country’s political direction.

In 1982, Deng Xiaoping redefined the party’s principal contradiction away from Maoist class struggle and toward untrammeled economic development. For the next 35 years, this ideological course set the political parameters for what became the period of “reform and opening.” In 2017 Mr. Xi declared the new contradiction was “between unbalanced and inadequate development” and the need to improve people’s lives.

This might seem a subtle change, but its ideological significance is profound. It authorizes a more radical approach to resolving problems of capitalist excess, from income inequality to environmental pollution. It’s also a philosophy that supports broader forms of state intervention in the Chinese economy—a change that has only become fully realized in the past year.

Demographics is also driving Chinese economic policy to the left. The May 2021 census revealed birthrates had fallen sharply to 1.3—lower than in Japan and the U.S. China is aging fast. The working-age population peaked in 2011 and the total population may now be shrinking. For Mr. Xi, this presents the horrifying prospect that China may grow old before it grows rich. He may not therefore be able to realize his dream of making China a wealthy, strong, and global great power by the centenary of the formation of the People’s Republic in 2049.

After a long period of engagement, China now seeks selectively to decouple its economy from the West and present itself as a strategic rival. In 2019 Mr. Xi began talking about a period of “protracted struggle” with America that would extend through midcentury. Lately Mr. Xi’s language of struggle has grown more intense. He has called on cadres to “discard wishful thinking, be willing to fight, and refuse to give way” in preserving Chinese interests.

The forces of ideology, demographics and decoupling have come together in what Mr. Xi now calls his “New Development Concept”—the economic mantra combining an emphasis on greater equality through common prosperity, reduced vulnerability to the outside world and greater state intervention in the economy. A “dual circulation economy” seeks to reduce dependency on exports by making Chinese domestic consumer demand the main driver of growth, while leveraging the powerful gravitational pull of China’s domestic market to maintain international influence. Underpinning this logic is the recent resuscitation of an older Maoist notion of national self-reliance. It reflects Mr. Xi’s determination for Beijing to develop firm domestic control over the technologies that are key to future economic and military power, all supported by independent and controllable supply chains.

Much of the party’s recent crackdown against the Chinese private sector can be understood through this wider lens of Mr. Xi’s “new development concept.” When regulators cracked down on private tutoring it was because many Chinese feel the current economic burden of having even one child is simply too high. When regulators scrutinized data practices, or suspended initial public offerings abroad, it was out of concern about China’s susceptibility to outside pressure. And when cultural regulators banned “effeminate sissies” from television, told Chinese boys to start manning up instead of playing videogames, and issued new school textbooks snappily titled “Happiness Only Comes Through Struggle,” it was all in service of Mr. Xi’s desire to win a generational contest against cultural dependency on the West.

In his overriding quest for re-election to a record third term at the 20th Party Congress in fall 2022, Mr. Xi has apparently chosen to put the solidification of his own domestic political standing ahead of China’s unfinished economic reform project. While the politics of his pivot to the state may make sense internally, if Chinese growth begins to stall Mr. Xi may discover he had the underlying economics very wrong. And in China, as with all countries, ultimate political legitimacy and sustainability will depend on the economy.

Originally Published in the Wall Street Journal on 21 September 2021.

Photo: David Klein WSJ

The post Wall Street Journal: What Explains Xi’s Pivot to the State? appeared first on Kevin Rudd.

Planet DebianRussell Coker: Links September 2021

Matthew Garrett wrote an interesting and insightful blog post about the license of software developed or co-developed by machine-learning systems [1]. One of his main points is that people in the FOSS community should aim for less copyright protection.

The USENIX ATC ’21/OSDI ’21 Joint Keynote Address titled “It’s Time for Operating Systems to Rediscover Hardware” has some insightful points to make [2]. Timothy Roscoe makes some incendiary points but backs them up with evidence. Is Linux really an OS? I recommend that everyone who’s interested in OS design watch this lecture.

Cory Doctorow wrote an interesting set of 6 articles about Disneyland, ride pricing, and crowd control [3]. He proposes some interesting ideas for reforming Disneyland.

Benjamin Bratton wrote an insightful article about how philosophy failed in the pandemic [4]. He focuses on the Italian philosopher Giorgio Agamben who has a history of writing stupid articles that match Qanon talking points but with better language skills.

Arstechnica has an interesting article about penetration testers extracting an encryption key from the bus used by the TPM on a laptop [5]. It’s not a likely attack in the real world as most networks can be broken more easily by other methods. But it’s still interesting to learn about how the technology works.

The Portalist has an article about David Brin’s Startide Rising series of novels and his thoughts on the concept of “Uplift” (which he denies inventing) [6].

Jacobin has an insightful article titled “You’re Not Lazy — But Your Boss Wants You to Think You Are” [7]. Making people identify as lazy is bad for them and bad for getting them to do work. But this is the first time I’ve seen it described as a facet of abusive capitalism.

Jacobin has an insightful article about free public transport [8]. Apparently there are already many regions that have free public transport (Tallinn, the capital of Estonia, being one example). Fare-free public transport lets bus drivers concentrate on driving rather than taking fares, removes the need for ticket inspectors, and generally provides a better service. It also allows passengers to board buses and trams faster (reducing traffic congestion), encourages more people to use public transport instead of driving, and reduces road maintenance costs.

Interesting research from Israel about bypassing facial ID [9]. Apparently they can make a set of 9 images that can pass for over 40% of the population. I didn’t expect facial recognition to be an effective form of authentication, but I didn’t expect it to be that bad.

Edward Snowden wrote an insightful blog post about types of conspiracies [10].

Kevin Rudd wrote an informative article about Sky News in Australia [11]. We need to have a Royal Commission now before we have our own 6th Jan event.

Steve from Big Mess O’ Wires wrote an informative blog post about USB-C and 4K 60Hz video [12]. Basically you can’t have a single USB-C hub do 4K 60Hz video and be a USB 3.x hub unless you have compression software running on your PC (slow and only works on Windows), or have DisplayPort 1.4 or Thunderbolt (both not well supported). All of the options are not well documented on online store pages so lots of people will get unpleasant surprises when their deliveries arrive. Computers suck.

Steinar H. Gunderson wrote an informative blog post about GaN technology for smaller power supplies [13]. A 65W USB-C PSU that fits the usual “wall wart” form factor is an interesting development.

Worse Than FailureCodeSOD: Globalism

When Daniel was young, he took one of those adventure trips that included a multi-day hike through a rainforest. At the time, it was one of the most difficult and laborious experiences he'd ever had.

Then he inherited an antique PHP 5.3 application, written by someone who names variables like they're spreadsheet columns: $ag, $ah, and $az are all variables which show up. Half of those are globals. The application is "modularized" into many, many PHP files, but this ends up creating include chains tens of files deep, which makes it nigh impossible to actually understand.

But then there are lines like this one:

function drdtoarr() { global $arr; return $arr; }

This function uses a global $arr variable and… returns it. That's it, that's the function. This function is used everywhere, especially the variable $arr, which is one of the most popular globals in the application. There is no indication anywhere in the code about what drd stands for, what it's supposed to mean, or why it may sometimes be stored in $arr.

While this function seems useless, I'd argue that it has a vague, if limited point. $arr is a global variable that might be storing wildly different things during the lifecycle of the application. drdtoarr at least tells us that we expect to see drd in there.

Now, if only something would tell us what drd actually means, we'd be on our way.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianNorbert Preining: Plasma 5.23 Anniversary Edition Beta for Debian available for testing

Last week saw the release of the first beta of Plasma 5.23 Anniversary Edition. Work behind the scenes to get this release into Debian as soon as possible has progressed well.

Starting with today, we provide binaries of Plasma 5.23 for Debian stable (bullseye), testing, and unstable, in each case for three architectures: amd64, i386, and aarch64.

To test the current beta, please add

deb ./

to your apt sources (replacing DISTRIBUTION with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable). For further details see this blog.


This is a beta release, and let me recall the warning from the upstream release announcement:

DISCLAIMER: This is beta software and is released for testing purposes. You are advised to NOT use Plasma 25th Anniversary Edition Beta in a production environment or as your daily desktop. If you do install Plasma 25th Anniversary Edition Beta, you must be prepared to encounter (and report to the creators) bugs that may interfere with your day-to-day use of your computer.


Enjoy, and please report bugs!

David BrinMore (biological) science! Human origins, and lots more...

Sorry for the delay this time, but I'll compensate with new insights into where we came from... 

Not everyone agrees how to interpret the “Big Bang” of human culture that seems to have happened around 40,000 years ago (that I describe and discuss in Existence), a relatively rapid period when we got prolific cave art, ritual burials, sewn clothing and a vastly expanded tool kit… and lost our Neanderthal cousins for debatable reasons. Some call the appearance of a 'rapid shift' an artifact of sparse paleo sampling. V. S. Ramachandran agrees with me that some small inner (perhaps genetic) change had non-linear effects by allowing our ancestors to correlate and combine many things they were already doing separately, with brains that had enlarged to do all those separate things by brute force. Ramachandran suspects it involved “mirror neurons” that allow some primates to envision internally the actions of others. 


My own variant is “reprogrammability…” a leap to a profoundly expanded facility to program our thought processes anew in software (culture) rather than firmware or even hardware. Supporting this notion is how rapidly there followed a series of later “bangs” that led to staged advances in agriculture (with the harsh pressures that came with the arrival of new diets, beer and kings)… then literacy, empires, and (shades of Julian Jaynes!) new kinds of conscious awareness… all the way up to the modern era’s harshly decisive conflict between enlightenment science and nostalgic romanticism.

I doubt it is as simple as "Mirror Neurons." But they might indeed have played a role. The original point that I offered, even back in the nineties, was that we appear to have developed a huge brain more than 200,000 years ago because only thus could we become sufficiently top-predator to barely survive. If we had had reprogrammability and resulting efficiencies earlier, ironically, we could have achieved that stopping place more easily, with a less costly brain... and thus halted the rapid advance. 

It was a possibly-rare sequence... achieving efficiency and reprogrammability AFTER the big brain... that led to a leap in abilities that may be unique in the galaxy. Making it a real pisser that many of our human-genius cousins quail back in terror from taking the last steps to decency and adulthood... and possibly being the rescuers of a whole galaxy.
== And Related ==

There’s much ballyhoo that researchers found that just 1.5% to 7% of the human genome is unique to Homo sapiens, free from signs of interbreeding or ancestral variants.  Only when you stop and think about it, this is an immense yawn.  So Neanderthals and Denisovans were close cousins. Fine. Actually, 1.5% to 7% is a lot!  More than I expected, in fact.


Much is made of the human relationship with dogs…  how that advantage may have helped relatively weak and gracile humans re-emerge from Africa 60,000 years ago or so… about 50,000 years after sturdy-strong Neanderthals kicked us out of Eurasia on our first attempt. But wolves might have already been ‘trained’ to cooperate with those outside their species and pack… and trained by… ravens! At minimum it’s verified that the birds will cry out and call a pack to a recent carcass so the ‘tooled’ wolves can open it for sharing. What is also suspected is that ravens will summon a pack to potential prey animals who are isolated or disabled, doing for the wolves what dogs later did for human hunting bands.


== Other biological news! ==


A new carnivorous plant - which traps insects using sticky hairs - has recently been identified in bogs of the U.S. Pacific Northwest.


Important news in computational biology. Deep learning systems can now solve the protein folding problem. "Proteins start out as a simple ribbon of amino acids, translated from DNA, and subsequently folded into intricate three-dimensional architectures. Many protein units then further assemble into massive, moving complexes that change their structure depending on their functional needs at a given time. And mis-folded proteins can be devastating—causing health problems from sickle cell anemia and cancer, to Alzheimer’s disease."


"Development of Covid-19 vaccines relied on scientists parsing multiple protein targets on the virus, including the spike proteins that vaccines target. Many proteins that lead to cancer have so far been out of the reach of drugs because their structure is hard to pin down."


The microbial diversity in the guts of today’s remaining hunter-gatherers far exceeds that of people in industrial societies, and researchers have linked low diversity to higher rates of “diseases of civilization,” including diabetes, obesity, and allergies. But it wasn't clear how much today's nonindustrial people have in common with ancient humans. Until bio archaeologists started mining 1000 year old poop -  ancient coprolites preserved by dryness and stable temperatures in three rock shelters in Mexico and the southwestern United States.

The coprolites yielded 181 genomes that were both ancient and likely came from a human gut. Many resembled those found in nonindustrial gut samples today, including species associated with high-fiber diets. Bits of food in the samples confirmed that the ancient people's diet included maize and beans, typical of early North American farmers. Samples from a site in Utah suggested a more eclectic, fiber-rich “famine diet” including prickly pear, ricegrass, and grasshoppers. Notably lacking -- markers for antibiotic resistance. And they were notably more diverse, including dozens of unknown species. “In just these eight samples from a relatively confined geography and time period, we found 38% novel species.”

Planet DebianReproducible Builds (diffoscope): diffoscope 185 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 185. This version includes the following changes:

[ Mattia Rizzolo ]
* Fix the autopkgtest in order to fix testing migration: the androguard
  Python module is not in the python3-androguard Debian package.
* Ignore a warning in the tests from the h5py package that doesn't concern
  diffoscope.

[ Chris Lamb ]
* Bump Standards-Version to 4.6.0.

You can find out more by visiting the project homepage.


Cryptogram Alaska’s Department of Health and Social Services Hack

Apparently, a nation-state hacked Alaska’s Department of Health and Social Services.

Not sure why Alaska’s Department of Health and Social Services is of any interest to a nation-state, but that’s probably just my failure of imagination.

Krebs on SecurityDoes Your Organization Have a Security.txt File?

It happens all the time: Organizations get hacked because there isn’t an obvious way for security researchers to let them know about security vulnerabilities or data leaks. Or maybe it isn’t entirely clear who should get the report when remote access to an organization’s internal network is being sold in the cybercrime underground.

In a bid to minimize these scenarios, a growing number of major companies are adopting “Security.txt,” a proposed new Internet standard that helps organizations describe their vulnerability disclosure practices and preferences.

An example of a security.txt file.

The idea behind Security.txt is straightforward: The organization places a file called security.txt in a predictable place on its website. What’s in the security.txt file varies somewhat, but most include links to information about the entity’s vulnerability disclosure policies and a contact email address.
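The format itself is deliberately simple: plain text, one "Field: value" pair per line. As an illustration (the field names below come from the draft specification; the domain and values are invented), a minimal security.txt might read:

```
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Policy: https://example.com/responsible-disclosure
```

The draft also defines a canonical location under /.well-known/security.txt, which is where automated checkers typically look first.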

The security.txt file made available by USAA, for example, includes links to its bug bounty program; an email address for disclosing security related matters; its public encryption key and vulnerability disclosure policy; and even a link to a page where USAA thanks researchers who have reported important cybersecurity issues.

Other security.txt disclosures are less verbose, as in the case of HCA Healthcare, which lists a contact email address, and a link to HCA’s “responsible disclosure” policies. Like USAA and many other organizations that have published security.txt files, HCA Healthcare also includes a link to information about IT security job openings at the company.

Having a security.txt file can make it easier for organizations to respond to active security threats. For example, just this morning a trusted source forwarded me the VPN credentials for a major clothing retailer that were stolen by malware and made available to cybercriminals. Finding no security.txt file at the retailer’s site using (which checks a domain for the presence of this contact file), KrebsonSecurity sent an alert to its “security@” email address for the retailer’s domain.

Many organizations have long unofficially used (if not advertised) the email address security@[companydomain] to accept reports about security incidents or vulnerabilities. Perhaps this particular retailer also did so at one point, however my message was returned with a note saying the email had been blocked. KrebsOnSecurity also sent a message to the retailer’s chief information officer (CIO) — the only person in a C-level position at the retailer who was in my immediate LinkedIn network. I still have no idea if anyone has read it.

Although security.txt is not yet an official Internet standard as approved by the Internet Engineering Task Force (IETF), its basic principles have so far been adopted by at least eight percent of the Fortune 100 companies. According to a review of the domain names for the latest Fortune 100 firms via, those include Alphabet, Amazon, Facebook, HCA Healthcare, Kroger, Procter & Gamble, USAA and Walmart.

There may be another good reason for consolidating security contact and vulnerability reporting information in one, predictable place. Alex Holden, founder of the Milwaukee-based consulting firm Hold Security, said it’s not uncommon for malicious hackers to experience problems getting the attention of the proper people within the very same organization they have just hacked.

“In cases of ransom, the bad guys try to contact the company with their demands,” Holden said. “You have no idea how often their messages get caught in filters, get deleted, blocked or ignored.”


So if security.txt is so great, why haven’t more organizations adopted it yet? It seems that setting up a security.txt file tends to invite a rather high volume of spam. Most of these junk emails come from self-appointed penetration testers who — without any invitation to do so — run automated vulnerability discovery tools and then submit the resulting reports in hopes of securing a consulting engagement or a bug bounty fee.

This dynamic was a major topic of discussion in these Hacker News threads on security.txt, wherein a number of readers related their experience of being so flooded with low-quality vulnerability scan reports that it became difficult to spot the reports truly worth pursuing further.

Edwin “EdOverflow” Foudil, the co-author of the proposed notification standard, acknowledged that junk reports are a major downside for organizations that offer up a security.txt file.

“This is actually stated in the specification itself, and it’s incredibly important to highlight that organizations that implement this are going to get flooded,” Foudil told KrebsOnSecurity. “One reason bug bounty programs succeed is that they are basically a glorified spam filter. But regardless of what approach you use, you’re going to get inundated with these crappy, sub-par reports.”

Often these sub-par vulnerability reports come from individuals who have scanned the entire Internet for one or two security vulnerabilities, and then attempted to contact all vulnerable organizations at once in some semi-automated fashion. Happily, Foudil said, many of these nuisance reports can be ignored or grouped by creating filters that look for messages containing keywords commonly found in automated vulnerability scans.

Foudil said despite the spam challenges, he’s heard tremendous feedback from a number of universities that have implemented security.txt.

“It’s been an incredible success with universities, which tend to have lots of older, legacy systems,” he said. “In that context, we’ve seen a ton of valuable reports.”

Foudil says he’s delighted that eight of the Fortune 100 firms have already implemented security.txt, even though it has not yet been approved as an IETF standard. When and if security.txt is approved, he hopes to spend more time promoting its benefits.

“I’m not trying to make money off this thing, which came about after chatting with quite a few people at DEFCON [the annual security conference in Las Vegas] who were struggling to report security issues to vendors,” Foudil said. “The main reason I don’t go out of my way to promote it now is because it’s not yet an official standard.”

Has your organization considered or implemented security.txt? Why or why not? Sound off in the comments below.

Planet DebianJamie McClelland: Putty Problems

I upgraded my first servers from buster to bullseye over the weekend and it went very smoothly, so big thank you to all the debian developers who contributed your labor to the bullseye release!

This morning, however, I hit a snag when the first windows users tried to login. It seems like a putty bug (see update below).

First, the user received an error related to algorithm selection. I didn’t record the exact error and simply suggested that the user upgrade.

Once the user was running the latest version of putty (0.76), they received a new error:

Server refused public-key signature despite accepting key!

I turned up debugging on the server and recorded:

Sep 20 13:10:32 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:32 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:32 container001 sshd[1647842]: Postponed publickey for XXXXXXXXX from port 63579 ssh2 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: userauth-request for user XXXXXXXXX service ssh-connection method publickey [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: attempt 2 failures 0 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: temporarily_use_uid: 1000/1000 (e=0/0)
Sep 20 13:10:33 container001 sshd[1647842]: debug1: trying public key file /home/XXXXXXXXX/.ssh/authorized_keys
Sep 20 13:10:33 container001 sshd[1647842]: debug1: fd 5 clearing O_NONBLOCK
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: matching key found: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
Sep 20 13:10:33 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:33 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:33 container001 sshd[1647842]: debug1: auth_activate_options: setting new authentication options
Sep 20 13:10:33 container001 sshd[1647842]: Failed publickey for XXXXXXXXX from port 63579 ssh2: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:39 container001 sshd[1647514]: debug1: Forked child 1648153.
Sep 20 13:10:39 container001 sshd[1648153]: debug1: Set /proc/self/oom_score_adj to 0
Sep 20 13:10:39 container001 sshd[1648153]: debug1: rexec start in 5 out 5 newsock 5 pipe 8 sock 9
Sep 20 13:10:39 container001 sshd[1648153]: debug1: inetd sockets after dupping: 4, 4

The server log seems to agree with the message returned to the client: first the key was accepted, then it was refused.

We re-generated a new key. We turned off the Windows firewall. We deleted all the PuTTY settings via the Windows registry and re-set them from scratch.

Nothing seemed to work. Then another Windows user reported no problem (and that user was running PuTTY version 0.74). So the first user downgraded to 0.74 and everything worked fine.


Wow, very impressed with the responsiveness of the PuTTY devs!

And, who knew that PuTTY is available in Debian??

Long story short: PuTTY version 0.76 works on Linux and, from what I can tell, works for everyone except my one user. Maybe it's their provider doing some filtering? Maybe a nuance of their version of Windows?
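
For anyone chasing a similar ghost: the tell-tale pattern in the server log above is the same key fingerprint being accepted and then refused within one authentication exchange. A small, hypothetical helper (my own sketch, not part of any sshd tooling) that pulls those events out of a debug log:

```python
import re

# Abridged sshd debug output, modeled on the log excerpt above.
LOG = """\
sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at authorized_keys:6
sshd[1647842]: Postponed publickey for user from port 63579 ssh2 [preauth]
sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at authorized_keys:6
sshd[1647842]: Failed publickey for user from port 63579 ssh2: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
"""

# Match accept/fail events and capture the key fingerprint on the same line.
EVENT_RE = re.compile(r'(Accepted key|Failed publickey).*?(SHA256:\S+)')

def key_events(log: str):
    """Return (event, fingerprint) pairs in the order they appear."""
    return [(m.group(1), m.group(2)) for m in EVENT_RE.finditer(log)]

for event, fp in key_events(LOG):
    print(event, fp)
```

If the same fingerprint shows up as both "Accepted key" and "Failed publickey", the key itself is fine and the disagreement is over the signature, which matches the client-side "refused public-key signature despite accepting key" message.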

Planet DebianAndy Simpkins: COVID-19

Nearly 4 weeks after contracting COVID-19 I am finally able to return to work…

Yes, I have had both jabs (my 2nd dose was back in June), and this knocked me for six. I spent most of the time in bed, and only started to get up and about 10 days ago.

I passed this on to both my wife and daughter (my wife has also been double-jabbed); fortunately they didn't get it as badly as me and have been back at work / school for the last week. I also passed it on to a friend at the UK Debian BBQ, hosted once again by Sledge and Randombird, before I started showing symptoms. Fortunately (after a lot of PCR tests for attendees) it doesn't look like I passed it to anyone else.

I wouldn’t wish this on anyone.

I went on holiday back in August (still in England), thinking that having had both jabs we would be fine. We stayed in self-catering accommodation and spent our time outside: we visited open-air museums, walked around gardens, etc. However, we did eat out in relatively empty pubs and restaurants.

And yes we did all have face masks on when we went indoors (although obviously we removed them whilst eating).

I guess that is when I caught this, but have no idea exactly when or where.

Even after vaccination, it is still possible to both catch and spread this virus. Fortunately having been vaccinated my resulting illness was (statistically) less bad than it would otherwise have been.

I dread to think how bad I would have been if I had not already been vaccinated; I suspect that I would have ended up in ICU. I am still very tired, and have been told it may take many more weeks to get back to my former self. Other than being overweight, prior to this I was in good health.

If you are reading this and have NOT yet had a vaccine and one is available for you, please, please get it done.

Kevin RuddNPR: Kevin Rudd Discusses Consequences of U.S.-Australian Sub Deal


So what are the implications of that nuclear submarine deal we mentioned that has upset France? Kevin Rudd is the former prime minister of Australia, which is buying nuclear submarines from the United States. He is also the president of the Asia Society Policy Institute and is on the line from Australia. Welcome back.

KEVIN RUDD: Good to be with you.

INSKEEP: Do you have any idea why it would be that neither your government nor the U.S. government nor the U.K. let France know this deal was happening?

RUDD: Let me put it this – to you delicately. I think there have been finer moments in the history of Anglo-American and Anglo-Australian diplomacy. Leaving our French allies in the dark, frankly, was just dumb. And frankly, if there was a technical reason for changing the Australian submarine order from conventional boats to nuclear-powered boats, which is a big decision for this country, then surely the French, as a nuclear submarine country themselves, could have been also extended the opportunity to retender for what at present is a $90 billion price project. So the French have every right to be annoyed by what has happened. And I think this could have been handled infinitely better.

INSKEEP: And you don’t know why they just didn’t? I mean, did they just forget, or did they think it was smarter to do it this way, somehow?

RUDD: I presume that part of the politics of this was driven by – from the Australian end. Australia is getting close to a national election. And the conservative government of Australia at present is trying to muscle up and appear to be hairy chested on the question of China, taking an extraordinary decision, from a local perspective, to go from conventional submarines to nuclear-powered submarines, when this country doesn’t have its own civil nuclear program is a very large leap into the dark. I presume they wish to have the element of surprise in it. And their principal objective domestically in Australia was to catch their political opponents offside. The Australian Labor Party, my party, is currently well ahead in the polls.

INSKEEP: I guess we should just clarify – the difference between a conventional submarine and a nuclear-powered submarine is how long it can stay underwater. A nuclear submarine can stay below and stay hidden for a much longer period. Can you just give us an idea of – what is the point? Why does Australia need that capability?

RUDD: Well, these are the questions which now surface in the public debate here as to why the sudden change. There are really three questions which come to the surface. One is a nuclear-powered submarine’s supposed to be quieter. That’s less detectable, in terms of what submariners would describe as the signature of a submarine. Now, the conventional wisdom in the past is that conventionally powered submarines are, in fact, quieter. But now that advice seems to be changing. But we don’t have consensus on that. The second is how often you need to snorkel – that is, come to the surface – and become more detectable because of that. But the third is a question of interoperability, and that is, if you’re going to have eight or 12 Australian nuclear-powered submarines, are you, in effect, turning them into a subunit of the United States Navy? Or is it going to be still an autonomous Royal Australian Navy? ‘Cause we can’t service nuclear-powered vessels ourselves ’cause we don’t have a domestic nuclear program. These are the three big questions which need to be clarified from the Australian government.

INSKEEP: Oh, that’s interesting. So Australia becomes, in a way, more dependent on the United States, which of course has a fully developed nuclear program and a lot of experience with nuclear subs. Let me ask about where all this is heading, though, because when we talk about Australia buying weapons and say it has something to do with countering China, you begin imagining some scenario where the United States, the U.K. and Australia would somehow all end up in a war against China, which, given that China has nuclear weapons, is almost unthinkable. Is that where this is headed or what people at least want to be prepared for?

RUDD: Well, it’s – the core structural factor at work here, of course, as your question rightly points to, is the rise of China. And China, bit by bit – economically, militarily, strategically, technologically – is changing the nature of the balance of power between itself and the United States, in East Asia and in the West Pacific. That’s been going on for decades. So the real question for the U.S. and its allies – its allies in Asia and its allies in Europe – is how then best to respond to it. Now, of course, there are two or three bits to that. One, of course, is to maintain or to sustain or to enhance that military balance of power, which has been slowly moving in China’s direction for some time.

The second, however, is what I describe as the relative diplomatic footprint in this part of the world by the U.S. and China, where, frankly – in Southeast Asia – particularly during the Trump administration, the United States has been missing in action. But the big one is this. It’s trade investment of the economy, where all the economies of East Asia and the West Pacific now have China as their No. 1 economic partner – and the United States no longer. So this goes to the question of, will the U.S. re-engage economically? Will the U.S., for example, reconsider its accession to the Trans-Pacific Partnership?

INSKEEP: Oh, yeah.

RUDD: Questions such as that. If you’re not in the economic game, then frankly, the general strategy towards China is problematic.

INSKEEP: Do Australians view China roughly as the United States does?

RUDD: I think Australians have, on balance, a more mixed view of China than I find in the United States. I normally run our think tank in New York. I'm back in Australia for COVID reasons. But certainly, the changing balance of power in China's direction, the more assertive policy of Xi Jinping's administration over the last several years and the aspects of coercive commercial diplomacy against Australia…


RUDD: …Have really hardened Australian attitudes towards the People’s Republic. At the same time, you’ve got to ask yourself this question, whether it’s on submarine purchase or anything else. What is the most effective, as it were, national and allied strategy for dealing with China, not just militarily but economically and other domains as well?

INSKEEP: Former Prime Minister Kevin Rudd of Australia – it’s always a pleasure talking with you, sir. Thank you so much.

RUDD: Good to be with you.

INSKEEP: He’s also president of the Asia Society Policy Institute.

The post NPR: Kevin Rudd Discusses Consequences of U.S.-Australian Sub Deal appeared first on Kevin Rudd.

Planet DebianHolger Levsen: 20210920-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, we still have free beds

We still have some free slots and beds available for the "Debian Reunion Hamburg 2021" taking place in Hamburg at the venue of the 2018 & 2019 MiniDebConfs from Monday, Sep 27 2021 until Friday Oct 1 2021, with Sunday, Sep 26 2021 as arrival day.

So again, Debian people will meet in Hamburg. The exact format is less defined and structured than in previous years; probably we will just be hacking from Monday to Wednesday, have talks on Thursday and go on a nice day trip on Friday.

Please read and if you intend to attend, please register there. If additionally you would like to stay on site (in a single room or shared with one another person), please mail me.

I'm looking forward to this event, even though (or maybe also because) it will be much smaller than in previous years. I suppose this will lead to more personal interactions and more intense hacking, though of course it remains to be seen how this works out exactly!

Planet DebianJunichi Uekawa: podman build (user namespace) and Rename.

podman build (user namespace) and Rename. It seems that on Debian bullseye, if you run podman as a regular user, it runs in user namespace mode. That's fine, but it uses the fuse overlayfs driver. I am yet to pinpoint what is happening, but rename() is handled in a weird way; I think it's broken. os.Rename() in golang is copying the file and not deleting the original file.
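
On a filesystem that implements rename correctly, the source name must be gone after the call. A minimal probe along those lines (in Python rather than Go, purely for illustration; point `directory` at a path on the suspect overlay to test it):

```python
import os
import tempfile

def rename_removes_source(directory: str) -> bool:
    """Probe whether rename() in `directory` removes the source entry,
    as POSIX requires. On a broken overlay the old name could linger."""
    src = os.path.join(directory, "rename-probe-src")
    dst = os.path.join(directory, "rename-probe-dst")
    with open(src, "w") as f:
        f.write("payload")
    os.rename(src, dst)
    ok = (not os.path.exists(src)) and os.path.exists(dst)
    os.remove(dst)
    return ok

# On a sane filesystem (e.g. a tmpdir) this should print True.
with tempfile.TemporaryDirectory() as d:
    print(rename_removes_source(d))
```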

Worse Than FailureCodeSOD: Expiration Dates

Last week, we saw some possibly ancient Pascal code. Leilani sends us some more… modern Pascal to look at today.

This block of code comes from a developer who has… some quirks. For example, they have a very command-line oriented approach to design. This means that, even when making a GUI application, they want convenient keyboard shortcuts. So, to close a dialog, you hit "CTRL+C", because who would ever use that keyboard shortcut for any other function at all? There's no reason a GUI would use "CTRL+C" for anything but closing windows.

But that's not the WTF.

procedure TReminderService.DeactivateExternalusers;
var
  sTmp: String;
begin
  // Main Site
  if not dbcon.Connected then
    dbcon.Connect;
  if not trUpdate.Active then
    trUpdate.StartTransaction;

  qryUsersToDeactivate.Close;
  sTmp := DateTimeToStr(Now);
  sTmp := Copy(sTmp, 1, 10) + ' 00:00:00';
  qryUsersToDeactivate.SQL.Text :=
    'Select ID, "NAME", ENABLED, STATUS, SITE, EXPIRATION ' +
    'from EXTERNAL_USERS ' +
    'where ENABLED=1 and EXPIRATION<:EXPIRED';
  qryUsersToDeactivate.ParamByName('EXPIRED').AsDateTime := StrToDateTime(sTmp);
  qryUsersToDeactivate.Open;

  while not qryUsersToDeactivate.Eof do
  begin
    qryUsersToDeactivate.Edit;
    qryUsersToDeactivate.FieldByName('ENABLED').AsInteger := 0;
    qryUsersToDeactivate.Post;
    qryUsersToDeactivate.Next;
  end;

  if trUpdate.Active then
    trUpdate.Commit;

  // second Site
  // same code which does the same in another database
end;

This code queries EXTERNAL_USERS to find all the ENABLED accounts which are past their EXPIRATION date. It then loops across each row in the resulting cursor, updates the ENABLED field to 0, and then Posts that change back to the database, which performs the appropriate UPDATE. So much of this code could be replaced with a much simpler, and faster: UPDATE EXTERNAL_USERS SET ENABLED = 0 WHERE ENABLED = 1 AND EXPIRATION < CURRENT_DATE.

But then we wouldn't have an excuse to do all sorts of string manipulation on dates to munge together the current date in a format which works for the database- except Leilani points out that the way this string munging actually happens means "that only works when the system uses the German date format." Looking at this code, I'm not entirely sure why that is, but I assume it's buried in those StrToDateTime/DateTimeToStr functions.
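
The locale trap is easy to reproduce outside Pascal: format a date under one convention, parse it back under another, and the round trip silently lands on a different day. A quick Python stand-in for the DateTimeToStr/StrToDateTime pair (the specific formats are my assumption, for illustration only):

```python
from datetime import datetime

d = datetime(2021, 9, 2)

# Day-first ("German") formatting, matched by a day-first parse: fine.
s = d.strftime("%d.%m.%Y")
roundtrip_de = datetime.strptime(s, "%d.%m.%Y")

# The same value formatted month-first but parsed day-first silently
# yields February 9th instead of September 2nd. With a day > 12 the
# parse would raise instead, so the bug can hide for weeks at a time.
s_us_style = d.strftime("%m/%d/%Y")
wrong = datetime.strptime(s_us_style, "%d/%m/%Y")

print(roundtrip_de.date(), wrong.date())
```

Passing `Now` directly to `AsDateTime` (no string round trip at all) avoids the problem entirely, which is what the single-UPDATE version does implicitly.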

Given that they call qryUsersToDeactivate.Close at the top, this implies that they don't close it when they're done, which tells us that this opens a cursor and just leaves it open for some undefined amount of time. It's possible that the intended "close at the end" was just elided by the submitter, but the fact that it might be open at the top tells us that even if they do close it, they don't close it reliably enough to know that it's closed at the start.

And finally, for someone who likes to break the "copy text" keyboard shortcut, this code repeats itself. While the details have been elided by the submitter // same code which does the same in another database tells us all we need to know about what comes next.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Planet DebianBen Hutchings: Debian LTS work, August 2021

In August I was assigned 13.25 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from earlier months. I worked 1.25 hours and will carry over the remainder.

I attended an LTS team meeting, and wrote my report for July 2021, but did not work on any updates.


Planet DebianMike Gabriel: X2Go, Remmina and X2GoKdrive

In this blog post, I will cover a few related but also different topics around X2Go - the GNU/Linux based remote computing framework.

Introduction and Catch Up

For those who haven't come across X2Go so far: with X2Go [0] you can log into remote GNU/Linux machines graphically and launch headless desktop environments, seamless/published applications, or access an already running desktop session (on a local Xserver or running as a headless X2Go desktop session) via X2Go's session shadowing / mirroring feature.

Graphical backend: NXv3

For several years, there was only one graphical backend available in X2Go, the NXv3 software. In NXv3, you have a headless or nested (it can do both) Xserver that has some remote magic built in and is able to transfer the Xserver's graphical data to a remote client (NX proxy). Over the wire, the NX protocol allows for data compression (JPEG, PNG, etc.) and combines it with bitmap caching, so that the overall result is a fast and responsive desktop experience even on high-latency and low-bandwidth connections. This especially applies to X desktop environments that use many native X protocol operations for drawing windows and widgets onto the screen. The more bitmaps involved (e.g. in applications with client-side rendering of window controls and such), the worse the quality of the session experience.

The current main maintainer of NXv3 (aka nx-libs [1]) is Ulrich Sibiller. Uli has my and the X2Go community's full appreciation, admiration and gratitude for all the work he does on nx-libs, constantly improving NXv3 without breaking compatibility with legacy use cases (yes, FreeNX is still alive, by the way).

NEW: Alternative Graphical Backend: X2Go Kdrive

Over the past 1.5 years, Oleksandr Shneyder (Alex), co-founder of X2Go, has been working on a re-implementation of an alternative, less X11-dependent graphical backend. The underlying Xserver technology is the kdrive part of the X.Org server project. People on GNU/Linux might have used kdrive technology already: the Xephyr nested Xserver uses the kdrive implementation.

The idea of the X2Go Kdrive [2] implementation in X2Go is providing a headless Xserver on the X2Go Server side for running X11 based desktop sessions inside while using an X11-agnostic data protocol for sending the graphical desktop data to the client-side for rendering. Whereas, with NXv3 technology, you need a local Xserver on the client side, with X2Go Kdrive you only need a client app(lication) that can draw bitmaps into some sort of framebuffer, such as a client-side X11 Xserver, a client-side Wayland compositor or (hold your breath) an HTMLv5 canvas in a web browser.

X2Go Kdrive Client Implementations

During the first half of this year, I tested and DEB-packaged Alex's X2Go HTMLv5 client code [3], and it has been available for testing in the X2Go nightly builds archive for a while now.

Of course, the native X2Go Client application has had X2Go Kdrive support for a while, too, but it requires a Qt5 application in the background, the x2gokdriveclient (which is still only available in X2Go nightly builds or from X2Go Git [4]).

X2Go and Remmina

As recently posted by the Remmina community [5], one of my employees has been working for the last couple of months on finalizing an already existing draft of mine: Remmina Plugin X2Go. This project was contracted by BAUR-ITCS UG (haftungsbeschränkt) a while back and has been financed via X2Go funding from one of their customers. Unfortunately, I never really got around to finalizing the project. Apologies for this.

Daniel Teichmann, who has been in the company for a while now, but just recently switched to an employment model with considerably more work hours per week, picked up this project two months ago and has achieved awesome things along the way.

Daniel Teichmann and Antenore Gatta (Remmina core developer, aka tmow) have been cooperating intensely on this, recently, with the objective of getting the X2Go plugin code merged into Remmina asap. We are pretty close to the first touchdown (i.e. code merge) of this endeavour.

Thanks to Antenore for his support on this. This is much appreciated.

Remmina Plugin X2Go - Current Challenges

The X2Go Plugin for Remmina implementation uses Python X2Go (PyHoca-CLI) under the bonnet and basically does a system call to pyhoca-cli according to the session settings configured in the Remmina session profile UI. When using NXv3 based sessions, the session window appears on the client-side Xserver and immediately gets caught by Remmina and embedded into the Remmina frame (via Xembed protocol) where its remote sessions are supposed to appear. (Thanks that GtkSocket is still around in GTK-3). The knowing GTK-3 experts among you may have noticed: GtkSocket is obsolete and has been removed from GTK-4. Also, GtkSocket support is only available in GTK-3 when using its X11 rendering backend.

For the X2Go Kdrive implementation, we tested a similar approach (embedding the x2gokdriveclient Qt5 window via Xembed/GtkSocket), but it seems that GtkSocket and Qt5 applications don't work well together, and we have not succeeded in embedding the Qt5 window of the x2gokdriveclient application into Remmina so far. Also, this would only be a work-around for the bigger problem: long-term, we want to provide X2Go Kdrive support in Remmina not only when Remmina runs with GTK-3/X11, but also when Remmina is used natively on top of Wayland.

So, the more sustainable approach for showing an X2Go Kdrive based X2Go session in Remmina would be a GTK-3/4 or a Glib-2.0 + Cairo based rendering client provided as a shared library. This then could be used by Remmina for drawing the session bitmaps into the Remmina session frame.

This would require a port of the x2gokdriveclient Qt code into a non-Qt implementation. However, we are running out of funding to make this happen at the moment.

More Funding Needed for this Journey

As you might guess, such a project as proposed is a project that some people do in their spare time, others do it for a living.

I'd love to continue this project and have Daniel Teichmann continue his work on this, so that Remmina might soon be able to provide native X2Go Kdrive Client support.

If people read this and are interested in supporting such a project, please get in touch [6]. Thanks so much!

Mike (aka sunweaver)


Kevin RuddSMH: Morrison’s China ‘strategy’ makes us less, not more, secure

Every now and then, it’s useful to stop and ask the basic questions. Questions like: How do submarines actually contribute to our national security? And now, it seems, nuclear-powered submarines at that.

The fundamental national security responsibilities of any government are to maintain our territorial integrity, political sovereignty and economic prosperity from external aggression. In Australia’s case, submarines form a critical part of a Defence Force designed to deter, disrupt or defeat military threats to our country.

When the Labor government I led prepared the 2009 Defence White Paper, we applied these disciplines to the challenges we saw for our national security to 2030. It was the first time since the 1960s that a white paper had named China as an emerging strategic challenge, for which the Liberals attacked me as an old “Cold War Warrior”. I made no apology despite Beijing’s deep objections.

Based on Defence advice, we agreed to double the conventional submarine fleet to 12 boats, increase the surface fleet by a third, and proceed with the acquisition of up to 100 Joint Strike Fighters.

Over the past eight years, however, this vital defence replacement project has ground to a halt as the Abbott-Turnbull-Morrison government – and their six defence ministers along the way – flip-flopped between Japanese, French and now unspecified Anglo-American suppliers. The result: not a single keel laid, up to $4 billion wasted, and the deep alienation of our Japanese and French strategic partners. It has been an essay in financial waste, national security policy incompetence and egregious foreign policy mismanagement.

France, with whom I initiated the Australia-France strategic co-operation framework in 2011, is right to be outraged at how it has been dumped as our submarine supplier. And US President Joe Biden is under attack in America for excluding Paris and Ottawa from the new, so-called AUKUS defence technology agreement between Australia, Britain and the US, which in the eyes of the world looks a little like the return of the Raj. Well done, Scott Morrison!

So why the decision to turn a 12-year-old bipartisan strategy on its head, build eight nuclear-powered submarines instead, and announce it in the lead-up to a federal election?

The first reason given is “China”, as if this is somehow a self-evident truth. But China has been a core factor in our defence planning since 2009. Certainly, China has become increasingly assertive over the past decade and now rivals the US militarily in the Western Pacific. But these trend lines were clearly articulated in our 2009 white paper which the Liberals ridiculed and which Abbott ignored in his headlong rush to impress Beijing.

The second is that nuclear submarines can remain underwater indefinitely whereas their conventional cousins must “snorkel” regularly, making it easier to detect them. But once again, that was always the case.

Third, we are now told the “signature” (or noise profile) of a conventional sub beneath the surface is much louder and therefore more detectable than for nuclear propulsion. That is strange because we were advised exactly the reverse in 2009.

As for the fourth reason – the argument that America has only now agreed to share its secret nuclear propulsion technology to “that fella down under” (Biden’s description of Morrison as they announced their pact this week) – that’s possibly because we hadn’t asked for it before. And that is because none of the factors listed above had given us a need to. So I’m not entirely sold on that one either.

Finally, there’s the loose language on “interoperability” between the submarine fleets of the three AUKUS navies. This is where Morrison needs to ’fess up: is this code for being interoperable with the Americans in the Taiwan Straits, the South China Sea or even the East China Sea in China’s multiple unresolved territorial disputes with its neighbours? If so, this is indeed a slippery slope to a pre-commitment to becoming an active belligerent against China in a future war that would rival the Pacific War of 1941-45 in its destructive scale.

That would be a radical departure from longstanding, bipartisan Australian policy of not making any such commitment in advance, simply because the precise strategic circumstances in each theatre in the future are unknown and unpredictable. That, by the way, is why the US maintains a policy of deliberate ambiguity over its future military commitment to Taiwan.

So of all these five “reasons” for changing our submarine strategy, the only one that is possibly persuasive is whether the technical advice on the “signature” of conventional boats has significantly changed. But that does not validate the other four factors advanced or, at least, hinted at, since Thursday.

That’s why Anthony Albanese, as the country’s alternative prime minister, is right to insist on total transparency on the full range of nuclear policy, operational deployment and financial implications for Australia before giving his full support.

The uncomfortable truth about this government, as with John Howard over the invasion of Iraq, is that national security policy has long been the extension of domestic politics by other means.

Get ready for a two-pronged Coalition election strategy. First, despite its quarantine and vaccine failures being responsible for lockdowns, wait for Morrison to declare “freedom day” against more cautious states that will be depicted as the enemy within.

And second, in an attempt to distract the Australian public and look hairy-chested, the message will be of a government readying the nation to defend itself against the enemy from without – namely China, something those closet pinkos from the Labor Party would never do. It will have third-rate, Crosby Textor campaign spin written all over it.

The appalling irony is that Morrison is actually making Australia less secure, not more secure. Notwithstanding the difficulty, dangers and complexity of the China challenge for all of America’s allies, by routinely labelling China as public enemy No. 1, Morrison runs the grave risk of turning China into one.

For an effective national strategy on China, Morrison should talk less and do more. But for Morrison, everything is always about his own domestic politics.

Article originally published in the Sydney Morning Herald on 18 September 2021.



The post SMH: Morrison’s China ‘strategy’ makes us less, not more, secure appeared first on Kevin Rudd.


Cryptogram Friday Squid Blogging: Ram’s Horn Squid Shells

You can find ram’s horn squid shells on beaches in Texas (and presumably elsewhere).

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.5 on CRAN: More Protect’ion

Another small release of the tidyCpp package arrived on CRAN overnight. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R, which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.

The Protect class now uses the default methods for copy and move constructors and assignment allowing for wide use of the class. The small NumVec class now uses it for its data member.

The NEWS entry (which I failed to update for the releases) follows.

Changes in tidyCpp version 0.0.5 (2021-09-16)

  • The Protect class uses default copy and move assignments and constructors

  • The data object in NumVec is now a Protect object

Thanks to my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureError'd: In Other Words

We generally don't like to make fun of innocent misuses of a second language. Many of us struggle with their first. But sometimes we honestly can't tell which is first and which is zeroeth.

Whovian stombaker pontificates "Internationalization is hard. Sometimes, some translations are missing, some other times, there are strange concatenations due to language peculiarities. But here, we have everything wrong and no homogeneity in the issues."



Likewise Sean F. wonders "How similar?"



Mathematician Mark G. figures "I'm not sure that's how percent works, but thanks for the alert."



Job-hunter Antoinio has been whiteboarded before, but never quite like this. "I was applying at IBM. I must agree before continuing... To what?"



Experienced Edward explains "I've been a software engineer for 12 years, and still I have no idea how they accomplished this."



I'm sure Mark G. will agree: that about sums it up.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Krebs on SecurityTrial Ends in Guilty Verdict for DDoS-for-Hire Boss

A jury in California today reached a guilty verdict in the trial of Matthew Gatrel, a St. Charles, Ill. man charged in 2018 with operating two online services that allowed paying customers to launch powerful distributed denial-of-service (DDoS) attacks against Internet users and websites. Gatrel’s conviction comes roughly two weeks after his co-conspirator pleaded guilty to criminal charges related to running the services.

The user interface for Downthem[.]org.

Prosecutors for the Central District of California charged Gatrel, 32, and his business partner Juan “Severon” Martinez of Pasadena, Calif. with operating two DDoS-for-hire or “booter” services — downthem[.]org and ampnode[.]com.

Despite admitting to FBI agents that he ran these booter services (and turning over plenty of incriminating evidence in the process), Gatrel opted to take his case to trial, defended the entire time by public defenders. Facing the prospect of a hefty sentence if found guilty at trial, Martinez pleaded guilty on Aug. 26 to one count of unauthorized impairment of a protected computer.

Gatrel was convicted on all three charges of violating the Computer Fraud and Abuse Act, including conspiracy to commit unauthorized impairment of a protected computer, conspiracy to commit wire fraud, and unauthorized impairment of a protected computer.

Investigators say Downthem helped some 2,000 customers launch debilitating digital assaults at more than 200,000 targets, including many government, banking, university and gaming Web sites.

Prosecutors alleged that in addition to running and marketing Downthem, the defendants sold huge, continuously updated lists of Internet addresses tied to devices that could be used by other booter services to make attacks far more powerful and effective. In addition, other booter services also drew firepower and other resources from Ampnode.

Booter and stresser services let customers pick from among a variety of attack methods, but almost universally the most powerful of these methods involves what’s known as a “reflective amplification attack.” In such assaults, the perpetrators leverage unmanaged Domain Name Servers (DNS) or other devices on the Web to create huge traffic floods.

Ideally, DNS servers only provide services to machines within a trusted domain — such as translating an Internet address from a series of numbers into a domain name. But DNS reflection attacks rely on consumer and business routers and other devices equipped with DNS servers that are (mis)configured to accept queries from anywhere on the Web.

Attackers can send spoofed DNS queries to these DNS servers, forging the request so that it appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (target) address.

The bad guys also can amplify a reflective attack by crafting DNS queries so that the responses are much bigger than the requests. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This “amplification” effect is especially pronounced if the perpetrators query dozens of DNS servers with these spoofed requests simultaneously.
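That multiplication is simple to sketch in code. The byte counts and bandwidth figures below are illustrative, not measurements from this case:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth multiplier an attacker gains per reflected query."""
    return response_bytes / request_bytes

def traffic_at_victim(attacker_bps: float, factor: float) -> float:
    """Traffic arriving at the victim if the attacker's entire upstream
    bandwidth is spent on spoofed, amplified queries."""
    return attacker_bps * factor

# A small (~64-byte) query drawing a 4 KB response yields a 64x multiplier,
# so a 10 Mbit/s upstream link can direct roughly 640 Mbit/s at the target.
factor = amplification_factor(64, 4096)
print(factor)                                  # 64.0
print(traffic_at_victim(10_000_000, factor))   # 640000000.0
```

This is why booter services prize lists of open resolvers: each misconfigured device multiplies the attacker's own bandwidth for free.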

The government charged that Gatrel and Martinez constantly scanned the Internet for these misconfigured devices, and then sold lists of Internet addresses tied to these devices to other booter service operators.

Gatrel’s sentencing is scheduled for January 27, 2022. He faces a statutory maximum sentence of 35 years in federal prison. However, given the outcome of past prosecutions against other booter service operators, it seems unlikely that Gatrel will spend much time in jail.

The case against Gatrel and Martinez was brought as part of a widespread crackdown on booter services in Dec. 2018, when the FBI joined with law enforcement partners overseas to seize 15 different booter service domains.

Federal prosecutors and DDoS experts interviewed at the time said the operation had three main goals: To educate people that hiring DDoS attacks is illegal, to destabilize the flourishing booter industry, and to ultimately reduce demand for booter services.

The jury is still out on whether any of those goals have been achieved with lasting effect.

The original complaint against Gatrel and Martinez is here (PDF).

Planet DebianReproducible Builds (diffoscope): diffoscope 184 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 184. This version includes the following changes:

[ Chris Lamb ]
* Fix the semantic comparison of R's .rdb files after a refactoring of
  temporary directory handling in a previous version.
* Support a newer format version of R's .rds files.
* Update tests for OCaml 4.12. (Closes: reproducible-builds/diffoscope#274)
* Move diffoscope.versions to diffoscope.tests.utils.versions.
* Use assert_diff in tests/comparators/
* Reformat various modules with Black.

[ Zbigniew Jędrzejewski-Szmek ]
* Stop using the deprecated distutils module by adding a version
  comparison class based on the RPM version rules.
* Update invocations of llvm-objdump for the latest version of LLVM.
* Adjust a test with one-byte text file for file(1) version 5.40.
* Improve the parsing of the version of OpenSSH.

[ Benjamin Peterson ]
* Add a --diff-context option to control the unified diff context size.
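One entry above mentions replacing the deprecated distutils module with a version comparison class based on the RPM version rules. diffoscope's actual class differs, but the core segment-comparison behaviour can be sketched like this (a simplified model; tilde and caret handling are omitted):

```python
import re

def rpm_vercmp(a: str, b: str) -> int:
    """Compare two version strings using simplified RPM rules:
    split into numeric and alphabetic segments, compare pairwise."""
    seg_a = re.findall(r"\d+|[A-Za-z]+", a)
    seg_b = re.findall(r"\d+|[A-Za-z]+", b)
    for x, y in zip(seg_a, seg_b):
        if x.isdigit() and y.isdigit():
            # numeric segments compare as integers, so "1.10" > "1.9"
            xi, yi = int(x), int(y)
            if xi != yi:
                return 1 if xi > yi else -1
        elif x.isdigit() != y.isdigit():
            # a numeric segment sorts as newer than an alphabetic one
            return 1 if x.isdigit() else -1
        elif x != y:
            # alphabetic segments compare lexically
            return 1 if x > y else -1
    # all shared segments equal: the string with more segments is newer
    return (len(seg_a) > len(seg_b)) - (len(seg_a) < len(seg_b))

print(rpm_vercmp("1.10", "1.9"))   # 1
print(rpm_vercmp("5.40", "5.40"))  # 0
```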

You can find out more by visiting the project homepage.


Cryptogram Zero-Click iMessage Exploit

Citizen Lab released a report on a zero-click iMessage exploit that is used in NSO Group’s Pegasus spyware.

Apple patched the vulnerability; everyone needs to update their OS immediately.

News articles on the exploit.

Planet DebianChris Lamb: On Colson Whitehead's Harlem Shuffle

Colson Whitehead's latest novel, Harlem Shuffle, was always going to be widely reviewed, if only because his last two books won Pulitzer prizes. Still, after enjoying both The Underground Railroad and The Nickel Boys, I was certainly going to read his next book, regardless of what the critics were saying — indeed, it was actually quite agreeable to float above the manufactured energy of the book's launch.

Saying that, I was encouraged to listen to an interview with the author by Ezra Klein. Now I had heard Whitehead speak once before when he accepted the Orwell Prize in 2020, and once again he came across as a pretty down-to-earth guy. Or if I were to emulate the detached and cynical tone Whitehead embodied in The Nickel Boys, after winning so many literary prizes in the past few years, he has clearly rehearsed how to respond to the cliched questions authors must be asked in every interview. With the obligatory throat-clearing of 'so, how did you get into writing?', for instance, Whitehead replies with his part of the catechism that 'It seemed like being a writer could be a cool job. You could work from home and not talk to people.' The response is the right combination of cute and self-effacing... and with its slight tone-deafness towards enforced isolation, it was no doubt honed before Covid-19.


Harlem Shuffle tells three separate stories about Ray Carney, a furniture salesman and 'fence' for stolen goods in New York in the 1960s. Carney doesn't consider himself a genuine criminal though, and there's a certain logic to his relativistic morality. After all, everyone in New York City is on the take in some way, and if some 'lightly used items' in Carney's shop happened to have had 'previous owners', well, that's not quite his problem. 'Nothing solid in the city but the bedrock,' as one character dryly observes. Yet as Ezra pounces on in his NYT interview mentioned above, the focus on the Harlem underworld means there are very few women in the book, and Whitehead's circular response — ah well, it's a book about the criminals at that time! — was a little unsatisfying. Not only did it feel uncharacteristically slippery of someone justly lauded for his unflinching power of observation (after all, it was the author who decided what to write about in the first place), it foreclosed on the opportunity to delve into why the heist and caper genres (from The Killing, The Feather Thief, Ocean's 11, etc.) have historically been a 'male' mode of storytelling.

Perhaps knowing this to be the case, the conversation quickly steered towards Ray Carney's wife, Elizabeth, the only woman in the book who could be said to possess some plausible interiority. The following off-hand remark from Whitehead caught my attention:

My wife is convinced that [Elizabeth] knows everything about Carney's criminal life, and is sort of giving him a pass. And I'm not sure if that's true. I have to figure out exactly what she knows and when she knows it and how she feels about it.

I was quite taken by this, although not simply due to its effect on the story itself. As in, it immediately conjured up a charming picture of Whitehead's domestic arrangements: not only does Whitehead's wife feel free to disagree with what one of Whitehead's 'own' characters knows or believes, but that Colson has no problem whatsoever sharing that disagreement with the public at large. (It feels somehow natural that Whitehead's wife believes her counterpart knows more than she lets on, whilst Whitehead himself imbues the protagonist's wife with a kind of neo-Victorian innocence.) I'm minded to agree with Whitehead's partner myself, if only due to the passages where Elizabeth is studiously ignoring Carney's otherwise unexplained freak-outs.

But all of these meta-thoughts simply underline just how emancipatory the Death of the Author can be. This product of academic literary criticism (the term was coined by Roland Barthes' 1967 essay of the same name) holds that the original author's intentions, ideas or biographical background carry no especial weight in determining how others should interpret their work. It is usually understood as meaning that a writer's own views are no more valid or 'correct' than the views held by someone else. (As an aside, I've found that most readers who encounter this concept for the first time have been reading books in this way since they were young. But the opposite is invariably true with cinephiles, who often have a bizarre obsession with researching or deciphering the 'true' interpretation of a film.) And with all that in mind, can you think of a more wry example of the freeing (and fun) nature of the Death of the Author than an author's own partner dissenting with their (Pulitzer Prize-winning) husband on the position of a lynchpin character?


The 1964 Harlem riot began after James Powell, a 15-year-old African American, was shot and killed by Thomas Gilligan, an NYPD police officer in front of 10s of witnesses. Gilligan was subsequently cleared by a grand jury.

As it turns out, the reviews for Harlem Shuffle have been almost universally positive, and after reading it in the two days after its release, I would certainly agree it is an above-average book. But it didn't quite take hold of me in the way that The Underground Railroad or The Nickel Boys did, especially the later chapters of The Nickel Boys that were set in contemporary New York and could thus make some (admittedly fairly explicit) connections from the 1960s to the present day — that kind of connection is not there in Harlem Shuffle, or at least I did not pick up on it during my reading.

I can see why one might take exception to that, though. For instance, it is certainly true that the week-long Harlem Riot forms a significant part of the plot, and some events in particular are entirely contingent on the ramifications of this momentous event. But it's difficult to argue the riot's impact is truly integral to the story, so not only is this uprising against police brutality almost regarded as a background event, any contemporary allusion to the murder of George Floyd is subsequently watered down. It's nowhere near the historical rubbernecking of Forrest Gump (1994), of course, but that's not a battle you should ever be fighting.

Indeed, whilst a certain smoothness of affect is to be priced into the Whitehead reading experience, my initial overall reaction to Harlem Shuffle was fairly flat, despite all the action and intrigue on the page. The book perhaps belies its origins as a work conceived during quarantine — after all, the book is essentially comprised of three loosely connected novellas, almost as if the unreality and mental turbulence of lockdown prevented the author from performing the psychological 'deep work' of producing a novel-length text with his usual depth of craft. A few other elements chimed with this being a 'lockdown novel' as well, particularly the book's preoccupation with the sheer physicality of the city compared to the usual complex interplay between its architecture and its inhabitants. This felt like it had been directly absorbed into the book from the author walking around his deserted city, and thus being able to take in details for the first time:

The doorways were entrances into different cities—no, different entrances into one vast, secret city. Ever close, adjacent to all you know, just underneath. If you know where to look.

And I can't fail to mention that you can almost touch Whitehead's sublimated hunger to eat out again as well:

Stickups were chops—they cook fast and hot, you’re in and out. A stakeout was ribs—fire down low, slow, taking your time.


Sometimes when Carney jumped into the Hudson when he was a kid, some of that stuff got into his mouth. The Big Apple Diner served it up and called it coffee.

More seriously, however, the relatively thin personalities of minor characters then reminded me of the simulacrum of Zoom-based relationships, and the essentially unsatisfactory endings to the novellas felt reminiscent of lockdown pseudo-events that simply fizzle out without a bang. One of the stories ties up loose ends with: 'These things were usually enough to terminate a mob war, and they appeared to end the hostilities in this case as well.' They did? Well, okay, I guess.


The corner of 125th Street and Morningside Avenue in 2019, the purported location of Carney's fictional furniture store. Signage plays a prominent role in Harlem Shuffle, possibly due to the author's quarantine walks.

Still, it would be unfair to characterise myself as 'disappointed' with the novel, and none of this piece should be taken as really deep criticism. The book certainly was entertaining enough, and pretty funny in places as well:

Carney didn’t have an etiquette book in front of him, but he was sure it was bad manners to sit on a man’s safe.


The manager of the laundromat was a scrawny man in a saggy undershirt painted with sweat stains. Launderer, heal thyself.

Yet I can't shake the feeling that every book you write is a book that you don't, and so we might need to hold out a little longer for Whitehead's 'George Floyd novel'. (Although it is for others to say how much of this sentiment is the expectations of a White Reader for The Black Author to ventriloquise the pain of 'their' community.)

Some room for personal critique is surely permitted. I dearly missed the junk food energy of the dry and acerbic observations that run through Whitehead's previous work. At one point he had a good line on the model tokenisation that lurks behind 'The First Negro to...' labels, but the callbacks to this idea ceased without any payoff. Similar things happened with the not-so-subtle critiques of the American Dream:

“Entrepreneur?” Pepper said the last part like manure. “That’s just a hustler who pays taxes.”


One thing I’ve learned in my job is that life is cheap, and when things start getting expensive, it gets cheaper still.

Ultimately, though, I think I just wanted more. I wanted a deeper exploration of how the real power in New York is not wielded by individual street hoodlums or even the cops but in the form of real estate, essentially serving as a synecdoche for Capital as a whole. (A recent take on this can be felt in Jed Rothstein's 2021 documentary, WeWork: Or the Making and Breaking of a $47 Billion Unicorn… and it is perhaps pertinent to remember that the US President at the time this novel was written was affecting to be a real estate tycoon.) Indeed, just like the concluding scenes of J. J. Connolly's Layer Cake, although you can certainly pull off a cool heist against the Man, power ultimately resides in those who control the means of production... and a homespun furniture salesman on the corner of 125 & Morningside just ain't that. There are some nods to this kind of analysis in the conclusion of the final story ('Their heist unwound as if it had never happened, and Van Wyck kept throwing up buildings.'), but, again, I would have simply liked more.

And when I attempted to file this book away into the broader media landscape, given the current cultural visibility of 1960s pop culture (e.g. One Night in Miami (2020), Judas and the Black Messiah (2021), Summer of Soul (2021), etc.), Harlem Shuffle also seemed like a missed opportunity to critically analyse our (highly-qualified) longing for the civil rights era. I can certainly understand why we might look fondly on the cultural products from a period when politics was less alienated, when society was less atomised, and when it was still possible to imagine meaningful change, but in this dimension at least, Harlem Shuffle seems to merely contribute to this nostalgic escapism.

Worse Than FailureCodeSOD: Subbing for the Subcontractors

Back in the mid-2000s, Maurice got a less-than-tempting offer. A large US city had hired a major contracting firm, that contracting firm had subcontracted out the job, and those subcontractors let the project spiral completely out of control. The customer and the primary contracting firm wanted to hire new subcontractors to try and save the project.

As this was the mid-2000s, the project had started its life as a VB6 project. Then someone realized this was a terrible idea, and decided to make it a VB.Net project, without actually changing any of the already written code, though. That leads to code like this:

Private Function getDefaultPath(ByRef obj As Object, ByRef Categoryid As Integer) As String
Dim sQRY As String
Dim dtSysType As New DataTable
Dim iMPTaxYear As Integer
Dim lEffTaxYear As Long
Dim dtControl As New DataTable
Const sSDATNew As String = "NC"
getDefaultPath = False
sQRY = "select TAXBILLINGYEAR from t_controltable"
dtControl = goDB.GetDataTable("Control", sQRY)
iMPTaxYear = dtControl.Rows(0).Item("TAXBILLINGYEAR")
'iMPTaxYear = CShort(cmbTaxYear.Text)
If goCalendar.effTaxYearByTaxYear(iMPTaxYear, lEffTaxYear) Then
End If
sQRY = " "
sQRY = "select * from T_SysType where MTYPECODE = '" & sSDATNew & "'" & _
" and msystypecategoryid = " & Categoryid & " and meffstatus = 'A' and " & _
lEffTaxYear & " between mbegTaxYear and mendTaxYear"
dtSysType = goDB.GetDataTable("SysType", sQRY)
If dtSysType.Rows.Count > 0 Then
obj.Text = dtSysType.Rows(0).Item("MSYSTYPEVALUE1")
Else
obj.Text = ""
End If
getDefaultPath = True
End Function

Indentation as the original.

This function was the culmination of four years of effort on the part of the original subcontractor. The indentation is designed to make this difficult to read- wait, no. That would imply that the indentation was designed. This random collection of spaces makes the code hard to read, so let's get some big picture stuff.

It's called getDefaultPath and returns a String. That seems reasonable, so let's skip down to the return statement, which of course is done in the usual VB6 idiom, where we set the function name equal to the result: getDefaultPath = True Oh… so it doesn't return the path. It returns "True". As a string.

Tracing through, we first query t_controltable to populate iMPTaxYear. Once we have that, we can do this delightful check:

If goCalendar.effTaxYearByTaxYear(iMPTaxYear, lEffTaxYear) Then
End If

Then we do some string concatenation to build a new query, and for a change, this is an example that doesn't really open up any SQL injection attacks. All the fields are either numerics or hard-coded constants. It's still awful, but at least it's not a gaping security hole.

That gets us a set of rows from the SysType table, which we can then consume:

If dtSysType.Rows.Count > 0 Then
    obj.Text = dtSysType.Rows(0).Item("MSYSTYPEVALUE1")
Else
    obj.Text = ""
End If

This is our "return" line. You wouldn't know it from the function signature, but obj as Object is actually a textbox. So this function runs a pair of queries against the database to populate a UI element directly with the result.

And this function is just one small example. Maurice adds:

There are 5,188 GOTO statements in 1321 code files. Error handling consists almost entirely of a messagebox, and nowhere did they use Option Strict or Option Explicit.

There's so much horror contained in those two sentences, right there. For those that don't know VisualBasic, Option Strict and Option Explicit are usually enabled by default. Strict forces you to respect types- it won't do any late binding on types, it won't allow narrowing conversions between types. It would prohibit calling obj.Text =… like we see in the example above. Explicit requires you to declare variables before using them.

Now, if you're writing clean code in the first place, Option Strict and Option Explicit aren't truly required- a language like Python, for example, is neither strict nor explicit. But a code base like this, without those flags? Madness.
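The late-binding hazard is easy to reproduce in Python, which is likewise neither strict nor explicit. Here is a rough, hypothetical analogue of the obj.Text assignment (the class and function names are invented for illustration): the attribute write on an untyped parameter is only checked at run time, exactly what Option Strict would have flagged at compile time.

```python
class TextBox:
    """Stands in for the UI control hiding behind VB's 'obj As Object'."""
    def __init__(self):
        self.text = ""

def get_default_path(obj):
    # Like the VB original with Option Strict off: 'obj' is untyped,
    # so this attribute write is only validated when the line executes.
    obj.text = "default"
    return True  # mirrors the function returning "True" instead of a path

box = TextBox()
get_default_path(box)    # works: box happens to have a .text attribute
try:
    get_default_path(42)  # same call, wrong type: fails only at run time
except AttributeError as exc:
    print("late-binding error:", exc)
```

Nothing warns about the bad call until it actually runs, which is why a 1,321-file code base without Strict or Explicit is so hazardous.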

Maurice finishes:

This is but one example from the system. Luckily for the city, what took the subcontractors 4 years to destroy only took us a few months to whip into shape.



Planet DebianSteinar H. Gunderson: plocate in Fedora

It seems that due to the work of Zbigniew Jędrzejewski-Szmek, plocate is now in Fedora Rawhide. This carries a special significance; not only is Fedora an important distribution, but it is also the upstream of mlocate. Thus, an expressed desire to replace mlocate with plocate over the next few Fedora releases feels like it carries a certain amount of support on the road towards world domination. :-)

I'd love to see someone make a version of GNU tar that uses io_uring; it's really slow for many small files on rotating media. Also, well, a faster dpkg. :-)

Krebs on SecurityCustomer Care Giant TTEC Hit By Ransomware

TTEC, [NASDAQ: TTEC], a company used by some of the world’s largest brands to help manage customer support and sales online and over the phone, is dealing with disruptions from a network security incident resulting from a ransomware attack, KrebsOnSecurity has learned.

While many companies have been laying off or furloughing workers in response to the Coronavirus pandemic, TTEC has been massively hiring. Formerly TeleTech Holdings Inc., Englewood, Co.-based TTEC now has nearly 60,000 employees, most of whom work from home and answer customer support calls on behalf of a large number of name-brand companies, like Bank of America, Best Buy, Credit Karma, Dish Network, Kaiser Permanente, USAA and Verizon.

On Sept. 14, KrebsOnSecurity heard from a reader who passed on an internal message apparently sent by TTEC to certain employees regarding the status of a widespread system outage that began on Sunday, Sept. 12.

“We’re continuing to address the system outage impacting access to the network, applications and customer support,” reads an internal message sent by TTEC to certain employees.

TTEC has not responded to requests for comment. A phone call placed to the media contact number listed on an August 2021 TTEC earnings release produced a message saying it was a non-working number.

[Update, 6:20 p.m. ET: TTEC confirmed a ransomware attack. See the update at the end of this piece for their statement]

TTEC’s own message to employees suggests the company’s network may have been hit by the ransomware group “Ragnar Locker,” (or else by a rival ransomware gang pretending to be Ragnar). The message urged employees to avoid clicking on a file that suddenly may have appeared in their Windows start menu called “!RA!G!N!A!R!”

“DO NOT click on this file,” the notice read. “It’s a nuisance message file and we’re working on removing it from our systems.”

Ragnar Locker is an aggressive ransomware group that typically demands millions of dollars worth of cryptocurrency in ransom payments. In an announcement published on the group’s darknet leak site this week, the group threatened to publish the full data of victims who seek help from law enforcement and investigative agencies following a ransomware attack.

One of the messages texted to TTEC employees included a link to a Zoom videoconference line. Clicking that link opened a Zoom session in which multiple TTEC employees who were sharing their screens took turns using the company’s Global Service Desk, an internal TTEC system for tracking customer support tickets.

The TTEC employees appear to be using the Zoom conference line to report the status of various customer support teams, most of which are reporting “unable to work” at the moment.

For example, TTEC’s Service Desk reports that hundreds of TTEC employees assigned to work with Bank of America’s prepaid services are unable to work because they can’t remotely connect to TTEC’s customer service tools. More than 1,000 TTEC employees are currently unable to do their normal customer support work for Verizon, according to the Service Desk data. Hundreds of employees assigned to handle calls for Kaiser Permanente also are unable to work.

“They’ve been radio silent all week except to notify employees to take another day off,” said the source who passed on the TTEC messages, who spoke to KrebsOnSecurity on condition of anonymity. “As far as I know, all low-level employees have another day off today.”

The extent and severity of the incident at TTEC remains unknown. It is common for companies to disconnect critical systems in the event of a network intrusion, as part of a larger effort to stop the badness from spreading elsewhere. Sometimes disconnecting everything actually does help, or at least helps to keep the attack from spreading to partner networks. But it is those same connections to partner companies that raises concern in the case of TTEC’s ongoing outage.

In the meantime, if you’re unlucky enough to need to make a customer service call today, there’s a better-than-even chance you will experience….wait for it…longer-than-usual hold times.

This is a developing story. Further details or updates will be noted here with a date and time stamp.

Update, 5:37 p.m. ET: TTEC responded with the following statement:

TTEC is committed to cyber security, and to protecting the integrity of our clients’ systems and data. We recently became aware of a cybersecurity incident that has affected certain TTEC systems.  Although as a result of the  incident, some of our data was encrypted and business activities at several facilities have been temporarily disrupted, the company continuous to serve its global clients. TTEC immediately activated its information security incident response business continuity protocols, isolated the systems involved, and took other appropriate measures to contain the incident. We are now in the process of  carefully and deliberately restoring the systems that have been involved.

We also launched an investigation, typical under the circumstances, to determine the potential impacts.  In serving our clients TTEC, generally, does not maintain our clients’ data, and the investigation to date has not identified compromise to clients’ data. That investigation is on-going and we will take additional action, as appropriate, based on the investigation’s results. This is all the information we have to share until our investigation is complete.

Cryptogram Identifying Computer-Generated Faces

It’s the eyes:

The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors and for social media sites to use it to remove such content. Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature to ensure the roundness of pupils.

And the arms race continues….

Research paper.

Planet DebianIan Jackson: Get source to Debian packages only via dgit; "official" git links are beartraps


dgit clone sourcepackage gets you the source code, as a git tree, in ./sourcepackage. cd into it and dpkg-buildpackage -uc -b.

Do not use: "VCS" links on official Debian web pages like; "debcheckout"; searching Debian's gitlab ( These are good for Debian experts only.

If you use Debian's "official" source git repo links you can easily build a package without Debian's patches applied.[1] This can even mean missing security patches. Or maybe it can't even be built in a normal way (or at all).


It's complicated. There is History.

Debian's "most-official" centralised source repository is still the Debian Archive, which is a system based on tarballs and patches. I invented the Debian source package format in 1992/3 and it has been souped up since, but it's still tarballs and patches. This system is, of course, obsolete, now that we have modern version control systems, especially git.

Maintainers of Debian packages have invented ways of using git anyway, of course. But this is not standardised. There is a bewildering array of approaches.

The most common approach is to maintain a git tree containing a pile of *.patch files, which are then often maintained using quilt. Yes, really, many Debian people are still using quilt, despite having git! There is machinery for converting this git tree containing a series of patches, to an "official" source package. If you don't use that machinery, and just build from git, nothing applies the patches.

[1] This post was prompted by a conversation with a friend who had wanted to build a Debian package, and didn't know to use dgit. They had got the source from salsa via a link on tracker.d.o, and built .debs without Debian's patches. This not a theoretical unsoundness, but a very real practical risk.

The future is not very bright

In 2013 at the Debconf in Vaumarcus, Joey Hess, myself, and others, came up with a plan to try to improve this which we thought would be deployable. (Previous attempts had failed.) Crucially, this transition plan does not force change onto any of Debian's many packaging teams, nor onto people doing cross-package maintenance work. I worked on this for quite a while, and at a technical level it is a resounding success.

Unfortunately there is a big limitation. At the current stage of the transition, to work at its best, this replacement scheme hopes that maintainers who update a package will use a new upload tool. The new tool fits into their existing Debian git packaging workflow and has some benefits, but it does make things more complicated rather than less (like any transition plan must, during the transitional phase). When maintainers don't use this new tool, the standardised git branch seen by users is a compatibility stub generated from the tarballs-and-patches. So it has the right contents, but useless history.

The next step is to allow a maintainer to update a package without dealing with tarballs-and-patches at all. This would be massively more convenient for the maintainer, so an easy sell. And of course it links the tarballs-and-patches to the git history in a proper machine-readable way.

We held a "git packaging requirements-gathering session" at the Curitiba Debconf in 2019. I think the DPL's intent was to try to get input into the git workflow design problem. The session was a great success: my existing design was able to meet nearly everyone's needs and wants. The room was obviously keen to see progress. The next stage was to deploy tag2upload. I spoke to various key people at the Debconf and afterwards in 2019 and the code has been basically ready since then.

Unfortunately, deployment of tag2upload is mired in politics. It was blocked by a key team because of unfounded security concerns; positive opinions from independent security experts within Debian were disregarded. Of course it is always hard to get a team to agree to something when it's part of a transition plan which treats their systems as an obsolete setup retained for compatibility.

Current status

If you don't know about Debian's git packaging practices (eg, you have no idea what "patches-unapplied packaging branch without .pc directory" means), and don't want to learn about them, you must use dgit to obtain the source of Debian packages. There is a lot more information and detailed instructions in dgit-user(7).

Hopefully either the maintainer did the best thing, or, if they didn't, you won't need to inspect the history. If you are a Debian maintainer, you should use dgit push-source to do your uploads. This will make sure that users of dgit will see a reasonable git history.

edited 2021-09-15 14:48 Z to fix a typo

comment count unavailable comments

Worse Than FailureCodeSOD: The Programmer's Motto and Other Comments

We've got a lovely backlog of short snippets of code, and it's been a long time since our last smorgasbord, so let's take a look at some… choice cuts.

Let's open with a comment, found by Russell F:

//setting Text="" on the front end some how stoped a error on tftweb-02a on prealpha it may have also needed a new compiled version //but after two + hours it doesnt work and i am not shure i acutal did anything

"After two+ hours, it doesn't work, and I'm not sure I actually did anything," describes the experience of being a programmer so well, that I honestly think it's my new motto. The key difference is that, if it doesn't work after two hours, you do have to keep going until it does.

From an Anonymous submitter, we have:

[Required(ErrorMessage = "This field is required."), ValidateMaxLength(Length = 10)] [Range(typeof(bool), "false", "true", ErrorMessage = "Enter valid value.")] public Nullable<bool> Nonbillable { get; set; }

Now, this is probably actually correct, because it's possible that the underlying data store might have invalid entries, so marking a Required field as Nullable probably makes sense. Then again, the chance of having invalid data in your datastore is a WTF, and apparently, it's a big problem for this API, as our submitter adds: "Looking at a very confused public-facing API - everything is like this."

"R3D3-1" was checking a recent release of Emacs, and found this function in python.el.gz:

(defun python-hideshow-forward-sexp-function (arg) "Python specific `forward-sexp' function for `hs-minor-mode'. Argument ARG is ignored." arg ; Shut up, byte compiler. (python-nav-end-of-defun) (unless (python-info-current-line-empty-p) (backward-char)))

"Shut up, byte compiler". In this case, the programmer was trying to get an "unused parameter" warning to go away by using the parameter.

"R3D3-1" adds:

The comment made me chuckle a little, not a major WTF.
The correct solution in Emacs Lisp would have been to rename arg to _arg. This would be clear to not only the byte compiler, but also to other programmers.

And finally, a frustrated Cassi found this comment:

// TODO: handle this correctly

Cassi titled this "So TODO it already!" If you're writing code you know is incorrect, it might be a good time to stop and re-evaluate what you're doing. Though, Cassi goes on to add:

I suppose it could be argued, since I'm only coming across it now, that this comment was a good enough "solution" for the five years it's been sitting in the code.

Perhaps correctness isn't as important as we think.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Cryptogram Lightning Cable with Embedded Eavesdropping

Normal-looking cables (USB-C, Lightning, and so on) that exfiltrate data over a wireless network.

I blogged about a previous prototype here.


Krebs on SecurityMicrosoft Patch Tuesday, September 2021 Edition

Microsoft today pushed software updates to plug dozens of security holes in Windows and related products, including a vulnerability that is already being exploited in active attacks. Also, Apple has issued an emergency update to fix a flaw that’s reportedly been abused to install spyware on iOS products, and Google‘s got a new version of Chrome that tackles two zero-day flaws. Finally, Adobe has released critical security updates for Acrobat, Reader and a slew of other software.

Four of the flaws fixed in this patch batch earned Microsoft’s most-dire “critical” rating, meaning they could be exploited by miscreants or malware to remotely compromise a Windows PC with little or no help from the user.

Top of the critical heap is CVE-2021-40444, which affects the “MSHTML” component of Internet Explorer (IE) on Windows 10 and many Windows Server versions. In a security advisory last week, Microsoft warned attackers already are exploiting the flaw through Microsoft Office applications as well as IE.

The critical bug CVE-2021-36965 is interesting, as it involves a remote code execution flaw in “WLAN AutoConfig,” the component in Windows 10 and many Server versions that handles auto-connections to Wi-Fi networks. One mitigating factor here is that the attacker and target would have to be on the same network, although many systems are configured to auto-connect to Wi-Fi network names with which they have previously connected.

Allan Liska, senior security architect at Recorded Future, said a similar vulnerability — CVE-2021-28316 — was announced in April.

“CVE-2021-28316 was a security bypass vulnerability, not remote code execution, and it has never been reported as publicly exploited,” Liska said. “That being said, the ubiquity of systems deployed with WLAN AutoConfig enabled could make it an attractive target for exploitation.”

Another critical weakness that enterprises using Azure should prioritize is CVE-2021-38647, which is a remote code execution bug in Azure Open Management Infrastructure (OMI) that has a CVSS Score of 9.8 (10 is the worst). It was reported and detailed by researchers at Wiz, who said CVE-2021-38647 was one of four bugs in Azure OMI they found that Microsoft patched this week.

“We conservatively estimate that thousands of Azure customers and millions of endpoints are affected,” Wiz’s Nir Ohfeld wrote. “In a small sample of Azure tenants we analyzed, over 65% were unknowingly at risk.”

Kevin Breen of Immersive Labs calls attention to several “privilege escalation” flaws fixed by Microsoft this month, noting that while these bugs carry lesser severity ratings, Microsoft considers them more likely to be exploited by bad guys and malware.

“CVE-2021-38639 and CVE-2021-36975 have also been listed as ‘exploitation more likely’ and together cover the full range of supported Windows versions,” Breen wrote. “I am starting to feel like a broken record when talking about privilege escalation vulnerabilities. They typically have a lower CVSS score than something like Remote Code Execution, but these local exploits can be the linchpin in the post-exploitation phases of an experienced attacker. If you can block them here you have the potential to significantly limit their damage. If we assume a determined attacker will be able to infect a victim’s device through social engineering or other techniques, I would argue that patching these is even more important than patching some other Remote Code execution vulnerabilities.”

Apple on Monday pushed out an urgent security update to fix a “zero-click” iOS vulnerability (CVE-2021-30860) reported by researchers at Citizen Lab that allows commands to be run when files are opened on certain Apple devices. Citizen Lab found that an exploit for CVE-2021-30860 was being used by the NSO Group, an Israeli tech company whose spyware enables the remote surveillance of smartphones.

Google also released a new version of its Chrome browser on Monday to fix nine vulnerabilities, including two that are under active attack. If you’re running Chrome, keep a lookout for when you see an “Update” tab appear to the right of the address bar. If it’s been a while since you closed the browser, you might see the Update button turn from green to orange and then red. Green means an update has been available for two days; orange means four days have elapsed, and red means your browser is a week or more behind on important updates. Completely close and restart the browser to install any pending updates.

As it usually does on Patch Tuesday, Adobe also released new versions of Reader, Acrobat and a large number of other products. Adobe says it is not aware of any exploits in the wild for any of the issues addressed in its updates today.

For a complete rundown of all patches released today and indexed by severity, check out the always-useful Patch Tuesday roundup from the SANS Internet Storm Center. And it’s not a bad idea to hold off updating for a few days until Microsoft works out any kinks in the updates: AskWoody.com usually has the lowdown on any patches that are causing problems for Windows users.

On that note, before you update please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

If you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a decent chance other readers have experienced the same and may chime in here with useful tips.

Planet DebianSven Hoexter: PV - Monitoring Envertech Microinverter via the Envertec Portal

Some time ago I looked briefly at an Envertech data logger for small scale photovoltaic setups. Turned out that PV inverters are kinda unreliable, and you really have to monitor them to notice downtimes and defects. Since my pal shot for a quick win I've cobbled together another Python script to query the Envertec portal, and report back if the generated power is down to 0. The script is currently run on a vserver via cron and reports back via the system MTA. So yeah, you need to have something like that already at hand.

Script and Configuration

You have to provide your PV system's location with latitude and longitude so the script can calculate (via python3-suntime) the sunrise and sunset times. At the location we deal with we expect to generate some power at least from sunrise + 1h to sunset - 1h. That is tunable via the configuration option toleranceSeconds.
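The windowing logic boils down to a small pure function. A sketch of it (function names and the default tolerance value are my own, not taken from the script; the post only says toleranceSeconds is configurable):

```python
from datetime import datetime, timedelta


def generation_window(sunrise: datetime, sunset: datetime,
                      tolerance_seconds: int = 3600):
    """Interval in which the PV system is expected to generate power:
    from sunrise + tolerance to sunset - tolerance."""
    delta = timedelta(seconds=tolerance_seconds)
    return sunrise + delta, sunset - delta


def should_alert(now: datetime, sunrise: datetime, sunset: datetime,
                 power: float, tolerance_seconds: int = 3600) -> bool:
    """Only alert when the reported power is 0 inside the expected window;
    outside of it, zero generation is normal."""
    start, end = generation_window(sunrise, sunset, tolerance_seconds)
    return start <= now <= end and power == 0.0
```

With sunrise at 06:00 and sunset at 20:00, a power reading of 0 at noon would trigger a report, while the same reading at 06:30 would not.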

Retrieving the stationId is a bit ugly because it's not provided via any API; instead it's rendered serverside into the website. So I just logged in on the portal and picked it up by looking into the page source.

Envertec Portal API

I guess this is some classic in the IoT land, but neither the documentation provided on the portal frontpage as docx, nor the API docs at port 8090 are complete and correct. The few bits I gathered via the Firefox Web Developer Tools are:

  1. Login - POST, send userName and pwd containing your login name and password. The response JSON is very explicit if your login was not successful and why.
  2. Store the session cookie called ASP.NET_SessionId for use on all subsequent requests.
  3. Retrieve station info - POST, send ASP.NET_SessionId and stationId with the ID of the station. Returns a JSON with an object named Data. The field Power contains the currently generated power as a float with two digits (e.g. 0.01).
  4. Logout - POST, send ASP.NET_SessionId.
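The flow above can be sketched with just the standard library. Note the hedges: the /apiaccount/... and /apiinfo/... URL paths below are my guesses (the original endpoints aren't spelled out here); only the field names and the session cookie come from the list above.

```python
import json
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

BASE = "https://www.envertecportal.com"  # portal host; API paths below are guesses


def extract_power(payload: dict) -> float:
    """Pull the current power out of the station-info JSON: the response
    carries an object named Data whose Power field is the generated power."""
    return float(payload["Data"]["Power"])


def _post(opener, url: str, fields: dict) -> bytes:
    data = urllib.parse.urlencode(fields).encode()
    with opener.open(url, data) as resp:  # data != None makes this a POST
        return resp.read()


def current_power(user: str, pwd: str, station_id: str) -> float:
    # The cookie jar keeps the ASP.NET_SessionId between requests.
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(CookieJar()))
    _post(opener, f"{BASE}/apiaccount/login", {"userName": user, "pwd": pwd})
    try:
        raw = _post(opener, f"{BASE}/apiinfo/getstationinfo",
                    {"stationId": station_id})
        return extract_power(json.loads(raw))
    finally:
        _post(opener, f"{BASE}/apiaccount/logout", {})
```

A monitoring cron job would then compare current_power() against 0 inside the expected generation window and mail out on failure.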

Some Surprises

There were a few surprises, maybe they help others dealing with an Envertech setup.

  1. The portal truncates passwords at 16 chars.
  2. The "Forget Password?" function mails you back the password in plain text (that's how I learned about 1.).
  3. The login API endpoint reporting the exact reason why the login failed is somewhat out of fashion. Though this one is probably not a credential stuffing target because there is no money to make, so don't care.
  4. The data logger reports the data back to the portal at port 10013.
  5. There is some checksumming done on the reported data, but the system is not replay safe. So you can send it any valid data string at a later time and get wrong data recorded.
  6. Other people have decoded some values but could not figure out the checksumming so far.

Planet DebianJoachim Breitner: A Candid explainer: Quirks

This is the fifth and final post in a series about the interface description language Candid.

If you made it this far, you now have a good understanding of what Candid is, what it is for and how it is used. For this final post, I’ll put the spotlight on specific aspects of Candid that are maybe surprising, or odd, or quirky. This section will be quite opinionated, and could maybe be called “what I’d do differently if I’d re-do the whole thing”.

Note that these quirks are not serious problems, and they don’t invalidate the overall design. I am writing this up not to discourage the use of Candid, but merely help interested parties to understand it better.

References in the wire format

When the work on Candid began at DFINITY, the Internet Computer was still far away from being a thing, and many fundamental aspects about it were still in the air. In particular, there was still talk about embracing capabilities as a core feature of the application model, which would be implemented as opaque references on the system level, likely building on WebAssembly’s host reference type proposal (which only landed recently), and could be used to model access permissions, custom tokens and many other things.

So Candid is designed with that in mind, and you’ll find that its wire format is not just a type table and a value table, but actually

a triple (T, M, R), where T (“type”) and M (“memory”) are sequences of bytes and R (“references”) is a sequence of references.

Also the wire format for values of function and service types has an extra byte to distinguish between “public references” (represented by a principal and possibly a method name in the data part), and these opaque references.

Alas, references never made it into the Internet Computer, so all Candid implementations simply ignore that part of the specification. But it’s still in the spec, and if it confused you before, now you know why.

Hashed field names

Candid record and variant types look like they have textual field names:

type T = record { syndactyle : nat; trustbuster: bool }

But that is actually only true superficially. The wire format for Candid only stores hashes of field names. So the above is actually equivalent to

type T = record { 4260381820 : nat; 3504418361 : bool }

or, for that matter, to

type T = record { destroys : bool; rectum : nat }

(Yes, I used an english word list to find these hash collisions. There aren’t that many actually.)
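The hash in question is the one from the Candid specification: fold the field name's UTF-8 bytes in base 223, modulo 2^32. A minimal sketch (the function name is mine):

```python
def candid_field_hash(name: str) -> int:
    """Candid field-name hash:
    hash(id) = (sum of id[i] * 223**(len-1-i)) mod 2**32,
    computed here as a left-to-right base-223 fold over the UTF-8 bytes."""
    h = 0
    for byte in name.encode("utf-8"):
        h = (h * 223 + byte) % 2**32
    return h
```

This reproduces the numbers above, e.g. candid_field_hash("syndactyle") gives 4260381820.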

The benefit of such hashing is that the messages are a bit smaller in most (not all) cases, but it is a big annoyance for dynamic uses of Candid. It’s the reason why tools like dfx, if they don’t know the Candid interface of a service, will print the result with just the numerical hash, letting you guess which field is which.

It also complicates host languages that derive Candid types from the host language, like Motoko, as some records (e.g. record { trustbuster: bool; destroys : int }) with field name hash collisions can not be represented in Candid, and either the host language’s type system needs to be Candid aware now (as is the case of Motoko), or serialization/deserialization will fail at runtime, or odd bugs can happen.

(More discussion of this issue).


Tuples

Many languages have a built-in notion of a tuple type (e.g. (Int, Bool)), but Candid does not have such a type. The only first class product type is records.

This means that tuples have to be encoded as records somehow. Conveniently(?) record fields are just numbers after all, so the type (Int, Bool) would be mapped to the type

record { 0 : int; 1 : bool }

So tuples can be expressed. But from my experience implementing the type mappings for Motoko and Haskell this is causing headaches. To get a good experience when importing from Candid, the tools have to try to reconstruct which records may have been tuples originally, and turn them into tuples.

The main argument for the status quo is that Candid types should be canonical, and there should not be more than one product type, and records are fine, and there needs to be no anonymous product type. But that has never quite resonated with me, given the practical reality of tuple types in many host languages.

Argument sequences

Did I say that Candid does not have tuple types? Turns out it does, sort of. There is no first class anonymous product, but since functions take sequences of arguments and results, there is a tuple type right there:

func foo : (bool, int) -> (int, bool)

Again, I found that ergonomic interaction with host languages becomes relatively unwieldy by requiring functions to take and return sequences of values. This is especially true for languages where functions take one argument value or return one result type (the latter being very common). Here, return sequences of length one are turned into that type directly, longer argument sequences turn into the host language’s tuple type, and nullary argument sequences turn into the idiomatic unit type. But this means that the types (int, bool) -> () and (record { 0: int, 1: bool}) -> () may be mapped to the same host language type, which causes problems when you hope to encode all necessary Candid type information in the host language.

Another oddity with argument and result sequences is that you can give names to the entries, e.g. write

func hello : (last_name : text, first_name : text) -> ()

but these names are completely ignored! So while this looks like you can, for example, add new optional arguments in the middle, such as

func hello : (last_name : text, middle_name : opt text, first_name : text) -> ()

without breaking clients, this does not have the effect you think it has and will likely break.

My suggestion is to never put names on function arguments and result values in Candid interfaces, and for anything that might be extended with new fields or where you want to name the arguments, use a single record type as the only argument:

func hello : (record { last_name : text; first_name : text}) -> ()

This allows you to add and remove arguments more easily and reliably.

Type “shorthands”

The Candid specification defines a system of types, and then adds a number of “syntactic short-hands”. For example, if you write blob in a Candid type description, it ought to mean the same as vec nat8.

My qualm with that is that it doesn’t always mean the same. A Candid type description is interpreted by a number of, say, “consumers”. Two such consumers are part of the Candid specification:

  • The specification that defines the wire format for that type
  • The upgrading (subtyping) rules

But there are more! For every host language, there is some mapping from Candid types to host language types, and also generic tools like Candid UI are consumers of the type algebra. If these were to take the Candid specification as gospel, they would be forced to treat blob and vec nat8 the same, but that would be quite unergonomic and might cause performance regressions (most languages try to map blob to some compact binary data type, while vec t tends to turn into some form of array structure).

So they need to be pragmatic and treat blob and vec nat8 differently. But then, for all practical purposes, blob is not just a short-hand of vec nat8. They are different types that just happen to have the same wire representations and subtyping relations.

This affects not just blob, but also “tuples” (record { blob; int; bool }) and field “names”, as discussed above.

The value text format

For a while after defining Candid, the only implementation was in Motoko, and all the plumbing was automatic, so there was never a need for users to explicitly handle Candid values, as all values were Motoko values. Still, for debugging and testing and such things, we eventually needed a way to print out Candid values, so the text format was defined (“To enable convenient debugging, the following grammar specifies a text format for values…”).

But soon the dfx tool learned to talk to canisters, and now users needed to enter Candid value on the command line, possibly even when talking to canisters for which the interface was not known to dfx. And, sure enough, the textual interface defined in the Candid spec was used.

Unfortunately, since it was not designed for that use case, it is rather unwieldy:

  • It is quite verbose. You have to write record { … }, not just { … }. Vectors are written vec { …; …} instead of some conventional syntax like […, …]. Variants are written as variant { error = "…"} with braces that don’t add any value here, and something like #error "…" might have worked as well.

    With a bit more care, a more concise and ergonomic syntax might have been possible.

  • It wasn’t designed to be sufficient to create a Candid value from it. If you write 5 it’s unclear whether that’s a nat or an int16 or what (and all of these have different wire representations). Type annotations were later added, but are relatively unwieldy, and don’t cover all cases (e.g. a service reference with a recursive type cannot be represented in the textual format at the moment).

  • Not really the fault of the textual format, but some useful information about the types is not reflected in the type description that’s part of the wire format. In particular not the field names, and whether a value was intended to be binary data (blob) or a list of small numbers (vec nat8), so pretty-printing such values requires guesswork. The Haskell library even tries to brute-force the hash to guess the field name, if it is short or in an English word list!

In hindsight I think it was too optimistic to assume that correct static type information is always available, and instead of actively trying to discourage dynamic use, Candid might be better if we had taken these (unavoidable?) use cases into account.

Custom wire format

At the beginning of this post, I have a “Candid is …” list. The list is relatively long, and the actual wire format is just one bullet point. Yes, defining a wire format that works isn’t rocket science, and it was easiest to just make one up. But since most of the interesting meat of Candid is in other aspects (subtyping rules, host language integration), I wonder if it would have been better to use an existing, generic wire format, such as CBOR, and build Candid as a layer on top.

This would give us plenty of tools and libraries to begin with. And maybe it would have reduced the barrier to entry for developers, which now led to the very strange situation that DFINITY advocates for the universal use of Candid on the Internet Computer, so that all services can smoothly interact, but two of the most important services on the Internet Computer (the registry and the ledger) use Protobuf as their primary interface format, with Candid interfaces missing or an afterthought.

Sideways Interface Evolution

This is not a quirk of Candid itself, but rather an idiom of how you can use Candid that emerged from our solution for record extensions and upgrades.

Consider our example from before, a service with interface

service { add_user : (record { login : text; name : text }) -> () }

where you want to add an age field, which should be a number.

The “official” way of doing that is to add that field with an optional type:

service { add_user : (record { login : text; name : text; age : opt nat }) -> () }

As explained above, this will not break old clients, as the decoder will treat a missing argument as null. So far so good.

But often when adding such a field you don’t want to bother new clients with the fact that this age was, at some point in the past, not there yet. And you can do that! The trick is to distinguish between the interface you publish and the interface you implement. You can (for example in your documentation) state that the interface is

service { add_user : (record { login : text; name : text; age : nat }) -> () }

which is not a subtype of the old type, but it is the interface you want new clients to work with. And then your implementation uses the type with opt nat. Calls from old clients will come through as null, and calls from new clients will come through as opt 42.

We can see this idiom used in the Management Canister of the Internet Computer. The current documented interface only mentions a controllers : vec principal field in the settings, but the implementation still can handle both the old controller : principal and the new controllers field.

It’s probably advisable to let your CI system check that new versions of your service continue to implement all published interfaces, including past interfaces. But as long as the actual implementation’s interface is a subtype of all interfaces ever published, this works fine.

This pattern is related to when your service implements, say, http_request (so its implemented interface is a subtype of that common interface), but does not include that method in the published documentation (because clients of your service don’t need to call it).

Self-describing Services

As you have noticed, Candid was very much designed assuming that all parties always have the service type of services they want to interact with. But the Candid specification does not define how one can obtain the interface of a given service, and there isn’t really an official way to do that on the Internet Computer.

That is unfortunate, because many interesting features depend on that: Such as writing import C "ic:7e6iv-biaaa-aaaaf-aaada-cai" in your Motoko program, and having its type right there. Or tools that allow you to interact with any canister right there.

One reason why we don’t really have that feature yet is because of disagreements about how dynamic that feature should be. Should you be able to just ask the canister for its interface (and allow the canister to vary the response, for example if it can change its functionality over time, even without changing the actual wasm code)? Or is the interface a static property of the code, and one should be able to query the system for that data, without the canister’s active involvement? Or, completely different, should interfaces be distributed out of band, maybe together with API documentation, or in some canister registry somewhere else?

I always leaned towards the first of these options, but not convincingly enough. The second option requires system assistance, so more components to change, more teams to be involved that maybe intrinsically don’t care a lot about this feature. And the third might have emerged as the community matures and builds such infrastructure, but that did not happen yet.

In the end I sneaked an implementation of the first into Motoko, arguing that even if we don’t know yet how this feature will be implemented eventually, we all want the feature to exist somehow, and we really really want to unblock all the interesting applications it enables (e.g. Candid UI). That’s why every Motoko canister, and some Rust canisters too, implement a method

__get_candid_interface_tmp_hack : () -> (text)

that one can use to get the Candid interface file.

The name was chosen to signal that this may not be the final interface, but like all good provisional solutions, it may last longer than intended. If that’s the case, I’m not sorry.

This concludes my blog post series about Candid, for now. If you want to know more, feel free to post your question on the DFINITY developer forum, and I’ll probably answer.

Planet DebianJonathan Dowland: GHC rewrite rules

The Glasgow Haskell Compiler (GHC) has support for user-supplied rewrite rules, which are applied during one of the compiler optimisation stages. An example rule is

      "streamFilter fuse" forall f g xs.
          streamFilter f (streamFilter g xs) = streamFilter (f.g) xs

I spent some time today looking at these more closely.

In order for rewrite rules to be applied, optimisation needs to be enabled. This conflicts with interactive use of GHC, so you can't explore these things in GHCi. I think rewrite rules are enabled by default (with optimisation), but you can ask for them explicitly. When investigating these it's also useful to ask ghc to always recompile, otherwise you have to change the source or manually remove .hi or .o files (etc.) between invocations. A starting set of command-line options to use, then, is

-O -fenable-rewrite-rules -fforce-recomp

GHC runs several compilation stages, and the source program is transformed into several different languages or language dialects as it goes. Before the phase where rewrite rules are applied, some other optimisations take place, and the source gets desugared. You can see the results of the desugaring by passing the argument -ddump-ds. Here's an example program

main = print (unStream (streamFilter (>3) (streamFilter (<10)
    (mkStream [0..20]))))

And here's what it looks like after the first pass optimisation and desugaring:

  = print
      ($fShow[] $fShowInteger)
            (let {
               ds = 3 } in
             \ ds -> > $fOrdInteger ds ds)
               (let {
                  ds = 10 } in
                \ ds -> < $fOrdInteger ds ds)
               (mkStream (enumFromTo $fEnumInteger 0 20)))))

(Note: I used -dsuppress-all and -dsuppress-uniques to improve the clarity of the above output. See Suppressing unwanted information for further details).

Those short-hand sections ((>3) and (<10)) in the input program are expanded to something quite a lot longer. Out of curiosity I tried it again with plain lambdas, not sections, and the result was smaller

  = print
      ($fShow[] $fShowInteger)
            (\ x -> > $fOrdInteger x 3)
               (\ x -> < $fOrdInteger x 10)
               (mkStream (enumFromTo $fEnumInteger 0 20)))))

Rewrite rules happen after this. Once they're done (and several other passes), the program is translated into an intermediate representation called Tiny Core. This language faintly resembles the input Haskell. GHC will output the Tiny Core program if you supply the argument -ddump-simpl. Here's (most) of the program in Tiny Core (I've substituted some constants for clarity):

main  = hPutStr' stdout main1 True
main1 = $fShowInteger_$cshowList (catMaybes1 (main_go 0)) []
  = \ x ->
      case gtInteger# x 20 of {
        DEFAULT ->
          case ltInteger# x 10 of {
            DEFAULT -> main_go (plusInteger x 1);
            1# ->
              case gtInteger# x 3 of {
                DEFAULT -> main_go (plusInteger x 1);
                1# -> : (Just x) (main_go (plusInteger x 1))
        1# -> []

After Tiny Core, GHC continues to translate the program into other forms (including STG, CMM, ASM) and you can ask GHC to dump those representations too, but this is all downstream from the rewrites so not relevant to them.

The rewrite rule at the top of this blog post is never applied: It doesn't get a chance, because the function it operates on (streamFilter) is inlined by GHC. GHC can detect this in some circumstances (-Winline-rule-shadowing). You instruct GHC to report on which rules fired with -ddump-rule-firings and can see before-and-after snippets of Tiny Core for each rule applied with -ddump-rule-rewrites.

I played around with adding {-# NOINLINE functionName #-} pragmas to disable inlining various functions to try and provoke a situation where the above rule could match, but it was a losing battle: GHC's built-in optimisations are just too good. But, it's also moot: the outcome I want (the filters to be fused) is happening, it's just the built-in rewrite rules are achieving it, once striot's functions have been inlined away.

Cryptogram ProtonMail Now Keeps IP Logs

After being compelled by a Swiss court to monitor IP logs for a particular user, ProtonMail no longer claims that “we do not keep any IP logs.”

EDITED TO ADD (9/14): This seems to be more complicated. ProtonMail is not yet saying that they keep logs. Their privacy policy still states that they do not keep logs except in certain circumstances, and outlines those circumstances. And ProtonMail’s warrant canary has an interesting list of data orders they have received from various authorities, whether they complied, and why or why not.

Cryptogram Tracking People by their MAC Addresses

Yet another article on the privacy risks of static MAC addresses and always-on Bluetooth connections. This one is about wireless headphones.

The good news is that product vendors are fixing this:

Several of the headphones which could be tracked over time are for sale in electronics stores, but according to two of the manufacturers NRK have spoken to, these models are being phased out.

“The products in your line-up, Elite Active 65t, Elite 65e and Evolve 75e, will be going out of production before long and newer versions have already been launched with randomized MAC addresses. We have a lot of focus on privacy by design and we continuously work with the available security measures on the market,” head of PR at Jabra, Claus Fonnesbech says.

“To run Bluetooth Classic we, and all other vendors, are required to have static addresses and you will find that in older products,” Fonnesbech says.

Jens Bjørnkjær Gamborg, head of communications at Bang & Olufsen, says that “this is products that were launched several years ago.”

“All products launched after 2019 randomize their MAC-addresses on a frequent basis as it has become the market standard to do so,” Gamborg says.

EDITED TO ADD (9/13): It’s not enough to randomly change MAC addresses. Any other plaintext identifiers need to be changed at the same time.

Planet Debian: Debian Social Team: Some site updates

We’re in the process of upgrading to Debian 11 (bullseye). If you come across any issues, feel free to raise them on the -social IRC channel on oftc (also accessible via Matrix) and we’ll look into it as soon as we have a chance.

We’re aware that live streaming isn’t currently working on our PeerTube instance, we have had some issues with this relatively new feature before, although at the moment we seem to be affected by upstream issue #4390.

Worse Than Failure: CodeSOD: Wise About Bits

The HP3000 was the first mini-computer that supported time-sharing. It launched in 1972, and HP didn't end-of-life it until 2010, and there are still third-party vendors supporting them.

Leonora's submission is some code she encountered a number of years ago, but not as many as you might think. It's in Pascal, and it's important to note that this version of Pascal definitely has bitwise operators. But, if you're programming on a 40 year old minicomputer, maybe you couldn't do an Internet search, and maybe Steve from down the hall had bogarted the one manual HP provided for the computer so you can't look it up because "he's using it for reference."

So you end up doing your best with no idea:

FUNCTION BITON(A, B : INTEGER) : BOOLEAN;
VAR C : INTEGER;
BEGIN
  CASE A OF
    15 : C:=1;     14 : C:=2;     13 : C:=4;     12 : C:=8;
    11 : C:=16;    10 : C:=32;     9 : C:=64;     8 : C:=128;
     7 : C:=256;    6 : C:=512;    5 : C:=1024;   4 : C:=2048;
     3 : C:=4096;   2 : C:=8192;   1 : C:=16384;  0 : C:=32768;
    OTHERWISE BITON:=FALSE;
  END;
  IF ((B DIV C) MOD 2) = 1 THEN
    BITON:=TRUE
  ELSE
    BITON:=FALSE;
END;

One thing I appreciate about Pascal code is that, even if you haven't touched the language since 1998 when you were the last class in your high school to learn Pascal instead of C++, it's incredibly readable. This method is very clear about what it does: it maps bits to powers of two, and then checks via division and modulus if that bit is set. It's, in some ways, the most obvious way to implement a bit check if you didn't know anything about bitwise operations in your language.
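The arithmetic is easy to verify outside Pascal. Here is a quick Python sketch of the same check (note the reversed bit numbering: the CASE table maps A=0 to 32768, so bit 0 is the most significant of the 16 bits):

```python
def biton(a: int, b: int) -> bool:
    """Check bit a of the 16-bit word b, using BITON's numbering:
    bit 0 is the most significant bit, bit 15 the least."""
    c = 1 << (15 - a)             # the power of two from the CASE table
    return ((b // c) % 2) == 1    # division and modulus, as in the Pascal

def biton_bitwise(a: int, b: int) -> bool:
    """The same check with a shift and a mask."""
    return ((b >> (15 - a)) & 1) == 1

# The two agree for every bit of every 16-bit value.
assert all(biton(a, b) == biton_bitwise(a, b)
           for a in range(16) for b in (0, 1, 0x8000, 0xBEEF, 0xFFFF))
```

Dividing by a power of two shifts the bit of interest into the ones place, and the modulus then reads it off, which is exactly what the shift-and-mask does in one step.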

This BITON function lays the foundation for an entire family of bitwise operations, re-implemented from scratch. You want to set bits individually? You already know how it works.

FUNCTION SETBITON(A, B : INTEGER) : INTEGER;
VAR C : INTEGER;
BEGIN
  CASE A OF
    15 : C:=1;     14 : C:=2;     13 : C:=4;     12 : C:=8;
    11 : C:=16;    10 : C:=32;     9 : C:=64;     8 : C:=128;
     7 : C:=256;    6 : C:=512;    5 : C:=1024;   4 : C:=2048;
     3 : C:=4096;   2 : C:=8192;   1 : C:=16384;  0 : C:=32768;
    OTHERWISE C:=0;
  END;
  IF NOT BITON(A,B) THEN
    SETBITON:=B + C
  ELSE
    SETBITON:=B;
END;

FUNCTION SETBITOFF(A, B : INTEGER) : INTEGER;
VAR C : INTEGER;
BEGIN
  CASE A OF
    15 : C:=1;     14 : C:=2;     13 : C:=4;     12 : C:=8;
    11 : C:=16;    10 : C:=32;     9 : C:=64;     8 : C:=128;
     7 : C:=256;    6 : C:=512;    5 : C:=1024;   4 : C:=2048;
     3 : C:=4096;   2 : C:=8192;   1 : C:=16384;  0 : C:=32768;
    OTHERWISE C:=0;
  END;
  IF BITON(A,B) THEN
    SETBITOFF:=B - C
  ELSE
    SETBITOFF:=B;
END;

The same pattern, complete with 16 bits hand-coded, plus a check: depending on whether the bit is currently set, we add or subtract the corresponding value to flip it.
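The add-or-subtract trick translates directly; a minimal Python equivalent (the guard is what keeps a repeated call from corrupting neighbouring bits):

```python
def setbiton(a: int, b: int) -> int:
    """Turn bit a (0 = most significant of 16) on in word b by addition."""
    c = 1 << (15 - a)
    if (b // c) % 2 == 1:   # bit already on: nothing to do
        return b
    return b + c            # adding the power of two sets it

def setbitoff(a: int, b: int) -> int:
    """Turn bit a off in word b by subtraction."""
    c = 1 << (15 - a)
    if (b // c) % 2 == 1:
        return b - c        # subtracting the power of two clears it
    return b
```

Without the guard, adding 1 to a word whose low bit is already set would carry into the next bit up, which is why the Pascal calls BITON first.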

And, if you want an and operator, why, that's just going to call those methods:


Again, I want to stress, while different Pascal compilers have different implementations of bitwise operations, this version absolutely had bitwise operations. As do most of them. The key difference is whether or not it supports C-style <</>> operators, or uses a more Pascal-flavored shl/shr.

This developer obviously didn't know about them, and didn't know about the right way to do exponentiation either, which would make those giant case statements go away. Not as well as bitshifting, but still, away. But they clearly did their best- the code is readable, clear, and obvious in how it functions. Just, y'know, not as obvious as using the built-ins.


Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.


Planet Debian: John Goerzen: Facebook’s Blocking Decisions Are Deliberate – Including Their Censorship of Mastodon

In the aftermath of my report of Facebook censoring mentions of the open-source social network Mastodon, there was a lot of conversation about whether or not this was deliberate.

That conversation seemed to focus on whether a human specifically added Mastodon to some sort of blacklist. But that’s not even relevant.

OF COURSE it was deliberate, because of how Facebook tunes its algorithm.

Facebook’s algorithm is tuned for Facebook’s profit. That means it’s tuned to maximize the time people spend on the site — engagement. In other words, it is tuned to keep your attention on Facebook.

Why do you think there is so much junk on Facebook? So much anti-vax, anti-science, conspiracy nonsense from the likes of Breitbart? It’s not because their algorithm is incapable of surfacing the good content; we already know it can because they temporarily pivoted it shortly after the last US election. They intentionally undid its efforts to make high-quality news sources more prominent — twice.

Facebook has said that certain anti-vax disinformation posts violate its policies. It has an extremely cumbersome way to report them, but it can be done and I have. These reports are met with either silence or a response claiming the content didn’t violate their guidelines.

So what algorithm is it that allows Breitbart to not just be seen but to thrive on the platform, lets anti-vax disinformation survive even a human review, while banning mentions of Mastodon?

One that is working exactly as intended.

We may think this algorithm is busted. Clearly, Facebook does not. If their goal is to maximize profit by maximizing engagement, the algorithm is working exactly as designed.

I don’t know if Mastodon was specifically blacklisted by a human. Nor is it relevant.

Facebook’s choice to tolerate and promote the things that service its greed for engagement and money, even if they are the lowest dregs of the web, is deliberate. It is no accident that Breitbart does better than Mastodon on Facebook. After all, which of these does its algorithm detect keeps people engaged on Facebook itself more?

Facebook removes the ban

You can see all the screenshots of the censorship in my original post. Now, Facebook has reversed course:

We also don’t know if this reversal was human or algorithmic, but that still is beside the point.

The point is, Facebook intentionally chooses to surface and promote those things that drive engagement, regardless of quality.

Clearly many have wondered if tens of thousands of people have died unnecessary deaths over COVID as a result. One whistleblower says “I have blood on my hands” and President Biden said “they’re killing people” before “walking back his comments slightly”. I’m not equipped to verify those statements. But what do they think is going to happen if they prioritize engagement over quality? Rainbows and happiness?

Planet Debian: Bits from Debian: New Debian Developers and Maintainers (July and August 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Aloïs Micard (creekorful)
  • Sophie Brun (sophieb)

The following contributors were added as Debian Maintainers in the last two months:

  • Douglas Andrew Torrance
  • Marcel Fourné
  • Marcos Talau
  • Sebastian Geiger


Planet Debian: Joachim Breitner: A Candid explainer: Language integration

This is the fourth post in a series about the interface description language Candid.

Now for something completely different: How does Candid interact with the various host languages, i.e. the actual programming languages that you write your services and clients in?

There are two facets to that question:

  1. How is the Candid interface represented inside the language?

    Some languages with rich type systems can express all relevant information about a Candid method or service within its own type system, and then the concrete serialization/deserialization code can be derived from that type (e.g. using type classes in Haskell, Traits in Rust, or built into the compiler in Motoko).

    Other languages have a less rich type system (e.g. C), no type-driven generic programming or are simply dynamically typed. In these cases, the Candid type has to be either transformed into specific code for that service by some external tool (e.g. JavaScript and TypeScript) or the Candid description has to be parsed and interpreted at runtime.

    Either approach will give rise to a type mapping between the Candid types and the host language types. Developers will likely have to know which types correspond to which, which is why the Candid manual’s section on types explicitly lists that.

  2. How is the Candid interface description produced and consumed?

    This is maybe the even more important question: what comes first, the code written in the host language, or the Candid description? There are multiple ways to tackle this, all of which have their merits, so let’s look at some typical approaches.

Generating candid from the host language

In many cases you don’t care too much about the interface of your service, and you just want to write the functionality, and get the interface definition for free. This is what you get when you write Motoko services, where the compiler calculates the Candid interface based on the Motoko types of your actor methods, and the build tool (dfx) puts that Candid file where it needs to go. You can thus develop services without ever writing or even looking at Candid type definitions.

The Candid library for Rust supports that mode as well, although you have to add some macros to your program to use it.

A downside of this model is that you have only indirect control over the generated Candid. Since it is type-driven, whenever there is more than one sensible Candid type for a given host language type, the translation tool has to make a choice, and if that does not suit you, that can be a problem.

In the case of Motoko we were able to co-design it with Candid, and their type systems are similar enough that this works well in practice. We have a specification of Candid-Motoko-type-mappings, and the type export from Motoko to Candid is almost surjective. (Almost, because of Candid’s float32 type, which Motoko simply does not have, and because of service types with method names that are not valid Motoko identifiers.)

Checking host language against Candid

The above works well when you (as the service developer) get to define the service’s interface as you go. But sometimes you want to develop a service that adheres to a given Candid interface. For example, in order to respond to HTTP requests in an Internet Computer service, you should provide a method http_request that implements this interface (simplified):

type HeaderField = record { text; text; };

type HttpRequest = record {
  method: text;
  url: text;
  headers: vec HeaderField;
  body: blob;
};

type HttpResponse = record {
  status_code: nat16;
  headers: vec HeaderField;
  body: blob;
};

service : {
  http_request: (request: HttpRequest) -> (HttpResponse) query;
};
Here, a suitable mode of operation is to generate the Candid description of the service that you built, and then compare it against this expected interface with a tool that implements the Candid subtyping relation. This would then complain if what you wrote was not compatible with the above interface. The didc check tool that comes with the Rust library can do that. If your service has to implement multiple such pre-defined interfaces, its actual interface will end up being a subtype of each of these interfaces.
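Returning to the type-mapping facet from above: as a purely illustrative mapping (hypothetical, not taken from any official binding), the records in this interface could be mirrored in a host language like so:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Assumed mapping, for illustration only:
# text -> str, blob -> bytes, nat16 -> int, vec T -> List[T],
# record -> dataclass (a tuple for the unnamed-field HeaderField record).
HeaderField = Tuple[str, str]

@dataclass
class HttpRequest:
    method: str
    url: str
    headers: List[HeaderField]
    body: bytes

@dataclass
class HttpResponse:
    status_code: int   # Candid nat16: 0..65535
    headers: List[HeaderField]
    body: bytes
```

A real binding would also generate the serialization code behind these types; the point here is only that each Candid type needs a well-defined host-language counterpart.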

Importing Candid

If you already have a Candid file, in particular if you are writing a client that wants to talk to an existing service, you can also import that Candid file into your language. This is a very common mode of operation for such interface descriptions languages (e.g. Protobuf). The details depend a lot on the host language, though:

  • In Motoko, you can import typed actor references: Just write import C "canister:foo" where foo is the name of another canister in your project, and your build tool (dfx) will pass the Candid interface of foo to the Motoko compiler, which then translates that Candid service type into a Motoko actor type for you to use.

    The mapping from Candid types to Motoko types is specified in that document as well, via the function i(…).

    The vision was always that you can import a reference to any canister on the Internet Computer this way (import C "ic:7e6iv-biaaa-aaaaf-aaada-cai"), and the build tool would fetch the interface automatically, but that has not been implemented yet, partly because of disagreement about how canisters expose their interface (see below). The Motoko compiler is ready, though, as described in its build tool interface specification.

    What is still missing in Motoko is a way to import a Candid type alone (without a concrete canister reference), to use the types somewhere (in function arguments, or to assert the type of the full canister).

  • The Rust library does not support importing Candid, as far as I know. If you write a service client in Rust, you have to know which Rust types map to the right Candid types, and manually get that right.

  • For JavaScript and TypeScript, the didc tool that comes with the Rust library can take a Candid file and produce JS or TS code that gives you an object representing the service with properly typed methods that you can easily interact with.

  • With the Haskell Candid library you can import Candid inline or from a file, and it uses metaprogramming (Template Haskell) to generate suitable Haskell types for you, which you can use to encode and decode values. This is for example used in the test suite of the Internet Identity, which executes that service in a simulated Internet Computer environment.

Generic and dynamic use

Finally, it’s worth mentioning that Candid can be used generically and dynamically:

  • Since Canisters can indicate their interface, a website that enumerates Canisters can read that interface, and provide a fully automatically generated UI for them. For example, you can not only see the Candid interface of the Internet Identity canister, but actually interact with it!

  • Similarly, during development of a service, it is very useful to be able to interact with it before you have written your actual front-end. The Candid UI tool provides that functionality, and can be used during local development or on-chain. It is also integrated into the Internet Computer playground.

  • Command line tools like dfx allow you to make calls to services, even without having their actual Candid interface description around. However, such dynamic use was never part of the original design of Candid, so it is a bit rough around the edges – the textual format is a bit unwieldy, you need to get the types right, and field names in the responses may be missing.

Do you want to know why field names are missing then? Then check out the next and final post of this series, where I discuss a number of Candid’s quirks.

Cryptogram Designing Contact-Tracing Apps

Susan Landau wrote an essay on the privacy, efficacy, and equity of contact-tracing smartphone apps.

Also see her excellent book on the topic.

Worse Than Failure: CodeSOD: A Coded Escape

When evaluating a new development tool or framework, the first thing the smart developer does is check out the vendor documentation. Those docs will either be your best friend, or your worst enemy. A great product with bad documentation will provide nothing but frustration, but even a mediocre product with clean and clear documentation at least lets you get useful stuff done.

Stuart Longland's company has already picked the product, unfortunately, so Stuart's left combing through the documentation. This particular product exposes a web-socket based API, and thus includes JavaScript samples in its documentation. Obviously, one could use any language they like to talk to web-sockets, but examples are nice…

webSocket.onopen = function () {
    let session_id = "SESSION-ID";    // Received from API create session request
    let network_id = "NETWORK-ID";    // Received from API create session request
    let reference = "REFERENCE-ID";   // Reference handle created by client to link messages to relevant callbacks
    let wire = 1;                     // Connection ID, incremental value to identify messages of network/connection
    let type = 1;                     // Client type, use value 1 (FRONTEND)
    const OPEN = JSON.stringify({
        "method": "open",
        "id": network_id,
        "session": session_id,
        "ref": reference,
        "wire": wire,
        "type": type
    });
    this.send(decodeURIComponent(escape(OPEN)));
};

This all seems mostly reasonable until you get to the last line:

this.send(decodeURIComponent(escape(OPEN)));
escape is a deprecated method similar to encodeURIComponent. So this encodes our JSON string, then decodes it, then sends it over the web-socket. Which seems… like a useless step. And it probably is- this is probably a developer's brain-fart that happened to end up in the code-base, and then later on, ended up in the documentation.

But it might not be. Because escape and encodeURIComponent are not the same method. They don't escape characters the same way, because one of them is unicode compliant, and the other isn't.

escape('äöü');             // "%E4%F6%FC"
encodeURIComponent('äöü'); // "%C3%A4%C3%B6%C3%BC"

And that unicode awareness goes for the inverse method, too.

unescape(escape('äöü'));                       // outputs "äöü"
decodeURIComponent(encodeURIComponent('äöü')); // outputs "äöü"
unescape(encodeURIComponent('äöü'));           // outputs "Ã¤Ã¶Ã¼" (mojibake)
decodeURIComponent(escape('äöü'));             // throws a URIError exception

Now, it's unlikely that this particular JSON message contains any characters that would cause any problems- REFERENCE-ID, SESSION-ID and the others are probably just long hex-strings. So in real-world use, this probably would never cause an actual problem.

But in the situations where it does, this would create a surprising and probably annoying to debug glitch.
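The mismatch is easy to reproduce outside the browser. A Python sketch using urllib.parse as an analogy for the two JavaScript functions (not the vendor's code): escape() percent-encodes Latin-1 code points, while encodeURIComponent() percent-encodes UTF-8 bytes.

```python
from urllib.parse import quote, unquote

s = "äöü"
legacy = quote(s, encoding="latin-1")   # "%E4%F6%FC", like escape()
modern = quote(s, encoding="utf-8")     # "%C3%A4%C3%B6%C3%BC", like encodeURIComponent()

# A matched pair round-trips cleanly...
assert unquote(legacy, encoding="latin-1") == s
assert unquote(modern, encoding="utf-8") == s
# ...but decoding the Latin-1 form as UTF-8 mangles the text, which is
# the same failure mode as decodeURIComponent(escape(...)).
assert unquote(legacy, encoding="utf-8") != s
```

As long as the payload is pure ASCII the two encodings coincide, which is exactly why this kind of bug hides in documentation samples until someone sends a non-ASCII character.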

Character encodings are hard, but good documentation is even harder.



Planet Debian: Russ Allbery: DocKnot 5.00

This release is the culmination of a project that I've been wanting to do for two years, but nearly all the work was done in the past week. That experience made me rethink some of my skepticism, but I'll get to that part of the story later.

In March of 1999, I got tired of writing HTML by hand and wrote a small program called spin that implemented a macro language that translated into HTML. This makes it one of the oldest programs for which I have a continuous development history, predating podlators by three months. I think only News::Gateway (now very dormant) and Term::ANSIColor (still under active development but very stable) are older, as long as I'm not counting orphaned packages like newsyslog.

I've used spin continuously ever since. It's grown features and an ecosystem of somewhat hackish scripts to do web publishing things I've wanted over the years: journal entries like this one, book reviews, a simple gallery (with some now-unfortunate decisions about maximum image size), RSS feeds, and translation of lots of different input files into HTML. But the core program itself, in all those years, has been one single Perl script written mostly in my Perl coding style from the early 2000s before I read Perl Best Practices.

My web site is long overdue for an overhaul. Just to name a couple of obvious problems, it looks like trash on mobile browsers, and I'm using URL syntax from the early days of the web that, while it prompts some nostalgia for tildes, means all the URLs are annoyingly long and embed useless information such as the fact each page is written in HTML. Its internals also use a lot of ad hoc microformats (a bit of RFC 2822 here, a text-based format with significant indentation there, a weird space-separated database) and are supported by programs that extract meaning from human-written pages and perform automated updates to them rather than having a clear separation between structure and data.

This will be a very large project, but it's the sort of quixotic personal project that I enjoy. Maintaining my own idiosyncratic static site generator is almost certainly not an efficient use of my time compared to, say, converting everything to Hugo. But I have 3,428 pages (currently) written in the thread macro language, plus numerous customizations that cater to my personal taste and interests, and, most importantly, I like having a highly customized system that I know exactly how to automate.

The blocker has been that I didn't want to work on spin as it existed. It badly needed a structural overhaul and modernization, and even more badly needed a test suite, since every release involved tedious manual testing by poring over diffs between generations of the web site. And that was enough work to be intimidating, so I kept putting it off.

I've separately been vaguely aware that I have been spending too much time reading Twitter (specifically) and the news (in general). It would be one thing if I were taking in that information to do something productive about it, but I haven't been. It's just doomscrolling. I've been thinking about taking a break for a while but it kept not sticking, so I decided to make a concerted effort this week.

It took about four days to stop wanting to check Twitter and forcing myself to go do something else productive or at least play a game instead. Then I managed to get started on my giant refactoring project, and holy shit, Twitter has been bad for my attention span! I haven't been able to sustain this level of concentration for hours at a time in years. Twitter's not the only thing to blame (there are a few other stressors that I've fixed in the past couple of years), but it's obviously a huge part.

Anyway, this long personal ramble is prelude to the first release of DocKnot that includes my static site generator. This is not yet the full tooling from my old web tools page; specifically, it's missing faq2html, cl2xhtml, and cvs2xhtml. (faq2html will get similar modernization treatment, cvs2xhtml will probably be rewritten in Perl since I have some old, obsolete scripts that may live in CVS forever, and I may retire cl2xhtml since I've stopped using the GNU ChangeLog format entirely.) But DocKnot now contains the core of my site generation system, including the thread macro language, POD conversion (by way of Pod::Thread), and RSS feeds.

Will anyone else ever use this? I have no idea; realistically, probably not. If you were starting from scratch, I'm sure you'd be better off with one of the larger and more mature static site generators that's not the idiosyncratic personal project of one individual. It is packaged for Debian because it's part of the tool chain for generating files (specifically that are included in every package I maintain, and thus is part of the transitive closure of Debian main, but I'm not sure anyone will install it from there for any other purpose. But for once making something for someone else isn't the point. This is my quirky, individual way to maintain web sites that originated in an older era of the web and that I plan to keep up-to-date (I'm long overdue to figure out what they did to HTML after abandoning the XHTML approach) because it brings me joy to do things this way.

In addition to adding the static site generator, this release also has the regular sorts of bug fixes and minor improvements: better formatting of software pages for software that's packaged for Debian, not assuming every package has a TODO file, and ignoring Autoconf 2.71 backup files when generating distribution tarballs.

You can get the latest version of DocKnot from CPAN as App-DocKnot, or from its distribution page. I know I haven't yet updated my web tools page to reflect this move, or changed the URL in the footer of all of my pages. This transition will be a process over the next few months and will probably prompt several more minor releases.

Cryptogram Security Risks of Relying on a Single Smartphone

Isracard used a single cell phone to communicate with credit card clients, and receive documents via WhatsApp. An employee stole the phone. He reformatted the phone and replaced the SIM card, which was oddly the best possible outcome, given the circumstances. Using the data to steal money would have been much worse.

Here’s a link to an archived version.

Planet Debian: Vincent Bernat: Short feedback on Cisco pyATS and Genie Parser

Cisco pyATS is a framework for network automation and testing. It includes, among other things, an open-source multi-vendor set of parsers and models, Genie Parser. It features 2700 parsers for various commands over many network OSes. On paper, this seems like a great tool!

>>> from genie.conf.base import Device
>>> device = Device("router", os="iosxr")
>>> # Hack to parse outputs without connecting to a device
>>> device.custom.setdefault("abstraction", {})["order"] = ["os", "platform"]
>>> cmd = "show route ipv4 unicast"
>>> output = """
... Tue Oct 29 21:29:10.924 UTC
... O [110/2] via, 5d23h, GigabitEthernet0/0/0/0.110
... """
>>> device.parse(cmd, output=output)
{'vrf': {'default': {'address_family': {'ipv4': {'routes': {'': {'route': '',
       'active': True,
       'route_preference': 110,
       'metric': 2,
       'source_protocol': 'ospf',
       'source_protocol_codes': 'O',
       'next_hop': {'next_hop_list': {1: {'index': 1,
          'next_hop': '',
          'outgoing_interface': 'GigabitEthernet0/0/0/0.110',
          'updated': '5d23h'}}}}}}}}}}

First disappointment: pyATS is closed-source, with some exceptions. This is quite annoying if you run into issues outside Genie Parser. For example, although pyATS uses the ssh command, it cannot leverage my ssh_config file: pyATS resolves hostnames before providing them to ssh. There is no plan to open source pyATS. 🤨

Then, Genie Parser has two problems:

  1. The data models used are dependent on the vendor and OS, despite the documentation saying otherwise. For example, the data model used for IPv4 interfaces is different between NX-OS and IOS-XR.
  2. The parsers rely on line-by-line regular expressions to extract data and some Python code as glue. This is fragile and may break silently.

To illustrate the second point, let’s assume the output of show ipv4 vrf all interface is:

  Loopback10 is Up, ipv4 protocol is Up
    Vrf is default (vrfid 0x60000000)
    Internet protocol processing disabled
  Loopback30 is Up, ipv4 protocol is Down [VRF in FWD reference]
    Vrf is ran (vrfid 0x0)
    Internet address is
    MTU is 1500 (1500 is available to IP)
    Helper address is not set
    Directed broadcast forwarding is disabled
    Outgoing access list is not set
    Inbound  common access list is not set, access list is not set
    Proxy ARP is disabled
    ICMP redirects are never sent
    ICMP unreachables are always sent
    ICMP mask replies are never sent
    Table Id is 0x0

Because the regular expression to parse an interface name does not expect the extra data after the interface state, Genie Parser ignores the line starting the definition of Loopback30 and parses the output to this structure:1

  "Loopback10": {
    "int_status": "up",
    "oper_status": "up",
    "vrf": "ran",
    "vrf_id": "0x0",
    "ipv4": {
      "": {
        "ip": "",
        "prefix_length": "32"
      },
      "mtu": 1500,
      "mtu_available": 1500,
      "broadcast_forwarding": "disabled",
      "proxy_arp": "disabled",
      "icmp_redirects": "never sent",
      "icmp_unreachables": "always sent",
      "icmp_replies": "never sent",
      "table_id": "0x0"
    }
  }
While this bug is simple to fix, this is an uphill battle. Any existing or future slight variation in the output of a command could trigger another similar undetected bug, despite the extended test coverage. I have reported and fixed several other silent parsing errors: #516, #529, and #530. A more robust alternative would have been to use TextFSM and to trigger a warning when some output is not recognized, like Batfish, a configuration analysis tool, does.
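A toy illustration of that "warn instead of silently skipping" idea (my own sketch, not Batfish's or Genie's actual code): the extra [VRF in FWD reference] text breaks the pattern, and instead of being dropped on the floor it produces a warning.

```python
import re
import warnings

# One known pattern; a real parser would have many.
INTERFACE_RE = re.compile(r"^(\S+) is (\S+), ipv4 protocol is (\S+)$")

def parse_interfaces(output: str) -> dict:
    result = {}
    for line in output.splitlines():
        line = line.rstrip()
        if not line or line.startswith(" "):
            continue                      # detail lines: out of scope here
        m = INTERFACE_RE.match(line)
        if m:
            name, status, proto = m.groups()
            result[name] = {"int_status": status.lower(),
                            "oper_status": proto.lower()}
        else:
            # The crucial difference: surface what we could not parse,
            # instead of mis-attributing the detail lines that follow.
            warnings.warn(f"unrecognized line: {line!r}")
    return result
```

Run against output like the transcript above, the Loopback10 line parses cleanly while the decorated Loopback30 line triggers a warning, so the gap is visible rather than silent.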

In the future, we should rely on YANG for data extraction, but it is currently not widely supported. SNMP is still a valid possibility but much information is not accessible through this protocol. In the meantime, I would advise you to only use Genie Parser with caution.

  1. As an exercise, the astute reader is asked to write the code to extract the IPv4 from this structure. ↩︎
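For what it's worth, one answer to that exercise (a sketch against the structure shown above; the address values below are hypothetical, since the real ones were elided here):

```python
def extract_ipv4(parsed: dict) -> list:
    """Collect (interface, ip, prefix_length) triples from each
    interface's 'ipv4' sub-dict, skipping scalar settings like
    'mtu' that Genie stores at the same nesting level."""
    addresses = []
    for iface, data in parsed.items():
        for value in data.get("ipv4", {}).values():
            if isinstance(value, dict) and "ip" in value:
                addresses.append((iface, value["ip"], value["prefix_length"]))
    return addresses
```

The awkward part, and arguably the point of the exercise, is that addresses and scalar settings share one dict, so you have to distinguish them by the shape of the value.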

David Brin: Demolition of America's moral high ground

In an article smuggled out of the gulag, Alexei Navalny makes - more powerfully - a point I have shouted for decades... that corruption is the very soul of oligarchy and the only way to fight it is with light. And if that light sears out the cheating, most of our other problems will be fixable by both bright and average humans... citizens... negotiating, cooperating, competing based on facts and goodwill. With the devils of our nature held in check by the only thing that ever worked...


Don't listen to me? Fine. Heed a hero.

Alas, the opposite trend is the one with momentum, favoring rationalizing monsters. Take this piece of superficially good news -- "Murdoch empire's News Corp. pledges to support Zero Emissions by 2030!"

Those of you who see this as a miraculous turnaround, don't. They always do this. "We NEVER said cars don't cause smog! We NEVER said tobacco is good for you! We NEVER said burning rivers are okay! We NEVER said civil rights was a commie plot! We NEVER said Vietnam and Iraq and Afghanistan quagmires will turn out well! We NEVER said the Ozone Crisis was fake!..."
... plus two dozen more examples of convenient amnesia that I list in Polemical Judo.
Now this 'turnaround?' As fires and storms and droughts make Denialism untenable even for raving lunatics (and the real masters with real estate holdings in Siberia), many neurally deprived confeds, cornered by facts, swerve into End Times Doomerism. No, we will not forget.

== More about Adam Smith... the real genius, not the caricature ==

I've long held that we must rescue the fellow who might (along with Hume and Locke) be called the First Liberal, in that he wanted liberated markets for labor, products, services and capital so that all might prosper... and if all do not prosper, then something is wrong with the markets. 

Smith denounced the one, central failure mode that has gone wrong 99% of the time, in most cultures: cheating by those with power and wealth, suppressing fair competition so their brats would inherit privileges they never earned.

6000 years show clearly that cheating oligarchs, kings, priests, lords, owners are far more devastating to flat-fair-creative markets than "socialism" ever was. (Especially if you recognize the USSR was just another standard Russian Czardom with commissar-boyars and a repainted theology.) Whereas Smith observes that “the freer and more general the competition,” the greater will be “the advantage to the public.”

Here, in Religion and the Rise of Capitalism, the rediscovery of Smith is taken further, by arguing that his moral stance was also, in interesting ways, theological.

== Now about that Moral High Ground ==

The demolition of the USA's moral high ground - now aided by the most indignantly self-righteous generation of youths since the Boomers - is THE top priority of our enemies.

Let me be clear, this is pragmatically devastating! As I have pointed out six times in trips to speak at DC agencies, it's a calamity, not because we don't need to re-evaluate and re-examine faults and crimes - (we do!) - but because that moral high ground is a top strategic asset in our fight to champion a free and open future when moral matters will finally actually count.

In those agency talks, I point out one of the top things that helped us survive the plots and schemes of the then-and-future KGB, whose superior spycraft and easy pickings in our open society left us at a huge disadvantage. What was it that leveled the playing field for us?

Defectors. They'd come in yearly, monthly... and once a decade, some defector would bring in a cornucopia of valuable intel. Beyond question, former KGB agent Vlad Putin made it his absolute top priority to ensure that this will not happen during Round Two. He has done it by systematically eliminating the three things we offered would-be defectors --

- Safety....
- Good prospects in the West... and...
- The Moral High Ground.

Safety was the first thing Putin openly and garishly attacked, with deliberately detectable/attributable thuggery, in order to terrify. The other two lures have been undermined with equal systematicity, by fifth columns inside the U.S. and the West, especially as Trumpism revealed what America can be like, when our dark, confederate side wins one of the phases of our 250-year ongoing civil war. It has enabled Putin and other rivals to sneer "Who are YOU to lecture us about anything?"...

... And fools on the left nod in agreement, yowling how awful we are, inherently... when a quarter of the world's people would drop everything to come here, if they could. 

(Dig it, dopes. You want the narrative to be "we're improvable and America's past, imperfect progress shows it can happen!" But the sanctimoniously destructive impulse is to yowl "We're horrible and irredeemable!")

But then, we win that high ground back with events like the Olympics, showing what an opportunity rainbow we are. And self-crit -- even when unfairly excessive -- is evidence of real moral strength.

== Evidence? ==

This article from The Atlantic, "History Will Judge the Complicit," by Anne Applebaum, discusses how such a fifth column develops in a nation: collaborators willing, even eager, to assist foreign enemies against democracy and the rule of law. (I addressed much of this in Polemical Judo.)

"...many of those who became ideological collaborators were landowners and aristocrats, “the cream of the top of the civil service, of the armed forces, of the business community,” people who perceived themselves as part of a natural ruling class that had been unfairly deprived of power under the left-wing governments of France in the 1930s. Equally motivated to collaborate were their polar opposites, the “social misfits and political deviants” who would, in the normal course of events, never have made successful careers of any kind. What brought these groups together was a common conclusion that, whatever they had thought about Germany before June 1940, their political and personal futures would now be improved by aligning themselves with the occupiers."

== And now… from crazy town … ==

Turkey’s leader met two E.U. presidents. The woman among them didn’t get a chair.

And here’s an interesting look at the early fifties, showing an amazing overlap between UFO stuff and the plague of McCarthyism. And it’s stunning how similar the meme plagues were, to today. “On any given night, viewers of the highest-rated show in the history of cable news, Fox News Channel’s Tucker Carlson Tonight, might find themselves treated to its namesake host discussing flying saucers and space aliens alongside election conspiracies and GOP talking points. Praise for former President Donald Trump, excuses for those involved in the Capitol assault, and criticism of racial and sexual minorities can sit seamlessly beside occasional interviews featuring UFO “experts” pleading conspiracy. Recent segments found Carlson speculating that an art installation in Utah was the work of space aliens and interviewing a reporter from the Washington Examiner about whether UFOs can also travel underwater like submarines.”

I do not like these Putin shills

I do not like indignant shrills

From Foxite liars aiming barbs

At every elite except thars.

Lecture us when mafia lords... moguls and commie hordes

Petro sheiks and inheritance brats

And despots and their apparats

Don't rule the GOP with help

From uncle toms who on-cue yelp!

Your all-out war on expert castes

has one goal, lordship that lasts!

And finally

...showing that we aren't the only ones... Dolphins chew on toxic puffer fish and pass them around, as stoners do with a joint.

Planet Debian: Russ Allbery: Pod::Thread 3.00

This Perl module converts POD documents to thread, the macro language I use for my static site builder (about which, more in another software release coming up shortly). This release fixes a whole ton of long-standing problems.

  • Remove support for parsing CVS $Id$ strings and emitting \id commands.
  • Stop wrapping output text.
  • Fix internal section links.
  • Fix conversion of nested =over blocks.
  • Fix conversion of =over blocks without an =item.
  • Fix links to headings containing markup.
  • Fix metacharacter escaping in headings, contents, and navbar.
  • Fix handling of =for thread and =begin thread.
  • Fix non-breaking spaces in navbar section names.
  • Fix internal links whose anchors are wrapped.
  • Output the document before erroring out if POD errors were seen.
  • Suppress \signature if the input document was empty.
  • Add bugtracker, homepage, and repository package metadata.

In brief, a lot of the POD implementation was previously done by chasing bugs rather than testing comprehensively, as reflected by the 65% code coverage in the previous release. The test suite now achieves about 95% code coverage (most of the rest is obscure error handling around encoding) and cleans up a bunch of long-standing problems with internal links.

I had previously punted entirely on section links containing markup, and as a result the section links shown in the navigation bar or table of contents were missing the markup and headings containing thread metacharacters were mangled beyond recognition. That was because I was trying to handle resolving links using regexes (after I got rid of the original two-pass approach that required a driver script). This release uses Text::Balanced instead, the same parsing module used by my static site generator, to solve the problem (mostly) correctly. (I think there may still be a few very obscure edge cases, but I probably won't encounter them.)

The net result should hopefully be better conversion of my software documentation, including INN documentation, when I regenerate my site (which will be coming soon).

You can get the latest release from CPAN or from the Pod::Thread distribution page.


Planet Debian: John Goerzen: Facebook Is Censoring People For Mentioning Open-Source Social Network Mastodon

Update: Facebook has reversed itself over this censorship, but I maintain that whether the censorship was algorithmic or human, it was intentional either way. Details in my new post.

Last November, I made a brief post to Facebook about Mastodon. Mastodon is an open-source, decentralized social network, all about user control instead of corporate control. I’ve blogged about Mastodon and the dangers of Facebook before, but rarely mentioned Mastodon on Facebook itself.

Today, I received this notice that Facebook had censored my post about Mastodon:

Facebook censoring a post

Wonder with me for a second what this one-off post I composed myself might have done to trip Facebook’s filter... and it is probably obvious that what tripped the filter was the mention of an open-source competitor, even though Facebook is vastly larger than Mastodon. I have been a member of Facebook for many years, and this is the one and only time anything like that has happened.

Why they decided today to take down that post – I have no idea.

In case you wondered about their sincerity towards stamping out misinformation — which, on the rare occasions they do something about, they “deprioritize” rather than remove as they did here — this probably answers your question. Or, are they sincere about thinking they’re such a force for good by “connecting the world’s people?” Well, only so long as the world’s people don’t say nice things about alternatives to Facebook, I guess.

“Well,” you might be wondering, “Why not appeal, since they obviously made a mistake?” Because, of course, you can’t:

Indeed I did tick a box that said I disagreed, but there was no place to ask why or to question their action.

So what would cause a non-controversial post from a long-time Facebook member that has never had anything like this happen, to disappear?

Greed. Also fear.

Maybe I’d feel sorry for them if they weren’t acting like a bully.

Edit: There are reports from several others on Mastodon of the same happening this week. I am trying to gather more information. It sounds like it may be happening on Twitter as well.

Edit 2: And here are some other reports from both Facebook and Twitter. Definitely not just me.

Edit 3: While trying to reply to someone on Facebook who was trying to defend Facebook, I mentioned Mastodon and got this:

Anyone else seeing it?

Edit 4: It is far more than just me, clearly. More reports are out there; for instance, this one and that one.

Sam Varghese: When will 9/11 mastermind get his day in court?

Twenty years after the attacks on the World Trade Center in New York, the mastermind of the attack, Khalid Shaikh Mohammed, has still not been put on trial despite having been arrested in March 2003.

KSM, as he is known, was picked up by the Pakistani authorities in Rawalpindi. Just prior to his arrest, the other main actor in the planning of the attacks, Ramzi Binalshibh, was picked up, again in Pakistan, this time in Karachi.

A report says KSM, Ramzi and three others appeared in court on Tuesday, 7 September. KSM was reported to be confident, talking to his lawyers and defying the judge’s instruction to wear a mask.

AFP quoted Tore Hamming, a Danish expert on militant Islam, as saying: “He is definitely considered as a legendary figure and one of the masterminds behind 9/11.

“That said, it is not like KSM is often discussed, but occasionally he features in written and visual productions.”

KSM has claimed to be behind some 30 incidents apart from the Trade Centre attack, including bombings in Bali in 2002 and Kenya in 1998. He also claims to have killed the American Jewish journalist Daniel Pearl in Pakistan in 2002.

A bid by his lawyers in 2017 to enter a guilty plea in exchange for a life sentence appears to have come to naught due to opposition from the American Government.

Apart from KSM and Ramzi, three others — Walid bin Attash, Ammar al-Baluchi and Mustafa al Hawsawi — also appeared at Tuesday’s hearing in the courtroom of Guantanamo Bay Naval Base’s “Camp Justice”.

The hearing was adjourned hours ahead of the time it was supposed to conclude, due to controversy over whether the military judge hearing the case was qualified to be in charge. The US has set up a military commission to try those arrested for the attacks in order to deny them the basic rights that are afforded to people tried under the regular American system.

Apart from this anomaly, there has been no effort by any media organisation to find out why a large number of Saudis were allowed to leave the US soon after the attacks, even though there was a blanket ban on any flights taking off in the US.

American airspace was shut down after the attacks on September 11.

Fifteen of the 19 hijackers were Saudis, two were from the UAE and one each from Egypt and Lebanon. Despite this, there has been no attempt by Washington to ask the Saudis for any explanation about the involvement of Saudi citizens in the plot.

Finally, though there has been a huge volume of material — both words and video — about the incident, only one book has been written exposing the actual plot and the people behind it.

That tome, Masterminds of Terror [cover above], was written by Yosri Fouda of Al Jazeera and Nick Fielding of The Times in 2003. It is a remarkable work, not even 200 pages, but encapsulating a massive amount of correct information.

One wonders when an American writer will sit down to write something substantial about the incident, one that has changed the US in a rather significant way.

Sam Varghese: War on terror has nothing to do with the rise of Trump

As the US marks the 20th anniversary of the attacks on the World Trade Centre, a theory, that can only be classified as unadulterated BS, has been advanced: the event led to the invasion of Afghanistan and Iraq which in turn led to the emergence of Donald Trump.

Such a narrative sits nicely with Democrats: the election of the worst US president, a Republican, was caused by the actions of another Republican president, George W. Bush.

Part of this logic — if you can call it that — is that Trump’s opposition to the wars launched by Bush put paid to the chances of his brother, Jeb, gaining the Republican nomination.

There is, however, no evidence to show that, had Jeb Bush been the Republican nominee, he would have beaten Hillary Clinton, as Trump did. But that does not matter; had Jeb made the grade, then Trump would not have been in the picture.

This theory could not be more flawed. The emergence of Trump was due to just one thing: after decades of being cheated by both parties, Americans were willing to give anyone who did not represent the establishment a chance. And Trump painted himself as a maverick right from the start.

Both Democrats and Republicans are in thrall to big corporations from whom they get money to contest elections. The interests of the average American come a poor second.

Under both establishment Republicans and Democrat presidents, the average American has grown poorer and seen more and more of his/her rights taken away.

One example is the minimum wage which has not been changed since 2009 when it was US$7.25 an hour. Meanwhile, the wealth of the top 1% has grown by many orders of magnitude.

Again, the reason Joe Biden defeated Trump in 2020 was that he promised to fix up some basics in the country: the lack of proper medical care, the minimum wage and the massive student loans.

Biden was helped by the fact that Trump showed all his talk of helping the common man was so much bulldust, giving the wealthiest Americans a massive tax cut during his four years in the White House.

But after coming to office, Biden has done nothing, breaking all these promises. Trump has said he will run again, but even if he does not, any candidate who has his blessings will win the presidency in 2024.

In the meantime, Democrat supporters can keep spinning any narrative they want. We all need our little delusions to live in this world.

Sam Varghese: Killing people remotely: the fallout of the US war on terror

National Bird is a disturbing documentary. It isn’t new, having been made in 2016, but it outlines in stark detail the issues that are part and parcel of the drone program which the US has used to kill hundreds, if not thousands, of people in Afghanistan, Pakistan, Iraq and a number of other countries.

The use of remote killing was even seen recently after a bomb went off at Kabul Airport following the US withdrawal from Afghanistan. There were boasts that two people responsible for the blast had been killed by a drone – only for the truth to emerge later.

And that was that the people killed were in no way connected to the blast. Using faulty intelligence and an over-quick finger, America had pulled the trigger again and killed innocents.

The number of people killed by drone strikes shot up enormously during the eight years that Barack Obama was in office. The man who spoke a lot about hope and change also killed people left, right and centre, without so much as blinking an eye.

National Bird covers the tales of three drone operators in the US; they are part of the kill chain, with other US officials involved in pulling the trigger. In one case, that of a woman, it has led to post-traumatic stress disorder, which has been officially acknowledged, making her eligible for financial aid. This woman has never set foot in a battlefield; she has been monitoring drone footage at a desk.

A third drone operator, a man, was on the run at the time the film was made, because he had revealed details of the operation which are, as always, supposed to stay secret.

The producer of the documentary, Sonia Kennebeck, is a remarkable woman. In an interview, she tells of the difficulties involved in making National Bird, the precautions she had to take and the legal niceties she had to observe to avoid getting hit with lawsuits. Her story is an inspiring one.

As many countries mark the 20th anniversary of the terrorist attacks on the US in September 2001, one must always bear in mind that the fallout of that day has, in many ways, ended up being worse than the day itself.


Krebs on Security: KrebsOnSecurity Hit By Huge New IoT Botnet “Meris”

On Thursday evening, KrebsOnSecurity was the subject of a rather massive (and mercifully brief) distributed denial-of-service (DDoS) attack. The assault came from “Meris,” the same new botnet behind record-shattering attacks against Russian search giant Yandex this week and internet infrastructure firm Cloudflare earlier this summer.

Cloudflare recently wrote about its attack, which clocked in at 17.2 million bogus requests-per-second. To put that in perspective, Cloudflare serves over 25 million HTTP requests per second on average.

In its Aug. 19 writeup, Cloudflare neglected to assign a name to the botnet behind the attack. But on Thursday DDoS protection firm Qrator Labs identified the culprit — “Meris” — a new monster that first emerged at the end of June 2021.

Qrator says Meris has launched even bigger attacks since: A titanic and ongoing DDoS that hit Russian Internet search giant Yandex last week is estimated to have been launched by roughly 250,000 malware-infected devices globally, sending 21.8 million bogus requests-per-second.

While last night’s Meris attack on this site was far smaller than the recent Cloudflare DDoS, it was far larger than the Mirai DDoS attack in 2016 that held KrebsOnSecurity offline for nearly four days. The traffic deluge from Thursday’s attack on this site was more than four times what Mirai threw at this site five years ago. This latest attack involved more than two million requests-per-second. By comparison, the 2016 Mirai DDoS generated approximately 450,000 requests-per-second.
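The "more than four times" claim checks out against the rounded figures quoted above; a quick back-of-the-envelope calculation:

```shell
# Meris vs. Mirai peak request rates against this site, using the rounded
# figures from the post (2 million rps vs. roughly 450,000 rps).
awk 'BEGIN { printf "%.1f\n", 2000000 / 450000 }'   # prints 4.4
```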

According to Qrator, which is working with Yandex on combating the attack, Meris appears to be made up of Internet routers produced by MikroTik. Qrator says the United States is home to the largest number of MikroTik routers that are potentially vulnerable to compromise by Meris, with more than 42 percent of the world’s Internet-connected MikroTik systems, followed by China at 18.9 percent and a long tail of one- and two-percent countries.

The darker areas indicate larger concentrations of potentially vulnerable MikroTik routers. Qrator says there are about 328,000 MikroTik devices currently responding to requests from the Internet. Image: Qrator.

It’s not immediately clear which security vulnerabilities led to these estimated 250,000 MikroTik routers getting hacked by Meris.

“The spectrum of RouterOS versions we see across this botnet varies from years old to recent,” the company wrote. “The largest share belongs to the version of firmware previous to the current stable one.”

Qrator’s breakdown of Meris-infected MikroTik devices by operating system version.

It’s fitting that Meris would rear its head on the five-year anniversary of the emergence of Mirai, an Internet of Things (IoT) botnet strain that was engineered to out-compete all other IoT botnet strains at the time. Mirai was extremely successful at crowding out this competition, and quickly grew to infect tens of thousands of IoT devices made by dozens of manufacturers.

And then its co-authors decided to leak the Mirai source code, which led to the proliferation of dozens of Mirai variants, many of which continue to operate today.

The biggest contributor to the IoT botnet problem — a plethora of companies white-labeling IoT devices that were never designed with security in mind and are often shipped to the customer in default-insecure states — hasn’t changed much, mainly because these devices tend to be far cheaper than more secure alternatives.

The good news is that over the past five years, large Internet infrastructure companies like Akamai, Cloudflare and Google (which protects this site with its Project Shield initiative) have heavily invested in ramping up their ability to withstand these outsized attacks [full disclosure: Akamai is an advertiser on this site].

More importantly, the Internet community at large has gotten better at putting their heads together to fight DDoS attacks, by disrupting the infrastructure abused by these enormous IoT botnets, said Richard Clayton, director of Cambridge University’s Cybercrime Centre.

“It would be fair to say we’re currently concerned about a couple of botnets which are larger than we have seen for some time,” Clayton said. “But equally, you never know they may peter out. There are a lot of people who spend their time trying to make sure these things are hard to keep stable. So there are people out there defending us all.”

Planet Debian: Patrick Matthäi: kdenlive / mlt status in Debian Bullseye

Debian 11 (Bullseye) comes with the mlt framework 6.24.0 and kdenlive 20.12.3. Unfortunately it was already too late (freeze) to build mlt with OpenCV features enabled; these are required for the motion tracker features, which are therefore still missing in the pure stable release.

So I have just uploaded, migrated and backported mlt 6.26.1 along with kdenlive 21.04.3 to bullseye-backports with OpenCV support. :-)

If you want to install it, just add the backports repository and install the new version with:

apt install kdenlive/bullseye-backports
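For completeness, a sketch of the full sequence on a stock Debian 11 system (the sources.list.d path and the deb.debian.org mirror are the usual defaults, not something this post prescribes; adjust to taste):

```shell
# Add the bullseye-backports repository, then install kdenlive from it.
# Run the apt lines as root or via sudo.
BACKPORTS_LINE="deb http://deb.debian.org/debian bullseye-backports main"
echo "$BACKPORTS_LINE" | sudo tee /etc/apt/sources.list.d/bullseye-backports.list
sudo apt update
sudo apt install kdenlive/bullseye-backports
# equivalently: sudo apt install -t bullseye-backports kdenlive
```

The `pkg/suite` form and the `-t suite` form both pull the package (and its mlt dependency) from backports while leaving the rest of the system on stable.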

Planet Debian: Enrico Zini: A nightmare of confcalls and microphones

I had this nightmare where I had a very, very important confcall.

I joined with Chrome. Chrome said Failed to access your microphone - Cannot use microphone for an unknown reason. Could not start audio source.

I joined with Firefox. Firefox chose Monitor of Built-in Audio Analog Stereo as a microphone, and did not let me change it. Not in the browser, not in pavucontrol.

I joined with the browser on my phone, and the webpage said This meeting needs to use your microphone and camera. Select *Allow* when your browser asks for permissions. But the question never came.

I could hear people talking. I had very important things to say. I tried typing them in the chat window, but they weren't seeing it. The meeting ended. I was on the verge of tears.

Tell me, Mr. Anderson, what good is a phone call when you are unable to speak?

Since this nightmare happened for real, including the bit about tears in the end, let's see that it doesn't happen again. I should now have three working systems, which hopefully won't all break again all at the same time.

Fixing Chrome

I can reproduce this reliably, on Bullseye's standard Chromium 90.0.4430.212-1, just launched on an empty profile, no extensions.

The webpage has camera and microphone allowed. Chrome doesn't show up in the recording tab of pulseaudio. Nothing on Chrome's stdout/stderr.

JavaScript console has:

Logger.js:154 2021-09-10Txx:xx:xx.xxxZ [features/base/tracks] Failed to create local tracks
DOMException: Could not start audio source

I found the answer here:

I had the similar problem once with chromium. i could solve it by switching in preferences->microphone-> from "default" to "intern analog stereo".

Opening the little popup next to the microphone/mute button allows choosing other microphones, which work. Only "Same as system (Default)" does not work.
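A possible PulseAudio-level workaround (an assumption on my part, not something I have verified against this exact Chromium bug): make the system default source a real microphone rather than a monitor, so that Chrome's "Same as system (Default)" entry resolves to an actual input device. This sketch uses pactl from pulseaudio-utils:

```shell
# Print the name (second column of "pactl list short sources" output)
# of the first source that is not a ".monitor" loopback.
pick_real_source() {
    awk '$2 !~ /\.monitor$/ { print $2; exit }'
}

real_src=$(pactl list short sources 2>/dev/null | pick_real_source)
if [ -n "$real_src" ]; then
    pactl set-default-source "$real_src"
fi
```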

Fixing Firefox

I have firefox-esr 78.13.0esr-1~deb11u1. In Jitsi, microphone selection is disabled on the toolbar and in the settings menu. In pavucontrol, changing the recording device for Firefox has no effect. If for some reason the wrong microphone got chosen, those are not ways of fixing it.

What I found works is to click on the camera permission icon, remove microphone permission, then reload the page. At that point Firefox will ask for permission again, and that microphone selection seems to work.

Relevant bugs: on Jitsi and on Firefox. Since this is well known (once you find the relevant issues), I'd have appreciated Jitsi at least showing a link to an explanation of workarounds on Firefox, instead of just disabling microphone selection.

Fixing Jitsi on the phone side

I really don't want to preemptively give camera and microphone permissions to my phone browser. I noticed that there's the Jitsi app on F-Droid and much as I hate to use an app when a website would work, at least in this case it's a way to keep the permission sets separate, so I installed that.

Fixing pavucontrol?

I tried to find out why I can't change input device for FireFox on pavucontrol. I only managed to find an Ask Ubuntu question with no answer and a Unix StackExchange question with no answer.

Worse Than Failure: Error'd: Swordfish

Despite literally predating paper, passcodes and secret handshakes continue to perplex programmers, actors, and artists alike.

For our first example, auteur Andy stages a spare play in three acts.
Nagg: Hello I'm not the account owner and shouldn't be logged in to this account. Can you help me?
Nell: Sure, here are the owner's credit card details. Please use those to say that you are the account owner.



Fellow arts enthusiast Paul shares some Surrealism "snatched from the official Greek government site for emergency communications."



Web comics fan Geoff G. alludes to a classic (we all know which one) with his "Now I have two problems..."



While an anonymous gourmand mumbles, mouth full, "Unicode Tofu is my secrets ingredient!"



And oldworlder Jan declares this bit of user experience needs no words. "Who needs a description when you have clear symbols?" It's like a watercolor about a novel.



Finally, Dima R. brings down the curtain with this little mind blower. Quoth he: "It's secured by shibboleths." Or encrapted by wishes.





Planet Debian: Dirk Eddelbuettel: RcppSMC 0.2.5 on CRAN: Build Update

A week after the 0.2.5 release bringing the recent Google Summer of Code work for RcppSMC to CRAN, we have a minor bug-fix release consisting, essentially, of one line. “Everybody’s favourite OS and toolchain” did not know what to make of pow(), and I seemingly failed to test there, so shame on me. But now all is good thanks to proper use of std::pow().

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

This release is summarized below.

Changes in RcppSMC version 0.2.5 (2021-09-09)

  • Compilation under Solaris is aided via std::pow use (Dirk in #65 fixing #64)

Courtesy of my CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram: Friday Squid Blogging: Possible Evidence of Squid Paternal Care

Researchers have found possible evidence of paternal care among bigfin reef squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet Debian: Debian Social Team: Matrix Synapse updated and new plumbed IRC rooms

Matrix Synapse was updated to 1.40.0; during the upgrade the server was upgraded to Bullseye.

The following Matrix rooms were plumbed to IRC:

  • debian-edu
  • skolelinux
  • debian-hams
  • debian-in

Planet Debian: Bits from Debian: DebConf21 online closes

DebConf21 group photo - click to enlarge

On Saturday 28 August 2021, the annual Debian Developers and Contributors Conference came to a close.

DebConf21 has been held online for the second time, due to the coronavirus disease (COVID-19) pandemic.

All of the sessions have been streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms.

With 740 registered attendees from more than 15 different countries and a total of over 70 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf21 was a large success.

The setup created for previous online events, involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad and a web-based frontend for Voctomix, was improved and used successfully for DebConf21. All components of the video infrastructure are free software, and configured through the Video Team's public ansible repository.

The DebConf21 schedule included a wide variety of events, grouped in several tracks:

  • Introduction to Free Software and Debian,
  • Packaging, policy, and Debian infrastructure,
  • Systems administration, automation and orchestration,
  • Cloud and containers,
  • Security,
  • Community, diversity, local outreach and social context,
  • Internationalization, Localization and Accessibility,
  • Embedded and Kernel,
  • Debian Blends and Debian derived distributions,
  • Debian in Arts and Science
  • and others.

The talks have been streamed using two rooms, and several of these activities have been held in different languages: Telugu, Portuguese, Malayalam, Kannada, Hindi, Marathi and English, allowing a more diverse audience to enjoy and participate.

Between talks, the video stream has been showing the usual sponsors on the loop, but also some additional clips including photos from previous DebConfs, fun facts about Debian and short shout-out videos sent by attendees to communicate with their Debian friends.

The Debian publicity team did the usual «live coverage» to encourage participation with micronews announcing the different events. The DebConf team also provided several mobile options to follow the schedule.

For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear in the following days.

The DebConf21 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf22 is planned to be held in Prizren, Kosovo, in July 2022.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Community team) were available to help participants get the best experience at the conference, and to find solutions to any issues that arose. See the page about the Code of Conduct on the DebConf21 website for more details.

Debian thanks the commitment of numerous sponsors to support DebConf21, particularly our Platinum Sponsors: Lenovo, Infomaniak, Roche, Amazon Web Services (AWS) and Google.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Roche

Roche is a major international pharmaceutical provider and research company dedicated to personalized healthcare. More than 100,000 employees worldwide work towards solving some of the greatest challenges for humanity using science and technology. Roche is strongly involved in publicly funded collaborative research projects with other industrial and academic partners and has supported DebConf since 2017.

About Amazon Web Services (AWS)

Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

Contact Information

For further information, please visit the DebConf21 web page at or send mail to

Worse Than FailureCodeSOD: Going Through Some Changes

Dave inherited a data management tool. It was an antique WinForms application that users would use to edit a whole pile of business specific data in one of those awkward "we implemented a tree structure on top of an RDBMS" patterns. As users made changes, their edits would get marked with a "pending" status, allowing them to be saved in the database and seen by other users, but with a clear "this isn't for real yet" signal.

One developer had a simple task: update a text box with the number of pending changes, and if it's non-zero, make the text red and boldfaced. This developer knew that some of these data access methods might not return any data, so they were very careful to "handle" exceptions.

int changes = 0;
try
{
    changes = DataNode.DataNodeDataSet(Convert.ToInt32(Status.PendingNew)).Tables[0].Rows.Count;
}
finally{}
try
{
        changes += DataVersion.GetVersionTable(Convert.ToInt32(Status.PendingNew)).Rows.Count;
}
finally{}
try
{
 changes += DataOrderVersion.DataOrderVersionDataSet(Convert.ToInt32(Status.PendingNew)).Tables[0].Rows.Count;
}
finally{}
if (changes > 0)
{
    Pending.Font.Bold = true;
    Pending.ForeColor = System.Drawing.Color.Red;
}
Pending.Text = changes.ToString();

The indentation is this way in the original source.

This… works. Almost. They set the changes variable to zero, then wrap all the attempts to access potentially null values in try blocks. In lieu of an empty catch, which I suspect their linter was set to complain about, they opted for an empty finally. Unfortunately, without that empty catch, any exception just gets tossed up the chain, meaning this doesn't work.
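For the record, an empty finally has the same semantics in Java as in C#; this little sketch (all names invented for illustration) shows the exception escaping anyway:

```java
public class FinallyDemo {
    // Stands in for a data access call that blows up instead of returning a count.
    static int pendingCount() {
        throw new IllegalStateException("no data");
    }

    static int changesOrZero() {
        int changes = 0;
        try {
            changes = pendingCount();
        } finally {
            // Empty finally: unlike an empty catch, it does NOT swallow the exception.
        }
        return changes; // never reached when pendingCount() throws
    }

    public static void main(String[] args) {
        try {
            changesOrZero();
            System.out.println("no exception");
        } catch (IllegalStateException e) {
            System.out.println("exception escaped: " + e.getMessage());
        }
    }
}
```

Run it and the exception escapes; only an empty catch would have made changesOrZero() actually return 0 on failure.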

But even if it worked, I hate it.

And then there's the seemingly random indentation. Visual Studio automatically corrects the indentation for you! You have to work hard to get it messed up this badly!

In any case, I definitely have some pending changes for this code.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Planet DebianDirk Eddelbuettel: RcppSimdJson 0.1.6 on CRAN: New Upstream 1.0.0 !!

The RcppSimdJson team is happy to share that a new version 0.1.6 arrived on CRAN earlier today. Its release coincides with release 1.0.0 of simdjson itself, which is included in this release too!

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it can parse gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk).

This version brings the new upstream release, thanks to a comprehensive pull request by Daniel Lemire. The short NEWS entry follows.

Changes in version 0.1.6 (2021-09-07)

  • The C++17 dependency was stated more clearly in the DESCRIPTION file (Dirk)

  • The simdjson version was updated to release 1.0.0 (Daniel Lemire in #70)

We should point out that the package still has a dependency on C++17 even though simdjson no longer does; some of our earlier wrapping code uses it, but this could be changed. If you, dear reader, would like to work on this please get in touch.

Courtesy of my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianThorsten Alteholz: My Debian Activities in August 2021

FTP master

Yeah, Bullseye is released, thanks a lot to everybody involved!

This month I accepted 242 and rejected 18 packages. The overall number of packages that got accepted was 253.

Debian LTS

This was my eighty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2738-1] c-ares security update for one CVE
  • [DLA 2746-1] scrollz security update for one CVE
  • [DLA 2747-1] ircii security update for one CVE
  • [DLA 2748-1] tnef security update for one CVE
  • [DLA 2749-1] gthumb security update for one CVE
  • [DLA 2752-1] squashfs-tools security update for one CVE
  • buster-pu for gthumb #993228
  • prepared debdiffs for squashfs-tools in Buster and Bullseye, which will result in DSA 4967
  • prepared debdiffs for btrbk in Buster and Bullseye

I also started to work on openssl and grilo, and had to process packages from NEW on security-master.

As the CVE of btrbk was later marked as no-dsa, an upload to stable and oldstable is needed now.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-eighth ELTS month.

During my allocated time I uploaded:

  • ELA-474-1 for c-ares
  • ELA-480-1 for squashfs-tools

I also started to work on openssl.

Last but not least I did some days of frontdesk duties.

Other stuff

This month I uploaded new upstream versions of:

On my never-ending golang challenge I again uploaded some packages, either to NEW or as source uploads.

Krebs on SecurityMicrosoft: Attackers Exploiting Windows Zero-Day Flaw

Microsoft Corp. warns that attackers are exploiting a previously unknown vulnerability in Windows 10 and many Windows Server versions to seize control over PCs when users open a malicious document or visit a booby-trapped website. There is currently no official patch for the flaw, but Microsoft has released recommendations for mitigating the threat.

According to a security advisory from Redmond, the security hole CVE-2021-40444 affects the “MSHTML” component of Internet Explorer (IE) on Windows 10 and many Windows Server versions. IE has been slowly abandoned in favor of more recent Windows browsers like Edge, but the same vulnerable component also is used by Microsoft Office applications for rendering web-based content.

“An attacker could craft a malicious ActiveX control to be used by a Microsoft Office document that hosts the browser rendering engine,” Microsoft wrote. “The attacker would then have to convince the user to open the malicious document. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.”

Microsoft has not yet released a patch for CVE-2021-40444, but says users can mitigate the threat from this flaw by disabling the installation of all ActiveX controls in IE. Microsoft says the vulnerability is currently being used in targeted attacks, although its advisory credits three different entities with reporting the flaw.

One of the researchers credited, EXPMON, said on Twitter that it had reproduced the attack on the latest Office 2019 / Office 365 on Windows 10.

“The exploit uses logical flaws so the exploitation is perfectly reliable (& dangerous),” EXPMON tweeted.

Windows users could see an official fix for the bug as soon as September 14, when Microsoft is slated to release its monthly “Patch Tuesday” bundle of security updates.

This year has been a tough one for Windows users and so-called “zero day” threats, which refers to vulnerabilities that are not patched by current versions of the software in question, and are being actively exploited to break into vulnerable computers.

Virtually every month in 2021 so far, Microsoft has been forced to respond to zero-day threats targeting huge swaths of its user base. In fact, by my count May was the only month so far this year that Microsoft didn’t release a patch to fix at least one zero-day attack in Windows or supported software.

Many of those zero-days involve older Microsoft technologies or those that have been retired, like IE11; Microsoft officially retired support for Microsoft Office 365 apps and services on IE11 last month. In July, Microsoft rushed out a fix for the PrintNightmare vulnerability that was present in every supported version of Windows, only to see the patch cause problems for a number of Windows users.

On June’s Patch Tuesday, Microsoft addressed six zero-day security holes. And of course in March, hundreds of thousands of organizations running Microsoft Exchange email servers found those systems compromised with backdoors thanks to four zero-day flaws in Exchange.

Planet DebianIan Jackson: Wanted: Rust sync web framework

tl;dr: Please recommend me a high-level Rust server-side web framework which is sync and does not plan to move to an async api.


Async Rust gives somewhat higher performance. But it is considerably less convenient and ergonomic than using threads for concurrency. Much work is ongoing to improve matters, but I doubt async Rust will ever be as easy to write as sync Rust.

"Even" sync multithreaded Rust is very fast (and light on memory use) compared to many things people write web apps in. The vast majority of web applications do not need the additional performance (which is typically a percentage, not a factor).
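To illustrate the ergonomics point, here is a minimal sketch (all names invented, std only, no real framework involved) of the thread-per-request style that sync Rust allows: plain blocking code, no .await, no executor.

```rust
use std::thread;

// A hypothetical sync handler: ordinary blocking code, no async machinery.
fn handle(path: &'static str) -> String {
    format!("HTTP/1.1 200 OK\r\n\r\nhello from {path}")
}

fn main() {
    // One OS thread per request; each thread runs straightforward sync code.
    let workers: Vec<_> = ["/", "/status"]
        .into_iter()
        .map(|path| thread::spawn(move || handle(path)))
        .collect();
    for w in workers {
        println!("{}", w.join().unwrap());
    }
}
```

A real sync framework would of course do the socket handling and routing for you; the point is that the handler body is just a function.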

So it is rather disappointing to find that all the review articles I read, and all the web framework authors, seem to have assumed that async is the inevitable future. There should be room for both sync and async. Please let universal use of async not be the inevitable future!


I would like a web framework that provides a sync API (something like Rocket 0.4's API would be ideal) and will remain sync. It should probably use (async) hyper underneath.

So far I have not found a single web framework that neither is already async nor suggests that its authors intend to move to an async API. Some review articles I found even excluded sync frameworks entirely!

Answers in the comments please :-).


Worse Than FailureCodeSOD: Columns of a Constant Length

Today's submitter goes by "[object Object]", which I appreciate the JavaScript gag even when they send us C code.

This particular bit of C code exists to help output data in fixed-width files. What you run into, with fixed-width files, is that it becomes very important to properly pad all your columns. It's not a difficult programming challenge, but it's also easy to make mistakes that cause hard-to-spot bugs. And given that most systems using fixed-width files are legacy systems with their own idiosyncrasies, things could easily go wrong.

Things go more wrong when you don't actually know the right way to pad strings. Thus this excerpt from constants.h.

#define LABEL_SIZE1 1
#define LABEL_SIZE2 2
#define LABEL_SIZE3 3
#define LABEL_SIZE4 4
#define LABEL_SIZE5 5
#define LABEL_SIZE6 6
#define LABEL_SIZE7 7
#define LABEL_SIZE8 8
#define LABEL_SIZE9 9
#define LABEL_SIZE10 10
#define LABEL_SIZE11 11
#define LABEL_SIZE12 12
#define LABEL_SIZE14 14
#define LABEL_SIZE15 15
#define LABEL_SIZE16 16
#define LABEL_SIZE21 21
#define LABEL_SIZE20 20
#define LABEL_SIZE23 23
#define LABEL_SIZE24 24
#define LABEL_SIZE25 25
#define LABEL_SIZE30 30
#define LABEL_SIZE31 31
#define LABEL_SIZE32 32
#define LABEL_SIZE33 33
#define LABEL_SIZE37 37
#define LABEL_SIZE39 39
#define LABEL_SIZE40 40
#define LABEL_SIZE43 43
#define LABEL_SIZE45 45
#define LABEL_SIZE48 48
#define LABEL_SIZE50 50
#define LABEL_SIZE53 53
#define LABEL_SIZE73 73
#define LABEL_SIZE80 80
#define LABEL_SIZE121 121
#define LABEL_SIZE100 100
#define LABEL_SIZE149 149
#define LABEL_SIZE150 150
#define LABEL_SIZE170 170
#define LABEL_SIZE175 175
#define LABEL_SIZE205 205
#define LABEL_SIZE208 208
#define LABEL_SIZE723 723
#define LABEL_SIZE725 725
#define LABEL_SIZE753 753
#define LABEL_SIZE825 825
#define LABEL_SIZE853 853
#define SPACE_1 " "
#define SPACE_2 "  "
#define SPACE_3 "   "
#define SPACE_4 "    "
#define SPACE_5 "     "
#define SPACE_8 "        "
#define SPACE_10 "          "
#define SPACE_11 "           "
#define SPACE_12 "            "
#define SPACE_13 "             "
#define SPACE_14 "              "
#define SPACE_15 "               "
#define SPACE_19 "                   "
#define SPACE_20 "                    "
#define SPACE_21 "                     "
#define SPACE_23 "                       "
#define SPACE_25 "                         "
#define SPACE_29 "                             "
#define SPACE_30 "                              "
#define SPACE_31 "                               "
#define SPACE_32 "                                "
#define SPACE_37 "                                     "
#define SPACE_39 "                                       "
#define SPACE_41 "                                         "
#define SPACE_53 "                                                     "
#define SPACE_57 "                                                         "
#define ZERO_1 "0"
#define ZERO_2 "00"
#define ZERO_3 "000"
#define ZERO_4 "0000"
#define ZERO_5 "00000"
#define ZERO_8 "00000000"
#define ZERO_10 "0000000000"
#define ZERO_11 "00000000000"
#define ZERO_14 "00000000000000"
#define ZERO_29 "00000000000000000000000000000"
#define NINE_11 "99999999999"
#define NINE_29 "99999999999999999999999999999"
// Min max values
#define MIN_MINUS1 -1
#define MIN_0 0
#define MIN_1 1
#define MIN_111 111
#define MAX_0 0
#define MAX_1 1
#define MAX_2 2
#define MAX_3 3
#define MAX_4 4
#define MAX_7 7
#define MAX_9 9
#define MAX_12 12
#define MAX_15 15
#define MAX_63 63
#define MAX_99 99
#define MAX_114 114
#define MAX_127 127
#define MAX_255 255
#define MAX_256 256
#define MAX_999 999
#define MAX_1023 1023
#define MAX_4095 4095
#define MAX_9999 9999
#define MAX_16383 16383
#define MAX_65535 65535
#define MAX_9_5 99999
#define MAX_9_6 999999
#define MAX_9_7 9999999
#define MAX_9_8 99999999
#define MAX_9_9 999999999
#define MAX_9_10 9999999999
#define MAX_9_11 99999999999

The full file is 500 lines long, all in this pattern. It's also delightfully meta: these constants are used for generating fixed-width files, and this file is itself a fixed-width file, thanks to the developer wearing their tab key out.



Cryptogram More Detail on the Juniper Hack and the NSA PRNG Backdoor

We knew the basics of this story, but it’s good to have more detail.

Here’s me in 2015 about this Juniper hack. Here’s me in 2007 on the NSA backdoor.

Planet DebianJoachim Breitner: A Candid explainer: Opt is special

This is the third post in a series about the interface description language Candid.

The record extension problem

Initially, the upgrade rules of Candid were a straightforward application of the canonical subtyping rules. This worked and was sound, but it forbade one very commonly requested use case: extending records in argument position.

The subtyping rule for records says that

   record { old_field : nat; new_field : int }
<: record { old_field : nat }

or, in words, that subtypes can have more fields than supertypes. This makes intuitive sense: A user of such a record value who only expects old_field to be there is not bothered if suddenly a new_field appears. But a user who expects new_field to be there is thrown off if it suddenly isn’t anymore. Therefore it is ok to extend the records returned by a service with new fields, but not to extend the records in your methods’ argument types.

But users want to extend their record types over time, also in argument position!

In fact, they often want to extend them in both positions. Consider a service with the following interface, where the CUser record appears both in argument and result position:

type CUser = record { login : text; name : text };
service C : {
  register_user : (CUser) -> ();
  get_user_data : (text) -> (CUser);
}

It seems quite natural to want to extend the record with a new field (say, the age of the user). But simply changing the definition of CUser to

type CUser = record { login : text; name : text; age : nat }

is not ok, because now register_user requires the age field, but old clients don’t provide it.

So how can we allow developers to make changes like this (because they really really want that), while keeping the soundness guarantees made by Candid (because we really really want that)? This question has bothered us for over two years, and we even created a meta issue that records the dozen approaches we considered, discussed and eventually ditched.

Dynamic subtyping checks in opt

I will spare you the history lesson, though, and explain the solution we eventually came up with.

In the example above it seems unreasonable to let the developer add a field age of type nat. Since there may be old clients around, the service unavoidably has to deal with records that don’t have an age field. If the code expects an age value, what should it do in that case?

But one can argue that changing the record type to

type CUser = record { login : text; name : text; age : opt nat }

could work: If the age field is present, use the value. If the field is absent, inject a null value during decoding.

In the first-order case, this rather obvious solution would work just fine, and we’d be done. But Candid aims to solve the higher order case, and I said things get tricky here, didn’t I?

Consider another, independent service D with the following interface:

type DUser = record { login : text; name : text };
service D : {
  on_user_added : (func (DUser) -> ()) -> ()
}

This service has a method that takes a method reference, presumably with the intention of calling it whenever a new user was added to service D. And because the types line up nicely, we can compose these two services, by once calling D.on_user_added(C.register_user), so that from now on D calls C.register_user(…). These kind of service mesh-ups are central to the vision of the Internet Computer!

But what happens if the services now evolve their types in different ways, e.g. towards

type CUser = record { login : text; name : text; age : opt nat }


type DUser = record { login : text; name : text; age : opt variant { under_age; adult }}

Individually, these are type changes that we want to allow. But now the call from D to C may transmit a value of record { …; age = opt variant { under_age } } when C expects a natural number! This is precisely what we want to prevent by introducing types, and it seems we have lost soundness.

The best solution we could find is to make the opt type somewhat special, and apply these extra rules when decoding at an expected type of opt t.

  • If this was a record field, and the provided record value doesn’t even have a field of that name, don’t fail but instead treat this as null. This handles the first-order example above.

  • If there is a value, look at its type (which, in Candid, is part of the message).

    • If it is opt t' and t' <: t, then decode as normal.

    • If it is opt t' but t' <: t does not hold, then treat that as null.

      This should only happen in these relatively obscure higher-order cases where services evolved differently and incompatibly, and makes sure that the calls that worked before the upgrades will continue to work afterwards.

      It is not advisable to actually use that rule when changing your service’s interface. Tools that assist the developer with an upgrade check should prevent or warn about the use of this rule.

  • Not strictly required here, but since we are making opt special anyways:

    If its type t' is not of the form opt …, pretend it was opt t', and also pretend that the given value was wrapped in opt.

    This allows services to make a record field in arguments that was required in the old version optional in a new version, e.g. as a way to deprecate it. So it is mildly useful, although I can report that it makes the meta-theory and implementation rather complex, in particular together with equirecursion and beasts like type O = opt O. See this discussion for a glimpse of that.

In the above I stress that we look at the type of the provided value, and not just the value. For example, if the sender sends the value opt vec {} at type opt vec nat, and the recipient expects opt vec bool, then this will decode as null, even though one could argue that the value opt vec {} could easily be understood at type opt vec bool. We initially had that, but then noticed that this still breaks soundness when there are references inside, and we have to do a full subtyping check in the decoder. This is very unfortunate, because it makes writing a Candid decoder a noticeably harder task that requires complicated graph algorithms with memoization (which I must admit Motoko has not implemented yet), but it’s the least bad solution we could find so far.
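The special decoding rules above can be put as a toy model in runnable Python (illustrative only: real Candid types are far richer, and the structural subtyping oracle is reduced here to plain equality):

```python
def is_opt(t):
    """Types are modelled as 'nat', 'bool', ... or ('opt', inner)."""
    return isinstance(t, tuple) and t[0] == "opt"

def subtype_ok(t1, t2):
    # Stand-in for Candid's real structural subtyping check.
    return t1 == t2

def decode_opt(expected_inner, field_present, wire_type=None, wire_value=None):
    """Decode a value at expected type `opt expected_inner`."""
    if not field_present:
        return None                    # absent record field decodes as null
    if is_opt(wire_type):
        inner = wire_type[1]
        if subtype_ok(inner, expected_inner):
            return wire_value          # compatible: decode as normal
        return None                    # incompatible evolution: treat as null
    # a non-opt value is treated as if it had been wrapped in opt
    if subtype_ok(wire_type, expected_inner):
        return wire_value
    return None

# Old client omits the new field entirely:
assert decode_opt("nat", field_present=False) is None
# Matching types decode normally:
assert decode_opt("nat", True, ("opt", "nat"), 42) == 42
# The incompatibly evolved D-to-C call from above falls back to null:
assert decode_opt("nat", True, ("opt", "variant"), "under_age") is None
```

Swapping in a genuine subtyping check for the equality stand-in is exactly the part that makes real decoders hard, as discussed above.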

Isn’t that just dynamic typing?

You might have noticed that with these rules, t <: opt t' holds for all types t and t'. In other words, every opt … is a top type (like reserved), thus all optional types are equivalent. One could argue that we are throwing all of the benefits of static typing over board this way. But it’s not quite as bad: It’s true that decoding Candid values now involves a dynamic check inserting null values in certain situations, but as long as everyone plays by the rules (i.e. upgrades their services according to the Candid safe upgrade rules, and heeds the warning alluded to above), these certain situations will only happen in the obscurest of cases.

It is, by the way, not material to this solution that the special subtyping behavior is applied to “the” opt type, and a neighboring point in the design space would be a dedicated upgraded t type constructor. That would allow developers to use the canonical opt type with the usual subtyping rules and communicate their intent (“clients should consider this field optional” vs. “old clients don’t know about this field, but new clients should use it”) more cleanly – at the expense of having more “non-canonical” types in the type system.

To see why it is convenient if Candid has mostly just the “normal” types, read the next post, which will describe how Candid can be integrated into a host language.

Planet DebianMartin-Éric Racine: sudo apt-get update && sudo apt-get dist-upgrade

Debian 11 (codename Bullseye) was recently released. This was the smoothest upgrade I've experienced in some 20 years as a Debian user. In my haste, I completely forgot to first upgrade dpkg and apt, doing a straight dist-upgrade. Nonetheless, everything worked out of the box. No unresolved dependency cycles. Via my last-mile Gigabit connection, it took about 5 minutes to upgrade and reboot. Congratulations to everyone who made this possible!

Since the upgrade, only a handful of bugs were found, and I filed bug reports. Over these past few days, maintainers have started responding. In one particular case, my report exposed a CVE caused by code copy-pasted between two similar packages: the source package fixed its code to something more secure a few years ago, while the destination package missed it. The situation has been brought to the attention of Debian's security team and should be fixed over the next few days.


Having recently experienced hard-disk problems on my main desktop, upgrading to Bullseye made me revisit a few issues. One of these was the possibility of transitioning to BTRFS. The last time I investigated the possibility was back when Ubuntu briefly switched their default filesystem to BTRFS. Back then, my feeling was that BTRFS wasn't ready for mainstream. For instance, the utility to convert an EXT2/3/4 partition to BTRFS corrupted the end of the partition. No thanks. However, in recent years, many large-scale online services have migrated to BTRFS and seem to be extremely happy with the result. Additionally, Linux kernel 5 added useful features such as background defragmentation. This got me pondering whether now would be a good time to migrate to BTRFS. Sadly it seems that the stock kernel shipping with Bullseye doesn't have any of these advanced features enabled in its configuration. Oh well.


The only point that has become problematic is my Geode hosts. For one thing, upstream Rust maintainers have decided to ignore the fact that i686 is a specification and have arbitrarily added compiler flags for more recent x86-32 CPUs to their i686 target. While Debian Rust maintainers have purposely downgraded the target, rustc still produces binaries that the Geode LX (essentially an i686 without PAE) cannot process. This affects fairly basic packages such as librsvg, which breaks SVG image support for a number of dependencies. Additionally, there have been persistent problems with systemd crashing on my Geode hosts whenever daemon-reload is issued. Then, a few days ago, problems started occurring with C++ binaries, because GCC-11 upstream enabled flags for more recent CPUs in their default i686 target. While I realize that SSE and similar recent CPU features produce better binaries, I cannot help but feel that treating CPU targets as anything other than a specification is a mistake. i686 is a specification. It is not a generic equivalent of x86-32.

Planet DebianRussell Coker: Oracle Cloud Free Tier

It seems that every cloud service of note has a free tier nowadays and the Oracle Cloud is the latest that I’ve discovered (thanks to r/homelab which I highly recommend reading). Here’s Oracle’s summary of what they offer for free [1].

Oracle’s “always free” tier (where presumably “always” means “until we change our contract”) currently offers ARM64 VMs with a total capacity of 4 CPU cores, 24G of RAM, and 200G of storage, with a default VM size of 1/4 of that (1 CPU core and 6G of RAM). It also includes 2 AMD64 VMs that each have 1G of RAM, but a 64bit VM with 1G of RAM isn’t that useful nowadays.

Web Interface

The first thing to note is that the management interface is a massive pain to use. When a login times out for security reasons it redirects to a web page that gives a 404 error, maybe the redirection works OK if you are using it when it times out, but if you go off and spend an hour doing something else you will return to a 404 page. A web interface should never refer you to a page with a 404.

There doesn’t seem to be a way of bookmarking the commonly used links (as AWS does) and the set of links on the left depend on the section you are in with no obvious way of going between sections. Sometimes I got stuck in a set of pages about authentication controls (the “identity cloud”) and there seems to be no link I could click on to get me back to cloud computing, I had to go to a bookmarked link for the main cloud login page. A web interface should never force the user to type in the main URL or go to a bookmark, you should be able to navigate from every page to every other page in a logical manner. An advanced user might have their own bookmarks in their browser to suit their workflow. But a beginner should be able to go to anywhere without breaking the session.

Some parts of the interface appear to be copied from AWS, but unfortunately not the good parts. The way AWS manages IP access control is not easy to manage and it’s not clear why packets are dropped; Oracle copies all of this. On the upside Oracle has some good Datadog-style analytics, so for a new deployment you can debug IP access control by seeing records of rejected packets. Just to make it extra annoying, when you create a rule with multiple ports specified the web interface will expand it out to multiple rules of one port each; having ports 80 and 443 on separate lines doesn’t make things easier. It also forces you to have IPv4 and IPv6 as separate rules, so if you want HTTP and HTTPS on both IPv4 and IPv6 (a common requirement) then you need 4 separate rules.

One final annoying thing is that the web interface doesn’t make your previous settings a default. As I’ve created many ARM images and haven’t created a single AMD image it should know that the probability that I want to create an AMD image is very low and stop defaulting to that.


When trying a new system you will inevitably break things and have to recover them. The way to recover from a configuration error that prevents your VM from booting and reaching a login is to stop the VM, then go to the “Boot volume” section under “Resources” and use the settings button to detach the boot volume. Then you go to another VM (which must be running), go to the “Attached block volumes” menu and attach it as Paravirtualised (not iSCSI, and not the default, which will probably be iSCSI). After some time the block device will appear and you can mount it and do stuff to it. Then after umounting it you detach it from the recovery VM and attach it again to the original VM (where it will still have an entry in the “Boot volume” section) and boot the original VM.

As an aside it’s really annoying that you can’t attach a volume to a VM that isn’t running.

My first attempt at image recovery started with making a snapshot of the boot volume. This didn’t work well because the image uses EFI and therefore GPT, and because the snapshot was larger than the original block device (which incidentally was the default size). I admit I might have made a mistake when making the snapshot, but if so it shouldn’t be so easy to do. With GPT, if you have a larger block device then partitioning tools complain about the backup partition table not being found, and they complain even more if you try to go back to the smaller size later. Generally GPT partition tables are a bad idea for VMs; when I run the host I don’t use partition tables, I have a separate block device for each filesystem or swap space.

Snapshots aren’t needed for recovery, they don’t seem to work very well, and if it’s possible to attach a snapshot to a VM in place of its original “Boot volume” I haven’t figured out how to do it.

Console Connection

If you boot Oracle Linux, a RHEL derivative that has SELinux enabled in enforcing mode (yay), then you can use the “Console connection”. The console is a Javascript console which allows you to log in on a virtual serial console on device /dev/ttyAMA0. It tells you to type “help”, but that isn’t accepted; you get a straight Linux console login prompt.

If you boot Ubuntu then you don’t get a working serial console: it tells you to type “help” for help but doesn’t respond to that.

It seems that the Oracle Linux kernel 5.4.17-2102.204.4.4.el7uek.aarch64 is compiled with support for /dev/ttyAMA0 (the default ARM serial device) while the kernel 5.11.0-1016-oracle compiled by Oracle for their Ubuntu VMs doesn’t have it.


I haven’t done any detailed tests of VM performance. As a quick test I used zstd to compress a 154MB file. On my home workstation (E5-2620 v4 @ 2.10GHz) it took 11.3 seconds of CPU time to compress with zstd -9 and 7.2s to decompress; on the Oracle cloud it took 7.2s and 5.4s. So for some single-core operations the ARM CPU used by the Oracle cloud is roughly 30% to 60% faster than an E5-2620 v4 (a slightly out of date server processor that uses DDR4 RAM).
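The speedups implied by those measurements can be computed directly (simple arithmetic on the CPU times quoted above):

```python
# CPU seconds measured above: (workstation E5-2620 v4, Oracle ARM)
compress = (11.3, 7.2)    # zstd -9
decompress = (7.2, 5.4)   # zstd decompression

for name, (xeon, arm) in [("compress", compress), ("decompress", decompress)]:
    # Speedup: how much less CPU time the ARM core needed, as a fraction.
    speedup = xeon / arm - 1
    print(f"{name}: ARM is about {speedup:.0%} faster")
# compress is ~57% faster, decompress ~33% faster
```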

If you ran all the free resources in a single VM that would make a respectable build server. If you want to contribute to free software development and only have a laptop with 4G of RAM then an ARM build/test server with 24G of RAM and 4 cores would be very useful.

Ubuntu Configuration

The advantage of using EFI is that you can manage the kernel from within the VM. The default Oracle kernel for Ubuntu has a lot of modules included and is compiled with a lot of security options including SE Linux.


AWS offers 750 hours (just over 31 days) per month of free usage of a t2.micro or t3.micro EC2 instance (which means 1GB of RAM). But that only lasts for 12 months and it’s still only 1GB of RAM. AWS has some other things that could be useful like 1 million free Lambda requests per month. If you want to run your personal web site on Lambda you shouldn’t hit that limit. They also apparently have some good offers for students.
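To put the one-million-request limit in perspective, a back-of-the-envelope calculation (assuming a 30-day month, which is my simplification, not AWS’s billing definition):

```python
free_requests = 1_000_000
days = 30

per_day = free_requests // days
per_second = free_requests / (days * 24 * 3600)

print(per_day)                # 33333 requests per day
print(round(per_second, 2))   # 0.39 requests per second, sustained
```

A personal web site would have to sustain well over 33,000 hits a day, every day, before the free Lambda tier ran out.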

The Google Cloud Platform (GCP) offers $300 of credit.

GCP also has ongoing free-tier usage for some services. Some of them are pretty much unlimited use (50GB of storage for “Cloud Source Repositories” is a heap of source code). But for VMs you get the equivalent of one e2-micro instance running 24×7. An e2-micro has 1GB of RAM. You also only get 30GB of storage and 1GB of outbound data. It’s clearly not as generous an offer as Oracle’s, but Oracle is the underdog so they have to try harder.

Azure appears to be much the same as AWS, free Linux VM for a year and then other less popular services free forever (or until they change the contract).

The IBM cloud free tier is the least generous offer, a VM is only free for 30 days. But what they offer for 30 days is pretty decent. If you want to try the IBM cloud and see if it can do what your company needs then this will do well. If you want to have free hosting for your hobby stuff then it’s no good.

Oracle seems like the most generous offer if you want to do stuff, but also one of the least valuable if you want to learn things that will help you at a job interview. For job interviews AWS seems the most useful, with GCP and Azure vying for second place.

Worse Than FailureCodeSOD: Insertion

Shalonda inherited a C# GUI application that, if we're being charitable, has problems. It's slow, it's buggy, it crashes a lot. The users hate it, the developers hate it, but it's also one of the core applications that drives their business, so everyone needs to use it.

One thing the application needs to do is manage a list of icons. Each icon is an image, and based on user actions, a new icon might get inserted in the middle of the list. This is how that happens:

/// <summary>
/// Inserts an object (should be an Image) at the specified index
/// </summary>
/// <param name="index">the index where to add the object at</param>
/// <param name="val">the object (an Image) to be added to this ImageCollection</param>
public void Insert(int index, object val)
{
    object[] icons = new object[this.list.Count];
    object[] newicons = new object[this.list.Count + 1];
    for (int i = 0; i < newicons.Length; i++)
    {
        if (i != index)
            newicons[i] = icons[i];
        else
            newicons[i] = val;
    }
    this.list.Clear();
    this.list.AddRange(newicons);
}

A comment like "Inserts an object (should be an Image)", in a strongly typed language, is always a bad sign. And sure enough, the signature of this method continues the bad signs: object val. Insert, in this case, is not a method defined in an interface; there's no reason for the parameter to be an untyped object here other than that the original developer probably had a snippet for handling inserting into lists, and just dropped it in.

Then we create two new arrays, both of which contain only nulls. Then we iterate across those empty arrays, copying nulls from the first one into the second one, except when we're at the index supplied by the caller, in which case we put our input value into the array.

Fortunately, at this point, our program is guaranteed to throw an exception, since icons has one fewer index than newicons, and they're both being indexed by i, which goes up to newicons.Length. This is actually the best case for this block of code, because of what comes next: it empties the list and then inserts the newicons (which is all nulls except for one position) into the list.
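The off-by-one is easy to demonstrate. Here is the same logic transliterated into Python (a sketch of the flawed algorithm, not the original C#): inserting anywhere but the very end reads one element past the old array, and even the "successful" case wipes out the list's contents.

```python
def broken_insert(lst, index, val):
    # Transliteration of the C# method: build two fresh arrays,
    # never copying the actual list contents (icons stays all-None).
    icons = [None] * len(lst)
    newicons = [None] * (len(lst) + 1)
    for i in range(len(newicons)):
        if i != index:
            newicons[i] = icons[i]   # IndexError when i == len(icons)
        else:
            newicons[i] = val
    lst.clear()
    lst.extend(newicons)

try:
    broken_insert([1, 2, 3], 1, "x")
except IndexError:
    print("IndexError: reads past the end of the old array")
```

Note that appending at the very end (index equal to the old length) does not raise, but silently replaces every existing element with None, which is arguably worse.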

List, in this case, is an ArrayList, which definitely doesn't already have an insert method which actually does what it's supposed to.

Shalonda replaced this method with a call to the built-in Insert. This stopped the exceptions from being thrown, but that in turn uncovered a whole new set of bugs that no one had seen yet. As the song goes: "99 little bugs in the tracker, 99 little bugs. Take one down, patch it around, 175 bugs in the tracker!"



Planet DebianJelmer Vernooij: Web Hooks for the Janitor

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

As covered in my post from last week, the Janitor now regularly tries to import new upstream git snapshots or upstream releases into packages in Sid.

Moving parts

There are about 30,000 packages in sid, and it usually takes a couple of weeks for the janitor to cycle through all of them. Generally speaking, there are up to three moving targets for each package:

  • The packaging repository; vcswatch regularly scans this for changes, and notifies the janitor when a repository has changed. For salsa repositories it is instantly notified through a web hook
  • The upstream release tarballs; the QA watch service regularly polls these, and the janitor scans for changes in the UDD tables with watch data (used for fresh-releases)
  • The upstream repository; there is no service in Debian that watches this at the moment (used for fresh-snapshots)

When the janitor notices that one of these three targets has changed, it prioritizes processing of the package - this means that a push to a packaging repository on salsa usually leads to a build being kicked off within 10 minutes. New upstream releases are usually noticed by QA watch within a day or so and then lead to a build. New commits in upstream repositories don’t get noticed today.

Note that there are no guarantees; the scheduler tries to be clever and not e.g. rebuild the same package over and over again if it’s constantly changing and takes a long time to build.

Packages without priority are processed with a scoring system that takes into account perceived value (based on e.g. popcon), cost (based on wall-time duration of previous builds) and likelihood of success (whether recent builds were successful, and how frequently the repositories involved change).
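The post doesn’t give the actual formula, but a score along those lines might look like the following sketch (the shape and weights are my own assumptions, not the Janitor’s code):

```python
def score(perceived_value, avg_build_seconds, recent_success_rate):
    """Toy priority score: higher value, cheaper builds and better
    odds of success all push a package up the queue."""
    cost = max(avg_build_seconds, 1)   # wall-time of previous builds
    return perceived_value * recent_success_rate / cost

# A popular, quick, reliably-building package outranks a slow, flaky one
# of equal popularity.
print(score(10_000, 120, 0.9) > score(10_000, 3_600, 0.3))  # True
```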

webhooks for upstream repositories

At the moment there is no service in Debian (yet - perhaps this is something that vcswatch or a sibling service could also do?) that scans upstream repositories for changes.

However, if you maintain an upstream package, you can use a webhook to notify the janitor that commits have been made to your repository, and it will create a new package in fresh-snapshots. Webhooks from the following hosting software are currently supported: GitHub, GitLab instances, and Launchpad.

You can simply use the URL as the target for hooks. There is no need to specify a secret, and the hook can either use a JSON or form encoding payload.

The endpoint should tell you whether it understood a webhook request, and whether it took any action. It’s fine to submit webhooks for repositories that the janitor does not (yet) know about.
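Whether a hook uses JSON or form encoding only changes the Content-Type and body serialization. A sketch of the two encodings (the payload field names here are illustrative; GitHub, for example, wraps the JSON in a `payload=` form field, but this is not the Janitor’s schema):

```python
import json
from urllib.parse import urlencode

payload = {"repository": "https://example.com/upstream/project.git"}

json_body = json.dumps(payload)                 # Content-Type: application/json
form_body = urlencode({"payload": json_body})   # application/x-www-form-urlencoded

print(json_body)
print(form_body)
```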


For GitHub, you can do so in the Webhooks section of the Settings tab. Fill the form as shown below and click on Add webhook:


On GitLab instances, you can find the Webhooks tab under the Settings menu for each repository (under the gear symbol). Fill the form in as shown below and click Add Webhook:


For Launchpad, go to the repository (for Git) web view and click Manage Webhooks. From there, you can add a new webhook; fill the form in as shown below and click Add Webhook:

Krebs on Security“FudCo” Spam Empire Tied to Pakistani Software Firm

In May 2015, KrebsOnSecurity briefly profiledThe Manipulaters,” the name chosen by a prolific cybercrime group based in Pakistan that was very publicly selling spam tools and a range of services for crafting, hosting and deploying malicious email. Six years later, a review of the social media postings from this group shows they are prospering, while rather poorly hiding their activities behind a software development firm in Lahore that has secretly enabled an entire generation of spammers and scammers.

The Web site in 2015 for the “Manipulaters Team,” a group of Pakistani hackers behind the dark web identity “Saim Raza,” who sells spam and malware tools and services.

The Manipulaters’ core brand in the underground is a shared cybercriminal identity named “Saim Raza,” who for the past decade across dozens of cybercrime sites and forums has peddled a popular spamming and phishing service variously called “Fudtools,” “Fudpage,” “Fudsender,” etc.

The common acronym in nearly all of Saim Raza’s domains over the years — “FUD” — stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.

One of several current Fudtools sites run by The Manipulaters.

The current website for Saim Raza’s Fud Tools (above) offers phishing templates or “scam pages” for a variety of popular online sites like Office365 and Dropbox. They also sell “Doc Exploit” products that bundle malicious software with innocuous Microsoft Office documents; “scampage hosting” for phishing sites; a variety of spam blasting tools like HeartSender; and software designed to help spammers route their malicious email through compromised sites, accounts and services in the cloud.

For years leading up to 2015, “” was the name on the registration records for thousands of scam domains that spoofed some of the world’s top banks and brand names, but particularly Apple and Microsoft. When confronted about this, The Manipulaters founder Madih-ullah Riaz replied, “We do not deliberately host or allow any phishing or any other abusive website. Regarding phishing, whenever we receive complaint, we remove the services immediately. Also we are running business since 2006.”

The IT network of The Manipulaters, circa 2013. Image: Facebook

Two years later, KrebsOnSecurity received an email from Riaz asking to have his name and that of his business partner removed from the 2015 story, saying it had hurt his company’s ability to maintain stable hosting for their stable of domains.

“We run web hosting business and due to your post we got very serious problems especially no data center was accepting us,” Riaz wrote in a May 2017 email. “I can see you post on hard time criminals we are not criminals, at least it was not in our knowledge.”

Riaz said the problem was that his company’s billing system erroneously used The Manipulaters’ name and contact information instead of its clients’ in WHOIS registration records. That oversight, he said, caused many researchers to erroneously attribute to them activity that was coming from just a few bad customers.

“We work hard to earn money and it is my request, 2 years of my name in your wonderful article is enough punishment and we learned from our mistakes,” he concluded.

The Manipulaters have indeed learned a few new tricks, but keeping their underground operations air-gapped from their real-life identities is mercifully not one of them.


Phishing domain names registered to The Manipulaters included an address in Karachi, with the phone number 923218912562. That same phone number is shared in the WHOIS records for 4,000+ domains registered through domainprovider[.]work, a domain controlled by The Manipulaters that appears to be a reseller of another domain name provider.

One of Saim Raza’s many ads in the cybercrime underground for his Fudtools service promotes the domain fudpage[.]com, and the WHOIS records for that domain share the same Karachi phone number. Fudpage’s WHOIS records list the contact as “,” which is another email address used by The Manipulaters to register domains.

As I noted in 2015, The Manipulaters Team used domain name service (DNS) settings from another blatantly fraudulent service called ‘FreshSpamTools[.]eu,’ which was offered by a fellow Pakistani who also conveniently sold phishing toolkits targeting a number of big banks.

The WHOIS records for FreshSpamTools briefly list the email address, which corresponds to the email address for a Facebook account of a Bilal “Sunny” Ahmad Warraich (a.k.a. Bilal Waddaich).

Bilal Waddaich’s current Facebook profile photo includes many current and former employees of We Code Solutions.

Warraich’s Facebook profile says he works as an IT support specialist at a software development company in Lahore called We Code Solutions.

The We Code Solutions website.

A review of the hosting records for the company’s website wecodesolutions[.]pk show that over the past three years it has shared a server with just a handful of other domains, including:



The profile image atop Warraich’s Facebook page is a group photo of current and former We Code Solutions employees. Helpfully, many of the faces in that photo have been tagged and associated with their respective Facebook profiles.

For example, the Facebook profile of Burhan Ul Haq, a.k.a. “Burhan Shaxx” says he works in human relations and IT support for We Code Solutions. Scanning through Ul Haq’s endless selfies on Facebook, it’s impossible to ignore a series of photos featuring various birthday cakes and the words “Fud Co” written in icing on top.

Burhan Ul Haq’s photos show many Fud Co-themed cakes the We Code Solutions employees enjoyed on the anniversary of the Manipulaters Team.

Yes, from a review of the Facebook postings of We Code Solutions employees, it appears that for at least the last five years this group has celebrated an anniversary every May with a Fud Co cake, non-alcoholic sparkling wine, and a Fud Co party or group dinner. Let’s take a closer look at that delicious cake:

The head of We Code Solutions appears to be a guy named Rameez Shahzad, the older individual at the center of the group photo in Warraich’s Facebook profile. You can tell Shahzad is the boss because he is at the center of virtually every group photo he and other We Code Solutions employees posted to their respective Facebook pages.

We Code Solutions boss Rameez Shahzad (in sunglasses) is in the center of this group photo, which was posted by employee Burhan Ul Haq, pictured just to the right of Shahzad.

Shahzad’s postings on Facebook are even more revelatory: On Aug. 3, 2018, he posted a screenshot of someone logged into a website under the username Saim Raza — the same identity that’s been pimping Fud Co spam tools for close to a decade now.

“After [a] long time, Mailwizz ready,” Shahzad wrote as a caption to the photo:

We Code Solutions boss Rameez Shahzad posted on Facebook a screenshot of someone logged into a WordPress site with the username Saim Raza, the same cybercriminal identity that has peddled the FudTools spam empire for more than 10 years.

Whoever controlled the Saim Raza cybercriminal identity had a penchant for re-using the same password (“lovertears”) across dozens of Saim Raza email addresses. One of Saim Raza’s favorite email address variations was “game.changer@[pick ISP here]”. Another email address advertised by Saim Raza was “”

So it was not surprising to see Rameez Shahzad post a screenshot to his Facebook account of his computer desktop, which shows he is logged into a Skype account that begins with the name “game.” and a Gmail account beginning with “bluebtc.”

Image: Scylla Intel

KrebsOnSecurity attempted to reach We Code Solutions via the contact email address on its website — info@wecodesolutions[.]pk — but the message bounced back, saying there was no such address. Similarly, a call to the Lahore phone number listed on the website produced an automated message saying the number is not in service. None of the We Code Solutions employees contacted directly via email or phone responded to requests for comment.


This open-source research on The Manipulaters and We Code Solutions is damning enough. But the real icing on the Fud Co cake is that sometime in 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s past and current business operations.

That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that specializes in connecting cybercriminals to their real-life identities. Whoops.

Scylla co-founder Sasha Angus said the messages that flooded their inbox once they set up an email server on that domain quickly filled in many of the details they didn’t already have about The Manipulaters.

“We know the principals, their actual identities, where they are, where they hang out,” Angus said. “I’d say we have several thousand exhibits that we could put into evidence potentially. We have them six ways to Sunday as being the guys behind this Saim Raza spammer identity on the forums.”

Angus said he and a fellow researcher briefed U.S. prosecutors in 2019 about their findings on The Manipulaters, and that investigators expressed interest but also seemed overwhelmed by the volume of evidence that would need to be collected and preserved about this group’s activities.

“I think one of the things the investigators found challenging about this case was not who did what, but just how much bad stuff they’ve done over the years,” Angus said. “With these guys, you keep going down this rabbit hole that never ends because there’s always more, and it’s fairly astonishing. They are prolific. If they had halfway decent operational security, they could have been really successful. But thankfully, they don’t.”

Planet DebianVincent Bernat: Switching to the i3 window manager

I have been using the awesome window manager for 10 years. It is a tiling window manager, configurable and extendable with the Lua language. Using a general-purpose programming language to configure every aspect is a double-edged sword. Due to laziness and the apparent difficulty of adapting my configuration—about 3000 lines—to newer releases, I was stuck with the 3.4 version, whose last release is from 2013.

It was time for a rewrite. Instead, I have switched to the i3 window manager, lured by the possibility to migrate to Wayland and Sway later with minimal pain. Using an embedded interpreter for configuration is not as important to me as it was in the past: it brings both complexity and brittleness.

i3 dual screen setup
Dual screen desktop running i3, Emacs, some terminals, including a Quake console, Firefox, Polybar as the status bar, and Dunst as the notification daemon.

The window manager is only one part of a desktop environment. There are several options for the other components. I am also introducing them in this post.

i3: the window manager​

i3 aims to be a minimal tiling window manager. Its documentation can be read from top to bottom in less than an hour. i3 organizes windows in a tree. Each non-leaf node contains one or several windows and has an orientation and a layout. This information arbitrates the window positions. i3 features three layouts: split, stacking, and tabbed. They are demonstrated in the below screenshot:

Example of layouts
Demonstration of the layouts available in i3. The main container is split horizontally. The first child is split vertically. The second one is tabbed. The last one is stacking.
Tree representation of the previous screenshot
Tree representation of the previous screenshot.

Most of the other tiling window managers, including the awesome window manager, use predefined layouts. They usually feature a large area for the main window and another area divided among the remaining windows. These layouts can be tuned a bit, but you mostly stick to a couple of them. When a new window is added, the behavior is quite predictable. Moreover, you can cycle through the various windows without thinking too much as they are ordered.

While i3 is more flexible with its ability to build any layout on the fly, it can feel quite overwhelming as you need to visualize the tree in your head. At first, it is not unusual to find yourself with a complex tree containing many useless nested containers. Moreover, you have to navigate windows using directions. It takes some time to get used to.

I set up a split layout for Emacs and a few terminals, but most of the other workspaces use a tabbed layout. I don’t use the stacking layout. You can find many scripts trying to emulate other tiling window managers, but I tried to keep my setup free of such attempts and give myself a chance to get familiar with i3. i3 can also save and restore layouts, which is quite a powerful feature.

My configuration is quite similar to the default one and has less than 200 lines.

i3 companion: the missing bits​

i3’s philosophy is to keep a minimal core and let the user implement missing features using the IPC protocol:

Do not add further complexity when it can be avoided. We are generally happy with the feature set of i3 and instead focus on fixing bugs and maintaining it for stability. New features will therefore only be considered if the benefit outweighs the additional complexity, and we encourage users to implement features using the IPC whenever possible.

— Introduction to the i3 window manager

While this is not as powerful as an embedded language, it is enough for many cases. Moreover, as high-level features may be opinionated, delegating them to small, loosely coupled pieces of code keeps them more maintainable. Libraries exist for this purpose in several languages. Users have published many scripts to extend i3: automatic layout and window promotion to mimic the behavior of other tiling window managers, window swallowing to put a new app on top of the terminal launching it, and cycling between windows with Alt+Tab.

Instead of maintaining a script for each feature, I have centralized everything into a single Python process, i3-companion, using asyncio and the i3ipc-python library. Each feature is self-contained in a function. It implements the following components:

make a workspace exclusive to an application
When a workspace contains Emacs or Firefox, I would like other applications to move to another workspace, except for the terminal which is allowed to “intrude” into any workspace. The workspace_exclusive() function monitors new windows and moves them if needed to an empty workspace or to one with the same application already running.
implement a Quake console
The quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+`. This is implemented as a scratchpad window.
back and forth workspace switching on the same output
With the workspace back_and_forth command, we can ask i3 to switch to the previous workspace. However, this feature is not restricted to the current output. I prefer to have one keybinding to switch to the workspace on the next output and one keybinding to switch to the previous workspace on the same output. This behavior is implemented in the previous_workspace() function by keeping a per-output history of the focused workspaces.
create a new empty workspace or move a window to an empty workspace
To create a new empty workspace or move a window to an empty workspace, you have to locate a free slot and use workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace.
restart some services on output change
When adding or removing an output, some actions need to be executed: refresh the wallpaper, restart some components unable to adapt their configuration on their own, etc. i3 triggers an event for this purpose. The output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.
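The free-slot search in new_workspace() boils down to finding the lowest unused workspace number. A minimal sketch of that logic (not the companion’s actual code):

```python
def first_free_workspace(existing_numbers):
    """Return the lowest positive workspace number not currently in use."""
    used = set(existing_numbers)
    n = 1
    while n in used:
        n += 1
    return n

print(first_free_workspace([1, 2, 4]))  # 3
print(first_free_workspace([]))         # 1
```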

I will detail the other features as this post goes on. On the technical side, each function is decorated with the events it should react to:

@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""

The CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks whether this is a nop command.

bindsym $mod+Tab nop "previous-workspace"

There are other decorators to avoid code duplication: @debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think this is worth a read as I am quite happy with the result. 🦚

dunst: the notification daemon​

Unlike the awesome window manager, i3 does not come with a built-in notification system. Dunst is a lightweight notification daemon. I am running a modified version with HiDPI support for X11 and recursive icon lookup. The i3 companion has a helper function, notify(), to send notifications using DBus. container_info() and workspace_info() use it to display information about the container or the tree for a workspace.

Notification showing i3 tree for a workspace
Notification showing i3’s tree for a workspace

polybar: the status bar​

i3 bundles i3bar, a versatile status bar, but I have opted for Polybar. A wrapper script runs one instance for each monitor.

The first module is the built-in support for i3 workspaces. To not have to remember which application is running in a workspace, the i3 companion renames workspaces to include an icon for each application. This is done in the workspace_rename() function. The icons are from the Font Awesome project. I maintain a mapping between applications and icons. This is a bit cumbersome but it looks great.
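That renaming logic amounts to a mapping from window class to icon glyph plus string composition. A minimal sketch (the class names and glyph code points here are placeholders, not the actual mapping):

```python
# Window class -> Font Awesome glyph (placeholder code points).
ICONS = {"firefox": "\uf269", "emacs": "\uf120"}

def workspace_name(number, window_classes):
    """Compose a workspace name like '1: <icons>' from its windows."""
    icons = " ".join(ICONS.get(cls.lower(), "?") for cls in window_classes)
    return f"{number}: {icons}" if icons else str(number)

print(workspace_name(1, ["Firefox"]))  # "1: " followed by the Firefox glyph
print(workspace_name(2, []))           # "2"
```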

i3 workspaces in Polybar
i3 workspaces in Polybar

For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar’s wrapper script generates the list of filesystems to monitor and they get only displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details.

Various modules for Polybar
Polybar displaying various information: CPU usage, memory usage, screen brightness, battery status, Bluetooth status (with a connected headset), network status (connected to a wireless network and to a VPN), notification status, and speaker volume.

For Bluetooth, network, and notification statuses, I am using Polybar’s ipc module: the next version of Polybar can receive arbitrary text on an IPC socket. The module is defined with a single hook to be executed at the start to restore the latest status.

type = custom/ipc
hook-0 = cat $XDG_RUNTIME_DIR/i3/network.txt 2> /dev/null
initial = 1

It can be updated with polybar-msg action "#network.send.XXXX". In the i3 companion, the @polybar() decorator takes the string returned by a function and pushes the update through the IPC socket.

The i3 companion reacts to DBus signals to update the Bluetooth and network icons. The @on() decorator accepts a DBusSignal() object:

@on(
    DBusSignal(
        # …other DBusSignal arguments elided…
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1"
            and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1"
            and "Powered" in args[1]
        ),
    )
)
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""

The middle of the bar is occupied by the date and a weather forecast. The latter also uses the IPC mechanism, but the source is a Python script triggered by a timer.

Date and weather in Polybar
Current date and weather forecast for the day in Polybar. The data is retrieved with the OpenWeather API.

I don’t use the system tray integrated with Polybar. The embedded icons usually look horrible and they all behave differently. A few years back, Gnome removed the system tray. Most of the problems are fixed by the DBus-based Status Notifier Item protocol—also known as Application Indicators or Ayatana Indicators for GNOME. However, Polybar does not support this protocol. In the i3 companion, the implementation of Bluetooth and network icons, including displaying notifications on change, takes about 200 lines. I got to learn a bit about how DBus works and I get exactly the info I want.

picom: the compositor​

I like having slightly transparent backgrounds for terminals and to reduce the opacity of unfocused windows. This requires a compositor.1 picom is a lightweight compositor. It works well for me, but it may need some tweaking depending on your graphic card.2 Unlike the awesome window manager, i3 does not handle transparency, so the compositor needs to decide by itself the opacity of each window. Check my configuration for details.

systemd: the service manager​

I use systemd to start i3 and the various services around it. My xsession script only sets some environment variables and lets systemd handle everything else. Have a look at this article from Michał Góral for the rationale. Notably, each component can be easily restarted and their logs are not mangled inside the ~/.xsession-errors file.3

I am using a two-stage setup: i3.service depends on a first-stage target to start services before i3:

Description=X session

Then, i3 executes the second stage by invoking a second target:

Description=i3 session

Have a look at my configuration files for more details.
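The overall shape of such a two-stage setup looks roughly like this; the unit and target names here are illustrative, not necessarily the ones in the linked configuration:

```ini
# First stage: a user target reached before i3 starts.
# Services like the compositor or Polybar hook into it with
# "WantedBy=" so that systemd launches them first.
[Unit]
Description=X session
BindsTo=graphical-session.target

# Second stage: a target that i3 itself starts once it is running,
# e.g. from its config with:
#   exec --no-startup-id systemctl --user start <second-stage-target>
# Services that need a running window manager hook into this one.
[Unit]
Description=i3 session
BindsTo=graphical-session.target
```

With this split, `systemctl --user restart` works on any individual component, and `journalctl --user` keeps each one's logs separate.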

rofi: the application launcher

Rofi is an application launcher. Its appearance can be customized through a CSS-like language and it comes with several themes. Have a look at my configuration for mine.

Rofi as an application launcher
Rofi as an application launcher

It can also act as a generic menu application. I have a script to control a media player and another one to select the wifi network. It is quite a flexible application.

Rofi as a wifi network selector
Rofi to select a wireless network

xss-lock and i3lock: the screen locker

i3lock is a simple screen locker. xss-lock invokes it reliably on inactivity or before a system suspend. For inactivity, it uses the XScreenSaver events. The delay is configured using the xset s command. The locker can be invoked immediately with xset s activate. X11 applications know how to prevent the screen saver from running. I have also developed a small dimmer application that is executed 20 seconds before the locker to give me a chance to move the mouse if I am not away.4 Have a look at my configuration script.

Demonstration of xss-lock, xss-dimmer and i3lock with a 4× speedup.

The remaining components

  • autorandr is a tool to detect the connected displays, match them against a set of profiles, and configure them with xrandr.

  • inputplug executes a script for each new mouse and keyboard plugged in. This is quite useful to load the appropriate keyboard map. See my configuration.

  • xsettingsd provides settings to X11 applications, not unlike xrdb, but it notifies applications of changes. The main use is to configure the Gtk and DPI settings. See my article on HiDPI support on Linux with X11.

  • Redshift adjusts the color temperature of the screen according to the time of day.

  • maim is a utility to take screenshots. I use Prt Scn to trigger a screenshot of a window or a specific area and Mod+Prt Scn to capture the whole desktop to a file. Check the helper script for details.

  • I have a collection of wallpapers I rotate every hour. A script selects them using advanced machine learning algorithms and stitches them together on multi-screen setups. The selected wallpaper is reused by i3lock.

  1. Apart from the eye candy, a compositor also helps to get tear-free video playback. ↩

  2. My configuration works with both Haswell (2014) and Whiskey Lake (2018) Intel GPUs. It also works with AMD GPUs based on the Polaris chipset (2017). ↩

  3. You cannot manage two different displays this way—e.g. :0 and :1. In the first implementation, I did try to parametrize each service with the associated display, but this is useless: there is only one DBus user session and many services rely on it. For example, you cannot run two notification daemons. ↩

  4. I only discovered later that XSecureLock ships such a dimmer with a similar implementation. But mine has a cool countdown! ↩

Worse Than FailureClassic WTF: Crazy Like a Fox(Pro)

It's Labor Day in the US. We're busy partaking in traditional celebrations, which, depending on who you ask, is either enjoying one of the last nice long weekends before winter, or throwing bricks at Pinkertons. So we dig back into the archives for a classic story about databases. Original --Remy

“Database portability” is one of the key things that modern data access frameworks try and ensure for your application. If you’re using an RDBMS, the same data access layer can hopefully work across any RDBMS. Of course, since every RDBMS has its own slightly different idiom of SQL, and since you might depend on stored procedures, triggers, or views, you’re often tied to a specific database vendor, and sometimes a version.


And really, for your enterprise applications, how often do you really change out your underlying database layer?

Well, for Eion Robb, it’s a pretty common occurrence. Their software, even their SaaS offering of it, allows their customers a great deal of flexibility in choosing a database. As a result, their PHP-based data access layer tries to abstract out the ugly details, they restrict themselves to a subset of SQL, and have a lot of late nights fighting through the surprising bugs.

The databases they support are the big ones- Oracle, SQL Server, MySQL, and FoxPro. Oh, there are others that Eion’s team supports, but it’s FoxPro that’s the big one. Visual FoxPro’s last version was released in 2004, and the last service pack it received was in 2007. Not many vendors support FoxPro, and that’s one of Eion’s company’s selling points to their customers.

The system worked, mostly. Until one day, when it absolutely didn’t. Their hosted SaaS offering crashed hard. So hard that the webserver spinlocked and nothing got logged. Eion had another late night, trying to trace through and figure out: which customer was causing the crash, and what were they doing?

Many hours of debugging and crying later, Eion tracked down the problem to some code which tracked sales or exchanges of product- transactions which might not have a price when they occur.

$query .= odbc_iif("SUM(price) = 0", 0, "SUM(priceact)/SUM(" . odbc_iif("price != 0", 1, 0) . ")") . " AS price_avg ";

odbc_iif was one of their abstractions- an iif function, aka a ternary. In this case, if the SUM(price) isn’t zero, then divide the SUM(priceact) by the number of non-zero prices in the price column. This ensures that there is at least one non-zero price entry. Then they can average out the actual price across all those non-zero price entries, ignoring all the “free” exchanges.
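The intent is easier to see outside SQL. Here is a rough Python equivalent of the generated expression (illustrative only, not the original PHP or SQL):

```python
def price_avg(rows):
    """Mirror the odbc_iif expression: if SUM(price) = 0, return 0;
    otherwise SUM(priceact) divided by the count of non-zero prices."""
    if sum(row["price"] for row in rows) == 0:
        return 0  # only free exchanges: avoid dividing by zero
    nonzero_prices = sum(1 for row in rows if row["price"] != 0)
    return sum(row["priceact"] for row in rows) / nonzero_prices


sales = [
    {"price": 10, "priceact": 9.99},
    {"price": 0,  "priceact": 0.0},   # a free exchange, ignored in the average
    {"price": 10, "priceact": 8.01},
]
assert abs(price_avg(sales) - 9.0) < 1e-9   # (9.99 + 8.01) / 2
assert price_avg([{"price": 0, "priceact": 0.0}]) == 0
```

The outer zero-check is exactly the guard that, as we are about to see, interacts badly with FoxPro.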

This line wasn’t failing all the time, which added to Eion’s frustration. It failed when two very specific things were true. The first factor was the database- it only failed in FoxPro. The second factor was the data- it only failed when the first product in the resultset had no entries with a price greater than zero.

Why? Well, we have to think about where FoxPro comes from. FoxPro’s design goal was to be a data-driven programming environment for non-programmers. Like a lot of those environments, it tries its best not to yell at you about types. In fact, if you’re feeding data into a table, you don’t even have to specify the type of the column- it will pick the “correct” type by looking at the first row.

So, look at the iif again. If the SUM(price) = 0 we output 0 in our resultset. Guess what FoxPro decides the datatype must be? A single digit number. If the second row has an average price of, say, 9.99, that’s not a single digit number, and FoxPro explodes and takes down everything else with it.
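To make the failure mode concrete, here is a toy analogy in Python (FoxPro itself is not involved): fix the column's storage from whatever the first row contains, then reject any later value that doesn't fit.

```python
def build_numeric_column(values):
    """Toy model of FoxPro's behavior: the column's width is fixed
    by whatever the first row happens to contain."""
    width = len(str(values[0]))  # e.g. "0" -> a one-character column
    for value in values:
        if len(str(value)) > width:
            raise ValueError(
                f"{value!r} does not fit the {width}-character column "
                f"inferred from the first row"
            )
    return values


build_numeric_column([9.99, 0])      # fine: row one fixed a wide enough column
try:
    build_numeric_column([0, 9.99])  # row one fixes a one-character column...
except ValueError:
    pass                             # ...and the second row blows it up
```

Same data, different order, wildly different outcome: exactly the pattern Eion was chasing.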

Eion needed to fix this in a way that didn’t break their “database agnostic” code, and thus would continue to work in FoxPro and all the other databases, with at least predictable errors (that don’t crash the whole system). In the moment, suffering through the emergency, Eion changed the code to this:

$query .= "SUM(priceact)/SUM(" . odbc_iif("price != 0", 1, 0) . ")" . " AS price_avg ";

Without the zero check, any products which had no sales would trigger a divide-by-zero error. This was a catchable, trappable error, even in FoxPro. Eion made the change in production, got the system back up and their customers happy, and then actually put the change in source control with a very apologetic commit message.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.4 on CRAN: Adding a Simple Numeric Vector Class

Another release of the tidyCpp package arrived on CRAN earlier today. The package offers a clean C++ layer on top of the C API for R, which aims to make its use a little easier and more consistent.

The vignette has been extended once more with a new example and now has a table of contents. The package also supports a (truly minimal) C++ class for a numeric vector, which is the most likely use case.

The NEWS entry follows and includes the 0.0.3 release earlier in the year which did not get the usual attention of post-release blog post.

Changes in tidyCpp version 0.0.4 (2021-09-05)

  • Minor updates to DESCRIPTION

  • New snippet rollminmaxExample with simple vector use

  • New class NumVec motivated from rolling min/max example

  • Expand the vignette with C++ example based on NumVec

  • Add a table of contents to the vignette

Changes in tidyCpp version 0.0.3 (2021-03-07)

  • Switch CI use to r-ci

  • Protect nil object definition

Thanks to my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


David BrinShould facts and successes matter in economics? Or politics?

The rigid stances taken by today’s entire- “right” and farthest*-“left” are both not-sane and un-American, violating a longstanding principle of yankee pragmatism that can be summarized: 

“I am at-most 90% right and my foes (except confederates) are at most 99% wrong.” (The Confederacy was - and remains - 100% evil.) 

That principle continues: 
“Always a default should first be to listen, negotiate and learn… before reluctantly and temporarily concluding that I must smack down your foaming-rabid, hysterically unreasonable ass.” 

And yes, my use of the “left/right” terminology is ironic, since adherents of that hoary-simplistic-stupid metaphor could not define “left” or “right” if their own lives depended on it!  

Nowhere is this more valid than in the ‘dismal science’ of economics. Some things are proved: Adam Smith was wise and a good person, who pointed out that true cooperation and productive, positive-sum competition cannot thrive without each other, or the involvement of every empowered participant in an open society. The crux of Smithian liberalism was "stop wasting talent!" Every poor child who isn't lifted to an even playing field is a crime against BOTH any decent conscience AND any chance of truly competitive enterprise. Hence, "social programs" to uplift poor kids to a decent playing field are not "socialism." They are what any true believer in market competition... or decency... would demand.

Also proved: Keynesianism mostly works, when it is applied right, uplifting the working class and boosting money velocity, while its opposite - Supply Side/Thatcherism - was absolutely wrong, top to bottom and in every detail, without a single positive effect or outcome or successful prediction to its jibbering crazy credit. Again "Supply Side" is nothing but an incantation cult to excuse a return to feudalism. (I invite wagers!)

Competition is good and creative of prosperity, but only when cooperatively regulated and refereed, like in sports, to thwart relentlessly inevitably human temptation for the rich and powerful to cheat! (Bet me on that, too. On my side is evidence from 99% of 6000 years of human history.)

If you’d like to Explore this non-left, non-right, non-dogmatic approach to using what actually works, getting the best from both competition and cooperation, you can do worse than start at the site that conveys the real Adam Smith. It shines light on how the rich and elites are often the very last people who should be trusted with capitalism! 

Read the Evonomics site! For example: “Eight Reasons Why Inequality Ruins the Economy.”  and “To Tackle Inequality, We Need to Start Talking About Where Wealth Comes From. The Thatcherite narrative on wealth creation has gone unchallenged for decades.”

== Doubling down on tax cuts ==

The ability of cultists to double down on the blatantly disproved is now our greatest danger. As in this dazzlingly evil-stupid call for more tax cuts for the rich.

Oh, if only we had my "disputations arenas" or some other top venue for challenging spell-weavers to back up their magical incantations with cash! This doubling (sextupling!)-down on Supply Side 'voodoo' promotes what is by now a true and proved psychosis. Utter refusal by the tightly-disciplined Republican political caste to face truth about their 40 years of huge, deca-trillion dollar experiments in priming the industrial pump at the top.

You need to hammer this. Not one major prediction made for that "theory" ever came true. Not one, ever.

- No flood of investment in productive industry or R&D. (As Adam Smith described, the rich pour most of their tax largesse into passive rentier properties, stock buybacks, CEO packages, capital preservation, asset bubbles and now frippy "nonexistent" artworks, reducing available capital and driving down money velocity.)

- No surge of economic activity, leading to tax revenue.

- No erasure of debt. In fact, deficit curves reveal Democrats are always more fiscally prudent than Republicans, and that is always, always. (Escrow wager stakes on that, or admit you are dogmatist cowards.) Republican Congresses - the laziest in US history - have had one priority, to protect this vampire suck from America's carotid arteries. 

In contrast, Keynesian interventions - when balanced by pay-downs in good times (e.g. Jerry Brown, Clinton, Newsom) - nearly always raise money velocity, investment, tax revenue, production and middle class health, while turning debt ratios downward. Oh, and doing something for our kids, like infrastructure, Earth-saving and social justice.

I know several "economics pundits" who are very well aware of all this and admit it privately, but are terrified of their oligarchic masters. They do not dispute these facts, in private. They admit that the rich will have to go back to paying the legitimate shares that our parents in the Greatest Generation assigned to them, rates that correlated with the best growth and middle class health increases in the history of the human species. 

And yet, they are paid to drag their feet and distract from our desperate need to send today's GOP to the showers. They do their job for oligarchy by distracting with endless denunciations of the Federal Reserve, shouting "Fed! Fed-Fed-Fed!" Or... "squirrel!"

The longer we put off the Great Reset, back to the Greatest Generation's successful social contract -- that stymied Marx by co-opting the workers into a rising middle class -- the angrier will be the ensuing middle class and working class demands, and the more the right will succeed at what seems to be their main project... resuscitating Marx from the grave and setting his scenarios back in motion.

Put it off long enough? It will be too late for Rooseveltism. Tumbrels will roll.

I don't want that. These fools are acting like they do. 

Or else, as if they are being blackmailed. And that, alas, is the likeliest explanation of all.


Cryptogram History of the HX-63 Rotor Machine

Jon D. Paul has written the fascinating story of the HX-63, a super-complicated electromechanical rotor cipher machine made by Crypto AG.

Worse Than FailureError'd: Just Doer It

Testing in production again, here's five fails for the fifth day of the week. Or the sixth. Or is it the fourth?

Anonymous Ignoronymous declares "Dude! Science tests were never popular!"



While pronouncedly polyonymous Scott Christian Simmons proffers "How do doers forget to do their ONE JOB"



And probably pseudonymous Like H. explains it's because "More doing, less testing"



But querulous Quentin quibbles "Only shown in test mode. Oh well, what the hell! Nothing like testing the full user base. "



Finally, longtime reader, rare contributor Robin Z. shares: "While trying to see if tickets for UK Games Expo are available yet, I saw they had a shop section - but nothing to see there, apart from some good old testing in production."



[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Krebs on SecurityGift Card Gang Extracts Cash From 100k Inboxes Daily

Some of the most successful and lucrative online scams employ a “low-and-slow” approach — avoiding detection or interference from researchers and law enforcement agencies by stealing small bits of cash from many people over an extended period. Here’s the story of a cybercrime group that compromises up to 100,000 email inboxes per day, and apparently does little else with this access except siphon gift card and customer loyalty program data that can be resold online.

The data in this story come from a trusted source in the security industry who has visibility into a network of hacked machines that fraudsters in just about every corner of the Internet are using to anonymize their malicious Web traffic. For the past three years, the source — we’ll call him “Bill” to preserve his requested anonymity — has been watching one group of threat actors that is mass-testing millions of usernames and passwords against the world’s major email providers each day.

Bill said he’s not sure where the passwords are coming from, but he assumes they are tied to various databases for compromised websites that get posted to password cracking and hacking forums on a regular basis. Bill said this criminal group averages between five and ten million email authentication attempts daily, and comes away with anywhere from 50,000 to 100,000 of working inbox credentials.

In about half the cases the credentials are being checked via “IMAP,” which is an email standard used by email software clients like Mozilla’s Thunderbird and Microsoft Outlook. With his visibility into the proxy network, Bill can see whether or not an authentication attempt succeeds based on the network response from the email provider (e.g. mail server responds “OK” = successful access).

You might think that whoever is behind such a sprawling crime machine would use their access to blast out spam, or conduct targeted phishing attacks against each victim’s contacts. But based on interactions that Bill has had with several large email providers so far, this crime gang merely uses custom, automated scripts that periodically log in and search each inbox for digital items of value that can easily be resold.

And they seem particularly focused on stealing gift card data.

“Sometimes they’ll log in as much as two to three times a week for months at a time,” Bill said. “These guys are looking for low-hanging fruit — basically cash in your inbox. Whether it’s related to hotel or airline rewards or just Amazon gift cards, after they successfully log in to the account their scripts start pilfering inboxes looking for things that could be of value.”

A sample of some of the most frequent search queries made in a single day by the gift card gang against more than 50,000 hacked inboxes.

According to Bill, the fraudsters aren’t downloading all of their victims’ emails: That would quickly add up to a monstrous amount of data. Rather, they’re using automated systems to log in to each inbox and search for a variety of domains and other terms related to companies that maintain loyalty and points programs, and/or issue gift cards and handle their fulfillment.

Why go after hotel or airline rewards? Because these accounts can all be cleaned out and deposited onto a gift card number that can be resold quickly online for 80 percent of its value.

“These guys want that hard digital asset — the cash that is sitting there in your inbox,” Bill said. “You literally just pull cash out of peoples’ inboxes, and then you have all these secondary markets where you can sell this stuff.”

Bill’s data also shows that this gang is so aggressively going after gift card data that it will routinely seek new gift card benefits on behalf of victims, when that option is available.  For example, many companies now offer employees a “wellness benefit” if they can demonstrate they’re keeping up with some kind of healthy new habit, such as daily gym visits, yoga, or quitting smoking.

Bill said these crooks have figured out a way to tap into those benefits as well.

“A number of health insurance companies have wellness programs to encourage employees to exercise more, where if you sign up and pledge to 30 push-ups a day for the next few months or something you’ll get five wellness points towards a $10 Starbucks gift card, which requires 1000 wellness points,” Bill explained. “They’re actually automating the process of replying saying you completed this activity so they can bump up your point balance and get your gift card.”

The Gift Card Gang’s Footprint

How do the compromised email credentials break down in terms of ISPs and email providers? There are victims on nearly all major email networks, but Bill said several large Internet service providers (ISPs) in Germany and France are heavily represented in the compromised email account data.

“With some of these international email providers we’re seeing something like 25,000 to 50,000 email accounts a day get hacked,” Bill said.  “I don’t know why they’re getting popped so heavily.”

That may sound like a lot of hacked inboxes, but Bill said some of the bigger ISPs represented in his data have tens or hundreds of millions of customers.

Measuring which ISPs and email providers have the biggest numbers of compromised customers is not so simple in many cases, nor is identifying companies with employees whose email accounts have been hacked.

This kind of mapping is often more difficult than it used to be because so many organizations have now outsourced their email to cloud services like Gmail and Microsoft Office365 — where users can access their email, files and chat records all in one place.

“It’s a little complicated with Office 365 because it’s one thing to say okay how many Hotmail connections are you seeing per day in all this credential-stuffing activity, and you can see the testing against Hotmail’s site,” Bill said. “But with the IMAP traffic we’re looking at, the usernames being logged into are any of the million or so domains hosted on Office365, many of which will tell you very little about the victim organization itself.”

On top of that, it’s also difficult to know how much activity you’re not seeing.

Looking at the small set of Internet address blocks he knows are associated with Microsoft 365 email infrastructure, Bill examined the IMAP traffic flowing from this group to those blocks. Bill said that in the first week of April 2021, he identified 15,000 compromised Office365 accounts being accessed by this group, spread over 6,500 different organizations that use Office365.

“So I’m seeing this traffic to just like 10 net blocks tied to Microsoft, which means I’m only looking at maybe 25 percent of Microsoft’s infrastructure,” Bill explained. “And with our puny visibility into probably less than one percent of overall password stuffing traffic aimed at Microsoft, we’re seeing 600 Office accounts being breached a day. So if I’m only seeing one percent, that means we’re likely talking about tens of thousands of Office365 accounts compromised daily worldwide.”

In a December 2020 blog post about how Microsoft is moving away from passwords to more robust authentication approaches, the software giant said an average of one in every 250 corporate accounts is compromised each month. As of last year, Microsoft had nearly 240 million active users, according to this analysis.
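The arithmetic behind those two estimates is simple enough to check (a sketch; the input figures are the article's estimates, not precise measurements):

```python
# Bill sees ~600 Office365 accounts breached per day while estimating
# visibility into only ~1% of the password-stuffing traffic, so scale
# the observation up by a factor of 100.
observed_per_day = 600
estimated_per_day = observed_per_day * 100
assert estimated_per_day == 60_000   # "tens of thousands ... daily"

# Microsoft's figure: one in every 250 corporate accounts compromised
# each month. Applied naively across ~240 million active users:
compromised_per_month = 240_000_000 // 250
assert compromised_per_month == 960_000
```

Both are order-of-magnitude estimates, but they point in the same direction: account compromise at this scale is routine, not exceptional.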

“To me, this is an important story because for years people have been like, yeah we know email isn’t very secure, but this generic statement doesn’t have any teeth to it,” Bill said. “I don’t feel like anyone has been able to call attention to the numbers that show why email is so insecure.”

Bill says that in general companies have a great many more tools available for securing and analyzing employee email traffic when that access is funneled through a Web page or VPN, versus when that access happens via IMAP.

“It’s just more difficult to get through the Web interface because on a website you have a plethora of advanced authentication controls at your fingertips, including things like device fingerprinting, scanning for http header anomalies, and so on,” Bill said. “But what are the detection signatures you have available for detecting malicious logins via IMAP?”

Microsoft declined to comment specifically on Bill’s research, but said customers can block the overwhelming majority of account takeover efforts by enabling multi-factor authentication.

“For context, our research indicates that multi-factor authentication prevents more than 99.9% of account compromises,” reads a statement from Microsoft. “Moreover, for enterprise customers, innovations like Security Defaults, which disables basic authentication and requires users to enroll a second factor, have already significantly decreased the proportion of compromised accounts. In addition, for consumer accounts, adding a second authentication factor is required on all accounts.”

A Mess That’s Likely to Stay That Way

Bill said he’s frustrated by having such visibility into this credential testing botnet while being unable to do much about it. He’s shared his data with some of the bigger ISPs in Europe, but says months later he’s still seeing those same inboxes being accessed by the gift card gang.

The problem, Bill says, is that many large ISPs lack any sort of baseline knowledge of or useful data about customers who access their email via IMAP. That is, they lack any sort of instrumentation to be able to tell the difference between legitimate and suspicious logins for their customers who read their messages using an email client.

“My guess is in a lot of cases the IMAP servers by default aren’t logging every search request, so [the ISP] can’t go back and see this happening,” Bill said.

Confounding the challenge, there isn’t much of an upside for ISPs interested in voluntarily monitoring their IMAP traffic for hacked accounts.

“Let’s say you’re an ISP that does have the instrumentation to find this activity and you’ve just identified 10,000 of your customers who are hacked. But you also know they are accessing their email exclusively through an email client. What do you do? You can’t flag their account for a password reset, because there’s no mechanism in the email client to affect a password change.”

Which means those 10,000 customers are then going to start receiving error messages whenever they try to access their email.

“Those customers are likely going to get super pissed off and call up the ISP mad as hell,” Bill said. “And that customer service person is then going to have to spend a bunch of time explaining how to use the webmail service. As a result, very few ISPs are going to do anything about this.”

Indicators of Compromise (IoCs)

It’s not often KrebsOnSecurity has occasion to publish so-called “indicators of compromise” (IoC)s, but hopefully some ISPs may find the information here useful. This group automates the searching of inboxes for specific domains and trademarks associated with gift card activity and other accounts with stored electronic value, such as rewards points and mileage programs.

This file includes the top inbox search terms used in a single 24 hour period by the gift card gang. The numbers on the left in the spreadsheet represent the number of times during that 24 hour period where the gift card gang ran a search for that term in a compromised inbox.

Some of the search terms are focused on specific brands — such as Amazon gift cards or Hilton Honors points; others are for major gift card networks like CashStar, which issues cards that are white-labeled by dozens of brands like Target and Nordstrom. Inboxes hacked by this gang will likely be searched on many of these terms over the span of just a few days.

LongNowThe Paleoclimate & You: How Ancient Climatological Data Helps Us Understand Modern Climate Change

Sediment cores like these can help uncover the deep climatological history of the earth and provide insight into our climate futures. Courtesy of Hannes Grobe AWI/CRP

The 02021 Working Group I contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, widely known as the 02021 IPCC report, is a massive document. Drawing on more than 14,000 studies, the report synthesizes the state of contemporary climate science. It paints a dire picture of the possible futures for earth’s climate, predicting warming of at least 2.5 degrees celsius by 02100 barring a rapid drawdown in carbon dioxide emissions to the atmosphere.

In outlining its conclusions on the next century of the earth’s climate, the IPCC report uses a large amount of historical data. Much of this data is of relatively recent vintage, reaching back through the last 150 to 200 years of verifiable, scientifically collected observations. From these data points, the report can conclude, for example, that over the last decade Arctic sea ice has contracted to its lowest average area since at least 01850. Yet the IPCC report also draws on a far older dataset: the paleoclimate record. 

The paleoclimate record is not contained in any one archive, and stretches far beyond human recorded history. It is found in nature’s own memory: sediment, peat, and glacier ice records that stretch back more than 100,000 years in earth’s history in some cases. In a recent interview following the publication of the IPCC report, lead author Kim Cobb, a climate scientist at the Georgia Institute of Technology, told Scientific American that paleoclimate data allowed the IPCC to “capture the full breadth of natural variability in Earth’s climate system” in a way that human records over the past 150 years simply could not.

The tell-tale ash layer shown in this Antarctic ice core indicates the Toba supervolcano eruption, which occurred in northern Sumatra 75,000 years ago. Courtesy of Guillaume Dargaud 

Paleoclimate cores help us understand what the ecological and geological consequences of 2+ degree Celsius warming, unprecedented in recent history, would look like in the long term. By revisiting sediment core slices dated back to the last interglacial period 125,000 years ago, we can observe the conditions that an extended period of climate 2 degrees above modern preindustrial earth led to: a completely melted ice sheet, with sea levels five to ten meters above modern levels.

Of course, Cobb notes, “None of Earth’s past warm periods is an appropriate analogue for what we’re seeing today.” The past interglacial period was reached over the course of thousands of years of gradual warming — a snail’s pace compared to the rapid change in climate since the dawn of the industrial revolution.

Learn More

  • Read Cobb’s full interview with Scientific American’s Katarina Zimmer
  • Read the executive summary of the IPCC report, including its paleoclimate findings
  • For another creatively-sourced analysis of historical climate data, watch historian Brian Fagan’s 02007 Seminar on how “We Are Not the First to Suffer Through Climate Change,” focusing on how vineyard harvest records from 00800-01250 CE show the warming climate of medieval Europe.

Worse Than FailureCodeSOD: Dangerous Tools for Dangerous Users

Despite being a programmer, I make software that goes in big things, which means that my workplace frequently involves the operation of power tools. My co-workers… discourage me from using the power tools. I'm not the handiest of people, and thus watching me work is awkward, uncomfortable, and creates a sense of danger. I'm allowed to use impact drivers and soldering irons, but when it comes time to use some of the more complex power saws, people gently nudge me aside.

There are tools that are valuable, but are just dangerous in the hands of the inexperienced. And even if they don't hurt themselves, you might end up boggling at the logic which made them use the tool that way. I'm talking, of course, about pre-compiler macros.

Lucas inherited some C++ code that depends on some macros, like this one:

#define CONDITION_CHECK(condition, FailureOp) if(!(condition)){FailureOp;}

This isn't, in-and-of-itself, terrible. It could definitely help with readability, especially with the right conditions. So what does it look like in use?

switch( transaction_type )
{
case TYPE_ONE:
case TYPE_TWO:
    return new CTransactionHandlerClass(/*stuff*/);
default:
    break;
}
CONDITION_CHECK( false, return NULL );


Even if we generously wanted to permit the use of literal true/false flags as some sort of debugging flag, this clearly isn't one of the situations where that makes sense. We always want to return NULL. There's never a time where we'd flip that flag to true to not return NULL.

CSpecificTransactionClass* pTrans = dynamic_cast< CAbstractTransactionClass* >( command );
if ( pTrans )
{
    CONDITION_CHECK( pTrans, return false );
    //stuff…
}

So, we do a dynamic cast, which if it fails is going to return a null value. So we have to check to see if it succeeded, which is what our if statement is doing. Once we know it succeeded, we immediately check to see if it failed.

In this case, that CONDITION_CHECK is just useless. But why be useless when you can also be too late?

CBaseCustomer *pCustomer;
CRetailCustomer *pRetail = new CRetailCustomer;
pCustomer = pRetail;
pCustomer->SetName( pName );
CONDITION_CHECK( pRetail, return false );

So, here, we have a safety check… after we interact with the possibly-not-initialized object. Better late than never, I suppose.
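For contrast, the guard only earns its keep when it runs before the first dereference. Here's a minimal standalone sketch of that ordering, using a hypothetical Customer stand-in rather than the article's actual classes:

```cpp
#include <string>

#define CONDITION_CHECK(condition, FailureOp) if(!(condition)){FailureOp;}

// Hypothetical stand-in for the article's customer classes.
struct Customer {
    std::string name;
    void SetName(const std::string& n) { name = n; }
};

// The check runs *before* the first member call, so a null pointer
// produces a clean `false` instead of undefined behavior.
bool RenameCustomer(Customer* pCustomer, const std::string& newName) {
    CONDITION_CHECK(pCustomer, return false);
    pCustomer->SetName(newName);
    return true;
}
```

Placed after the `SetName` call, as in the original, the macro can only report a disaster that has already happened.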

Once, at work, someone handed me a bit of lumber and a hand saw, and told me to cut it. So I started, and after about a minute of watching me fail, they pointed out that the way I'd supported the lumber was causing it to bind the saw and pinch, because I had no clue what I was doing.

Which is to say: I think the developer writing and using this macro is much like me and the handsaw. It should be simple, it should be obvious, but when you have no clue what you're doing, you might not hurt yourself, but you'll make your co-workers laugh at you.



Krebs on Security15-Year-Old Malware Proxy Network VIP72 Goes Dark

Over the past 15 years, a cybercrime anonymity service known as VIP72 has enabled countless fraudsters to mask their true location online by routing their traffic through millions of malware-infected systems. But roughly two weeks ago, VIP72’s online storefront — which ironically enough has remained at the same U.S.-based Internet address for more than a decade — simply vanished.

Like other anonymity networks marketed largely on cybercrime forums online, VIP72 routes its customers’ traffic through computers that have been hacked and seeded with malicious software. Using services like VIP72, customers can select network nodes in virtually any country, and relay their traffic while hiding behind some unwitting victim’s Internet address.

The domain Vip72[.]org was originally registered in 2006 to “Corpse,” the handle adopted by a Russian-speaking hacker who gained infamy several years prior for creating and selling an extremely sophisticated online banking trojan called A311 Death, a.k.a. “Haxdoor,” and “Nuclear Grabber.” Haxdoor was way ahead of its time in many respects, and it was used in multiple million-dollar cyberheists long before multimillion-dollar cyberheists became daily front-page news.

An ad circa 2005 for A311 Death, a powerful banking trojan authored by “Corpse,” the administrator of the early Russian hacking clique Prodexteam. Image: Google Translate via

Between 2003 and 2006, Corpse focused on selling and supporting his Haxdoor malware. Emerging in 2006, VIP72 was clearly one of his side hustles that turned into a reliable moneymaker for many years to come. And it stands to reason that VIP72 was launched with the help of systems already infected with Corpse’s trojan malware.

The first mention of VIP72 in the cybercrime underground came in 2006 when someone using the handle “Revive” advertised the service on Exploit, a Russian language hacking forum. Revive established a sales presence for VIP72 on multiple other forums, and the contact details and messages shared privately by that user with other forum members show Corpse and Revive are one and the same.

When asked in 2006 whether the software that powered VIP72 was based on his Corpse software, Revive replied that “it works on the new Corpse software, specially written for our service.”

One denizen of a Russian language crime forum who complained about the unexplained closure of VIP72 last month said they noticed a change in the site’s domain name infrastructure just prior to the service’s disappearance. But that claim could not be verified, as there simply are no signs that any of that infrastructure changed prior to VIP72’s demise.

In fact, until mid-August VIP72’s main home page and supporting infrastructure had remained at the same U.S.-based Internet address for more than a decade — a remarkable achievement for such a high-profile cybercrime service.

Cybercrime forums in multiple languages are littered with tutorials about how to use VIP72 to hide one’s location while engaging in financial fraud. From examining some of those tutorials, it is clear that VIP72 is quite popular among cybercriminals who engage in “credential stuffing” — taking lists of usernames and passwords stolen from one site and testing how many of those credentials work at other sites.

Corpse/Revive also long operated an extremely popular service called check2ip[.]com, which promised customers the ability to quickly tell whether a given Internet address is flagged by any security companies as malicious or spammy.

Hosted on the same Internet address as VIP72 for the past decade until mid-August 2021, Check2IP also advertised the ability to let customers detect “DNS leaks,” instances where configuration errors can expose the true Internet address of hidden cybercrime infrastructure and services online.

Check2IP is so popular that it has become a verbal shorthand for basic due diligence in certain cybercrime communities. Also, Check2IP has been incorporated into a variety of cybercrime services online — but especially those involved in mass-mailing malicious and phishing email messages.

Check2IP, an IP reputation service that told visitors whether their Internet address was flagged in any spam or malware block lists.

It remains unclear what happened to VIP72; users report that the anonymity network is still functioning even though the service’s website has been gone for two weeks. That makes sense since the infected systems that get resold through VIP72 are still infected and will happily continue to forward traffic so long as they remain infected. Perhaps the domain was seized in a law enforcement operation.

But it could be that the service simply decided to stop accepting new customers because it had trouble competing with an influx of newer, more sophisticated criminal proxy services, as well as with the rise of “bulletproof” residential proxy networks. For most of its existence until recently, VIP72 normally had several hundred thousand compromised systems available for rent. By the time its website vanished last month, that number had dwindled to fewer than 25,000 systems globally.

Rondam RamblingsGame over for Roe v. Wade -- and constitutional rights

You may not have heard, but the Supreme Court overturned Roe v. Wade yesterday.  They did it covertly, by failing to act on Texas's devilishly clever end-run around the Constitution.  And in failing to act, they have effectively terminated the rule of law in the United States and opened a Pandora's box of vigilanteism powered by civil lawsuits, against which the Constitution can offer no

Kevin RuddStatement: COVID-19 Emergency in Indigenous Communities

Co-Chair, National Apology Foundation



1 September 2021

From the very outset of this pandemic, our First Australians and their advocates have warned about the risk of COVID-19 carving a path of destruction through regional and remote communities.

The distance of these communities from suitably equipped medical facilities compounds the reality that Indigenous Australians are already more vulnerable to this virus than most, given the persistent gap in health outcomes.

On Monday, the Guardian published a letter sent from the Maari Ma Health Aboriginal Corporation to the Morrison Government some 18 months ago. The letter highlighted “grave fears” over inadequate protections against COVID-19 in communities like Wilcannia in NSW.

Now, more than 10 per cent of Wilcannia’s population has contracted COVID-19. Other communities across regional NSW are also reporting worrying outbreaks. The NSW Aboriginal Health and Medical Research Council fears that Indigenous COVID cases could reach 1,000 this week.

Despite Indigenous Australians being a “priority group”, the Indigenous vaccination rate is more than 17 per cent lower than the non-Indigenous vaccination rate in Western and Far West NSW.

Health providers and community leaders have done extremely well to save lives throughout the pandemic, but they need more support.

First Australians and their advocates are calling for urgent action to address these outbreaks and protect their communities from future outbreaks. I stand with them.

The Prime Minister is fond of talking about reopening the economy when 80 per cent of Australians nationwide are vaccinated. I ask him: does that mean 80 per cent coverage in every regional and remote community as well?

The NSW and federal governments must as a matter of urgency take all necessary means to contain the risk of further spread in remote communities. If they fail, the consequences could be catastrophic.

These measures should include an immediate increase in vaccine supply, even if this is made more difficult by the federal government’s failure to obtain adequate supplies of vaccines nationally; mobile vaccination teams and more comprehensive public information campaigns, including in Indigenous languages.

The post Statement: COVID-19 Emergency in Indigenous Communities appeared first on Kevin Rudd.

LongNowThe Historical Land Practices Behind California’s Fires

A suspension bridge-- the sky behind it is orange due to wildfires
The skies of the Bay Area turned orange in September 02020, as the smoke from the complex of wildfires throughout the Bay overwhelmed the sky. Image courtesy of Long Now Speaker and photographer Christopher Michel.

Here at Long Now’s offices in San Francisco, we are in the midst of California’s fire season. The fire season is an ever-expanding span of time typically judged to peak between August and October, though the California Department of Forestry and Fire Protection has warned for years of the dawn of a nearly year-round season. The Dixie Fire, now almost half contained, spread over 700,000 acres of land in the Northeast of the state, while the Monument Fire to its West still rages over the comparatively small span of 150,000 acres. 

Fire has always been a part of California’s ecology. The plant life of the state, from the dusty chaparral of the central and southern coasts to the giant sequoias of the state’s redwood forests, has adapted to millions of years of natural, lightning-sparked fire through fire-germinated and heat-resistant seeds. Yet blazes of this size, frequency, and persistence throughout the year are unprecedented, one-in-a-century events happening on a near-annual basis. 

Controlled burns once allowed for trees like this sequoia to grow, and may be key for reducing the risk of more disastrous, uncontrolled fires. Image courtesy of Matt Holly/NPS

The causes of the fire season’s intensification in recent years are many. Climate change is a driver for many of them, from the increasingly dry climate of the state to the earlier beginning of snowmelt from the Sierras. Yet an older set of human decisions about the earth also plays a key role. In 02016, ecologist Stephen Pyne gave a Seminar at Long Now about the history of humanity’s relationship with fire, including the “century of misdirection about wildfire” brought about by the exportation of European norms on fire safety to the drier, fire-germinated ecosystems of the Americas and Asia.

In The Drift, anthropologist Jordan Thomas lays out how the shift in California specifically from Indigenous Californian fire practices, which typically included controlled, intentional burns, to European and later American fire suppression has increased the size and damage caused by wildfires.

As the 20th century ticked by, forests became tinderboxes. Just as fuel accumulated in the trees, carbon accumulated in the atmosphere, and with each slight temperature increase, high-risk burn zones spread outwards from the mountains. Meanwhile, populations spread farther from urban cores, increasing the chance of fire ignitions. These factors have converged to ensure that fires, when they do occur, are now explosive. Forest managers are beginning to backtrack against the Euro-American legacy of fire suppression inherited from Arrillaga and his heirs, but, as fire seasons expand and winter months contract, they may be running out of time.

Fire suppression policies have shaped California’s ecosystems since the days of Spanish colonialism in the region, and only strengthened over the course of the prevention-focused twentieth century regime of the U.S. Forest Service. In recent years, land stewardship and Indigenous rights groups in Northern California and elsewhere have pushed for greater investment in traditional fire management practices, drawing on millennia of successful human-ecosystem management to deal with a modern crisis.

Learn More:

  • For the full story, which includes first hand accounts of firefighting through the Dolan Fire of August 02020, read Thomas’ piece in The Drift here.
  • Ecologist Laura Cunningham’s 02011 Seminar about Ten Millennia of California Ecology provides a long and broad view of the ecological systems that have shaped Californian life
  • Science Journalist and Historian Charles C. Mann’s 02012 Seminar about Living in the Homogenocene grapples with what he terms the “eco-convulsions” of the past 500 years, including mass deforestation events.

Worse Than FailureCodeSOD: Maintaining Yourself

When moving from one programming language to another, it's easy to slip into idioms that might be appropriate in one, but are wildly out of place in another. Tammy's company has some veteran developers who spent most of their careers developing in COBOL, and now they're developing in C#. As sufficiently determined programmers, they're finding ways to write COBOL in C#. Bad COBOL.

Methods tend to be long: one thousand lines of code isn't unusual. Longer methods exist. Every method starts with a big long pile of variable declarations. Objects are used more as namespaces than anything relating to object-oriented design. So much data is passed around as 2D arrays, because someone liked working with data like it lived in a table.

But this method comes from one specific developer, who had been with the company for 25+ years. Tammy noticed it because, well, it's short, and doesn't have much by way of giant piles of variables. It's also, in every way, wrong.

public static bool _readOnlyMode;
public static bool _hasDataChanged;
//...
public bool HasDataChanged()
{
    try
    {
        if (_readOnlyMode == false)
        {
            _hasDataChanged = true;
            btnPrint.Enabled = true;
            btnPrintPreview.Enabled = true;
            btnSave.Enabled = true;
            btnCancel.Enabled = true;
        }
    }
    catch (Exception ex)
    {
    }
}

So, first, we note that _readOnlyMode and _hasDataChanged are static, reinforcing the idea that this class isn't an object, but it's a namespace for data. Except the method, HasDataChanged, isn't static, which is going to be fun for everyone else trying to trace through how data and values change. Note, also, that these names use the _ prefix, a convention used to identify private variables, but these are explicitly public.

The method is marked as a bool function, but never returns a value. The name and signature imply that this is a simple "question" method: has the data changed? True or false. But that's not what this method does at all. It checks if _readOnlyMode is false, and if it is, we enable a bunch of controls and set _hasDataChanged to true. Instead of returning a value, we are just setting the global public static variable _hasDataChanged.

And the whole thing is wrapped up in an exception handler that ignores the exception. Which, either none of this code could ever throw an exception, making that pointless, or there's no guarantee that all the btn… objects actually exist and there's a chance of null reference exceptions.

Tammy has inherited this application, and her assignment is to perform "maintenance". This means she doesn't touch the code until something breaks, then she spends a few days trying to replicate the error and then pick apart the thicket of global variables and spaghetti code to understand what went wrong, then usually instead of changing the code, she provides the end user with instructions on how to enter that data in a way that won't cause the application to crash, or manually updates a database record that's causing the crash. "Maintenance" means "keep in the same way", not "fix anything".



Cryptogram Zero-Click iPhone Exploits

Citizen Lab is reporting on two zero-click iMessage exploits, in spyware sold by the cyberweapons arms manufacturer NSO Group to the Bahraini government.

These are particularly scary exploits, since they don’t require the victim to do anything, like click on a link or open a file. The victim receives a text message, and then they are hacked.

More on this here.

Sociological ImagesBack to Basics: Selling Sociology

Photo courtesy Letta Page

Despite, well, everything, we are trying to get back into the classroom as much as we can at the start of a new academic year. I am scheduled to teach Introduction to Sociology for the first time this coming spring and planning the course this fall.

Whether in person or remote, I will be ecstatic to introduce our field to a new batch of students — to show them what sociologists do, how we work, and how we think about the world. Thinking about those foundations, the start of an academic year is a great time to come back and ask “what, exactly, are we doing?”

I have been thinking a lot about that question in our current chaotic moment and in the context of sociology’s changing role in higher education. This chart made by Philip Cohen keeps coming to mind:

Source: Philip Cohen – original post at Family Inequality

There are a lot of reasons for the decline in sociology majors, and reflections on our purpose as a field are not new at all (examples here, here, and here, and on the social sciences in general here). We all bring different ideas about our common methods and missions, and our field has plenty of room for many different sociologies. I like big-tent approaches like the one here at The Society Pages.

For newcomers, though, that range makes it hard to grasp what sociologists actually do, and that makes it tough to do right by our students. At some point, someone is going to ask a new sociology major the dreaded question: “what do you do with that?” I think we have a responsibility to model ways to answer that question clearly and directly, even if we don’t want to lock students into narrow careerist ambitions. A wonky answer about ~society~ doesn’t necessarily help them.

That’s why I love these recent podcast episodes with Zeynep Tufekci. In each case, the hosts ask her how she got so much right about COVID-19 so early in the pandemic. In both, her answers explicitly show us how insights about relationships, organizations, and stigma helped to guide her thinking. These interviews are a model for showing us what sociological thinking actually can do to address pressing issues.

Far too often, our institutions miss out on the benefits of thinking about social systems and relationships in this way. Sources like these help to sell sociology to our students, and they will be a big part of my upcoming intro course. In the coming weeks, we’ll be running more posts that focus on going back to basics for newcomers in sociology, including updates to our “What’s Trending?” series and more content for the intro classroom. Stay tuned, and share how you sell sociology to your students!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.


Worse Than FailureCodeSOD: Mark it Zero

Sometimes, we get a submission and the submitter provides absolutely no context. Sometimes, the snippet is even really short, or the problem is "subtle" in a way that means we can't spot it when skimming the inbox.

Which is why this submission from "Bus Error" has been sitting in the inbox for seven years, and why just today, as I was skimming old submissions, I finally saw what was great about this little block of C-code.

struct sockaddr_in serv_addr;
memset(&serv_addr, '0', sizeof(serv_addr));

The goal of this block of code is to ensure that the serv_addr variable is set to all zeros before it gets used elsewhere. But instead of doing that, this sets it to all '0'- the character, not the number. So instead, this initializes all the memory to 0x30.

Now, given that the first field in a sockaddr_in is the address family, which must be AF_INET on Linux, this is definitely not going to work. Even if it was actually memsetting to 0, it still wouldn't be a correct thing to do in this case. Plus, there's a whole family of methods meant to initialize this structure to the correct usable values, so this memset shouldn't be there at all.
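The gap between the two calls is easy to see in isolation. This sketch (standalone, not from the submitted program) fills a buffer both ways and reads back the first byte:

```cpp
#include <cstring>

// Fill a 16-byte buffer with `fill` via memset and return the first byte.
unsigned char first_byte_after_memset(int fill) {
    unsigned char buf[16];
    std::memset(buf, fill, sizeof(buf));
    return buf[0];
}
// first_byte_after_memset('0') yields 0x30 (the ASCII code for '0'),
// while first_byte_after_memset(0) yields an actual zero byte.
```

Applied to a sockaddr_in, the '0' version sets every byte to 0x30, so the address-family field holds 0x3030 rather than zero, and no later comparison against AF_INET can ever succeed.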

Without context, I suspect this was programming by incantation: someone learned the hard way that variables in C don't start with a known value, so they decided to make sure that every variable they declared got zeroed out. It was a magic spell that solved a problem they didn't understand.



Cryptogram More on Apple’s iPhone Backdoor

In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.

Apple says that hash collisions in its CSAM detection system were expected, and not a concern. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.

Good op-ed from a group of Princeton researchers who developed a similar system:

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.

Cryptogram More Military Cryptanalytics, Part III

Late last year, the NSA declassified and released a redacted version of Lambros D. Callimahos’s Military Cryptanalytics, Part III. We just got most of the index. It’s hard to believe that there are any real secrets left in this 44-year-old volume.

Kevin RuddStatement: Vale Senator Alex Gallacher

I am deeply saddened to learn that Senator Alex Gallacher has passed away.

Throughout his career, Senator Gallacher was a robust advocate for Australian working families, driven by a profound belief that collective action could ensure safe, secure and well-paying jobs for all.

I am especially grateful for Senator Gallacher’s efforts to defend workers from assaults on their wages and conditions (including his role in the fight against Work Choices), and for sticking up for their right to retire in dignity by protecting and enhancing compulsory superannuation.

Senator Gallacher will be sorely missed by his fellow transport workers. He brought to the nation’s parliament the hands-on experience of their industry and was an unwavering advocate for them, especially when it came to their safety at work.

I join with Anthony Albanese and the entire Labor family in expressing my condolences to his wife, Paola, and all their loved ones.

The post Statement: Vale Senator Alex Gallacher appeared first on Kevin Rudd.

MELinks August 2021

Sciencealert has an interesting article on a game to combat misinformation by “microdosing” people [1]. The game seemed overly simplistic to me, but I guess I’m not the target demographic. Research shows it to work.

Vice has an interesting and amusing article about mass walkouts of underpaid staff in the US [2]. The way that corporations are fighting an increase in the minimum wage doesn’t seem financially beneficial for them. An increase in the minimum wage means small companies have to increase salaries too and the ratio of revenue to payroll is probably worse for small companies. It seems that companies like McDonalds make oppressing their workers a higher priority than making a profit.

Interesting article in Vice about how the company Shot Spotter (which determines the locations of gunshots by sound) forges evidence for US police [3]. All convictions based on Shot Spotter evidence should be declared mistrials.

BitsNBites has an interesting article on the “fundamental flaws” of SIMD (Single Instruction Multiple Data) [4].

The Daily Dot has a disturbing article about the possible future of the QAnon movement [5]. Let’s hope they become too busy fighting each other to hurt many innocent people.

Ben Taylor wrote an interesting blog post suggesting that Web Assembly should be a default binary target [6]. I don’t support that idea but I think that considering it is useful. Web Assembly could be used more for non-web things and it would be a better option than Node.js for some things. There are also some interesting corner cases like games: Minecraft was written in Java, and there’s no reason that Web Assembly couldn’t do the same things.

Vice has an interesting article about the Phantom encrypted phone service that ran on Blackberry handsets [7]. Australia really needs legislation based on the US RICO law!

Vice has an interesting article about an encrypted phone company run by drug dealers [8]. Apparently after making an encrypted phone system for their own use they decided to sell it to others and made millions of dollars. They could have run a successful legal business.

Salon has an insightful interview with Michael Petersen about his research on fake news and people who share it because they need chaos [9]. Apparently low status people who are status seeking are a main contributor to this, they share fake news knowingly to spread chaos. A society with less inequality would have less problems with fake news.

Salon has another insightful interview with Michael Petersen, about his later research on fake news as an evolutionary strategy [10]. People knowingly share fake news to mobilise their supporters and to signal allegiance to their group. The more bizarre the beliefs are the more strongly they signal allegiance. If an opposing group has a belief then they can show support for their group by having the opposite belief (EG by opposing vaccination if the other political side supports doctors). He also suggests that lying can be a way of establishing dominance: the more honest people are opposed by a lie, the more dominant the liar may seem.

Vice has an amusing article about how police took over the Encrochat encrypted phone network that was mostly used by criminals [11]. It’s amusing to read of criminals getting taken down like this. It’s also interesting to note that the authorities messed up by breaking the wipe facility which alerted the criminals that their security was compromised. The investigation could have continued for longer if they hadn’t changed the functionality of compromised phones. A later vice article mentioned that the malware installed on Encrochat devices recorded MAC addresses of Wifi access points which was used to locate the phones even though they had the GPS hardware removed.

Cory Doctorow wrote an insightful article for Locus about the insufficient necessity of interoperability [12]. The problem with monopolies is not just the inability to interoperate with other services or to leave; it’s losing control over your life. A few cartel participants interoperating will be able to do all the bad things to us that a single monopolist could do.

Worse Than FailureCodeSOD: By Any Other Name

One of the biggest challenges in working with financial data is sanitizing the data into a canonical form. Oh, all the numeric bits are almost always going to be accurate, but when pulling data from multiple systems, is the name "John Doe", "John Q. Doe", "J. Doe"? What do we do when one system uses their mailing address and another uses their actual street address, which might use different municipality names? Or in all the other ways that important identifying information might have different representations.

This is Andres' job. Pull in data from multiple sources, massage it into some meaningful and consistent representation, and then hand it off to analysts who do some capitalism with it. Sometimes, though, it's not the data that needs to be cleaned up- it's the code. A previous developer provided this Visual Basic for Applications method for extracting first names:

Function getFirstnames(Name)
    Dim temp As String
    Dim parts
    Dim i As Long
    parts = Split(Trim(Name), " ", , vbTextCompare)
    'For i = LBound(parts) To UBound(parts)
    For i = UBound(parts) To UBound(parts)
        temp = parts(i)
        temp = Replace(Trim(Name), temp, "")
        Exit For
    Next i
    getFirstnames = Trim(temp)
End Function

Setting aside the falsehoods programmers believe about names, this is… uh… one way to accomplish the goal.

We start by splitting the string on spaces. Then we want to loop across it… sort of.

Commented out is a line that would be a more conventional loop. Note the use of LBound because VBA and older versions of Visual Basic allow you to use any starting index, so you can't assume the lower-bound is zero. This line would effectively loop across the array, if it were active.

Instead, we loop from UBound to UBound. That guarantees a one iteration loop, which opens a thorny philosophical question: if your loop body will only ever execute once, did you really write a loop?

Regardless, we'll take parts(i), the last element in the array, and chuck it into a temp variable. And then, we'll replace that value in the original string with an empty string. Then, just to be sure our loop which never loops never loops, we Exit For.

So, instead of getting the "first names", this might be better described as "stripping the surname". Except, and I know I said we were going to set aside the falsehoods programmers believe about names, the last name in someone's name isn't always their surname. Some cultures reverse the order. Spanish tradition gives everyone two surnames, from both parents, so "José Martinez Arbó" should be shortened to just "José", if our goal is the first name.

But there's still a more subtle bug in this, because it uses Replace. So, if the last name happens to be a substring of the other names, "Liam Alistair Li" would get turned into "am Astair", which is a potentially funny nickname, but I don't think a financial company should be on a nickname basis with their clients.



David Brin: David Brin's Annual Summer News Update! So many books & projects.

David Brin’s annual-summer update. This year lots of books & future stuff…
FIRST: a big re-issue of eight classics with great new covers, new author introductions, and newly re-edited material... starting with The Postman and The Practice Effect.

Plus I've recently re-released newly edited versions of all of my Uplift novels, starting with Sundiver, followed by Hugo winners Startide Rising and The Uplift War, as well as the Uplift Storm Trilogy, starting with Brightness Reef and continuing on to Infinity's Shore and Heaven's Reach. All with corrections, new introductions and better timelines... and great covers!

Best of Brin Short Stories: Subterranean Press is releasing a new collection, “The Best of David Brin," an anthology of my very best short stories, some new ones - and a play I've written! Now a gorgeous collectable hardcover (beautiful cover art by Patrick Farley). Here, you'll find tales from creepy to inspiring to thought-provoking -- and fun. Certainly some of my best writing. Sample stories free on my website!

(Also available... especially if you are on a budget... my earlier collections Insistence of Vision and The River of Time.)

== New YA series ==

And now, shifting gears... I'm releasing two great Science Fiction series for teens and young adult readers looking for something different... and yes, for those of you with a youthful yen for adventure!

Colony High: Aliens grab a California high school and relocate it to an alien planet -- and then come to regret it! In the spirit of Heinlein's Tunnel in the Sky. Winner of the Hal Clement Award for teen readers! Now expanded with two new sequels, soon to be released from Ring of Fire Press, starting with Castaways of New Mojave co-written with Jeff Carlson (cover art by Patrick Farley), and another upcoming sequel co-written with Steve Ruskin. Sample chapters of this series on my website! (And there's a full, 3-season TV treatment.)


The Out of Time series: If the future asked for help, would you go? A 24th Century utopia has no war, disease, injustice or crime... and no heroes! They reach back in time for some... but only the young are able to go. Maybe you?

Adventure novels by Nebula winners Nancy Kress, Sheila Finch etc. plus great newly added novels for that teen, pre-teen or young-soul -- starting with The Archimedes Gambit by Patrick Freivald -- soon to be released. Next up: Storm's Eye by October K. Santerelli. Stay tuned for updates.

And for a pandemic era. Sci fi that’s too pertinent? My Hugo-nominated short story, “The Giving Plague” explores our complex relations with viruses. Free access on my website or download this story free on Kindle. Also included in my collection, Otherness.

== Provocative nonfiction... and comedy! ==

Vivid Tomorrows: Science Fiction and Hollywood
Explore our love of far-out cinema, and how sci fi flicks may have saved us all. Wish your favorite directors would notice their repeated clichés? Pass 'em a copy! 

Here I offer chapters on 2001, The Matrix, Dune, LOTR, Ayn Rand, King Kong, zombies...and of course, Star Wars and Star Trek. 

Polemical Judo: Memes for Our Political Knife-Fight  Here you'll find political insights you’ve not seen before. 100+ practical tactics off the hoary left-right axis, to defend our Great Experiment - and forge a better future. Sample chapters available on my website.

And how about trying some comedy... in these trying times?

Feed your under-nourished guffaw-neurons! Comedy is hard! Some say that The Ancient Ones: A Space Comedy lifted spirits in a particularly challenging year.

Others say “Brin’s nuts!” 

Try sample chapters and decide. 
"Life... death.. and the living dead... will never be the same."

== A screenplay, a stage play and more ==

I've written a screenplay based on my novella "The Tumbledowns of Cleopatra Abyss." It offers vivid action and unusual visuals deep.... under the oceans of Venus. (The novella is also included in my Best of Brin collection!) And yes, show the script to directors! Know any? ;-)

And a stage play!

A smart-aleck argues with the devil. Yeah that old trope, redone from a distinctly modernist perspective. My play, "The Escape: A Confrontation in Four Acts" is wry, pointed… and a bit intellectual. Perfect for your table-reading or local theater group. (Contact me if interested!)

On the side... take a look at Speeches and consulting: see over 300 NGOs, companies, activist groups and agencies I’ve consulted for or given keynotes to, about a future that slams into us with wave after wave of onrushing change.

== And a novel from another Brin! ==

The Melody of Memory. Cheryl’s first novel - a moving tale of growing up while overcoming tragedy on a colony world that seems cursed, doomed to forget... and repeat the mistakes of the past. 

Terrific opening line! 

Sample the first chapters or see the compelling video preview of her wonderful novel.

== Cool watchables! ==

Brin on Artificial Intelligence (A.I.)  
Will humanity diversify?  
The plague of getting ‘mad as hell.’  
Shall we lift the Earth? 
The fabulous-fun trailer for Existence!
Others range from SETI to ESP to colonizing the galaxy.

And I read (for you) the prologue opening of EARTH. It seems to have been written for today.

== And maybe most important... ==

Sci-fi-nerds save us all? 

Picture any weird event. Starships fulla inky squids. Trees walk. Cyber-newborns talk! Panels advise governments.* Might 80 years of thoughtful SF tales prove useful? 

Volunteer programmers** are building TASAT - There’s a Story About That - to rapidly appraise what-if scenarios and maybe someday save us all. Subscribe. Join the Group Mind! 

*I'm on some.       ** Volunteers... like you?

Podcasts, blogs, interviews, YouTubes & social media.

Hot topics. Astrophysics, transparency/privacy, SETI, UFOs, innovations in spaceflight, history... And yeah, cool science fiction.

Want more? News and views at the always-provocative Contrary Brin.

Or see me on Twitter / Facebook

The David Brin site offers free stories, samples, favorite books, videos, nonfiction, ideas and more. As well as advice for new writers!

Finally: what a Sci-Fi-Twilight-Zonish year! But an ambitious, worthy future is possible! 

Do your part. Make it so.

David Brin: Space & science! (starting with a wee bit of pertinent theology?)

Before getting to news from and about Space and the Universe(!)… how about a marginally-related overlap of biology, current events and… theology?  This from Leviticus 13:45 – 46:


 “Anyone with such a defiling disease must wear torn clothes, let their hair be unkempt, cover the lower part of their face and cry out, ‘Unclean! Unclean!’ As long as they have the disease they remain unclean. They must live alone; they must live outside the camp.”


Cover the lower part of their face and social distancing? Alas, Leviticus is only for citing the parts you like at the moment. 

More generally/cogently, here's my talk about dozens of biblical riffs you might use to ease your cousins out of the dark corners that their parasite preachers have painted them into, including the War on Science. So You Want to Make Gods... one of my best speeches. Entertaining, funny (if I do say so) and a classic of contrarianism!

== Starship’s ‘mundane’ or Earthly use interests the Air Force ==

Recent ‘justification’ documents suggest Air Force officials are intrigued by the possibility of launching 100 tons of cargo from the United States and having the ability to land it anywhere in the world about an hour later.  The described capability – of course – can only be approached by SpaceX. Accordingly, the Air Force science and technology investments will include "novel loadmaster designs to quickly load/unload a rocket, rapid launch capabilities from unusual sites, characterization of potential landing surfaces and approaches to rapidly improve those surfaces, adversary detectability, new novel trajectories, and an S&T investigation of the potential ability to air drop a payload after reentry," the document states.

Available for Kabul? One could dream.

Mysterious Venus was the first planet NASA explored, in the groundbreaking Mariner 2 mission that flew by in 1962, breaking our hearts with news that there were no jungles or oceans of SF fame. Now our hot twin world will get two NASA missions with some concepts we first funded at NIAC. As for those oceans? Well, might a million comet-falls remake them? See my novella "The Tumbledowns of Cleopatra Abyss" (now also a cool screenplay!) in my new Best-of-Brin collection! 

Says NASA space biology researcher Chris McKay, the clouds of Venus hold far too little water to support any kind of life we now imagine, but – "Jupiter looks much more optimistic," McKay said. "There is at least a layer in the clouds of Jupiter where the water requirements are met. It doesn't mean that there is life, it just means that with respect to water, it would be OK." High levels of ultraviolet radiation or lack of nutrients could, however, prevent that potential life from thriving, the researchers said, and completely new measurements would be needed to find whether it actually could be there or not. 

== Both totally tubular AND globular? ==

Globular clusters are often considered 'fossils' of the early Universe. They're very dense and spherical, typically containing roughly 100,000 to 1 million very old stars; some, like NGC 6397, are nearly as old as the Universe itself.

In any globular cluster, all its stars formed at the same time, from the same cloud of gas. The Milky Way has around 150 known globular clusters; these objects are excellent tools for studying, for example, the history of the Universe, or the dark matter content of the galaxies they orbit.

But there's another type of star group that is gaining more attention - tidal streams, long rivers of stars that stretch across the sky. Previously, these had been difficult to identify, but with the Gaia space observatory… "We do not know how these streams form, but one idea is that they are disrupted star clusters." The Palomar 5 stream appears unique in that it has both a very wide, loose distribution of stars and a long tidal stream, spanning more than 20 degrees of the sky… 

... populations of black holes could exist in the central regions of globular clusters, and since gravitational interactions with black holes are known to send stars careening away, the scientists included black holes in some of their simulations. The simulations suggest that more than 20 percent of the total cluster mass is made up of black holes.

"They each have a mass of about 20 times the mass of the Sun, and they formed in supernova explosions at the end of the lives of massive stars, when the cluster was still very young." In around a billion years, the team's simulations showed, the cluster will dissolve completely. Just before this happens, what remains of the cluster will consist entirely of black holes, orbiting the galactic center. This suggests that Palomar 5 is not unique, after all - it will dissolve completely into a stellar stream, just like others that we have discovered.

Oh, and then...

"Cosmic filaments are huge bridges of galaxies and dark matter that connect clusters of galaxies to each other. They funnel galaxies towards and into large clusters that sit at their ends." Hundreds of millions of light years long, but just a few million light years in diameter, these fantastic tendrils of matter rotate with a degree of angular momentum never before seen on a truly cosmic scale. “On these scales the galaxies within them are themselves just specks of dust. They move on helixes or corkscrew-like orbits, circling around the middle of the filament while traveling along it.”

It’s been supposed that there is no primordial rotation in the early universe; as such, any rotation must be generated as structures form.

== Getting competitive up there? ==

Apparently China is further along in developing reusable rockets than many of us thought.  “China conducted a clandestine first test flight of a reusable suborbital vehicle as a part of its development of a reusable space transportation system. The vehicle launched from the Jiuquan Satellite Launch Center and landed at an airport just over 800 kilometers away at Alxa League in Inner Mongolia Autonomous Region.”


A burgeoning boom in venture capital and SPAC investment in space-related startups.


Peering inside Mars - an excellent WIRED article updates what has been learned about the Martian interior by the Insight seismic lander.


Long-predicted as the source of type 1a supernovae, a teardrop-shaped star has been found, caused by a massive nearby white dwarf distorting the star with its intense gravity; the dwarf will also be the catalyst for an eventual supernova that will consume both - as soon as it has stolen just enough mass to surpass the Chandrasekhar Limit. And since all such events have exactly the same mass-trigger, supernovas from such star systems can be used as ‘standard candles’ to measure expansion of the universe. HD265435 is located roughly 1,500 light years away, so don’t lose sleep. Over this, at least. But close enough to put on quite a show. (Alas, this article has a couple of boner paragraphs.)


The alternative, a supernova created by a sudden stellar merger… is not quite as “standard” as a pure type 1a.


== More space! More space! ==


For the first time, a NASA grant has gone to a joint team of astronomers plus the Breakthrough Listen Project to sift data from the TESS planet hunting mission that might (maybe) indicate alien mega structures. Or else big, natural light-blockers like comets. ‘If alien megastructures exist in our galaxy, there’s a decent chance that they might be hiding in the TESS data. But there’s also the possibility that the Breakthrough Listen team will come up empty-handed just like every SETI search before them.’ 


The VASIMR electric propulsion engine: ready for prime time at last?


Caltech is announcing that Donald Bren donated over $100 million to form the Space-based Solar Power Project (SSPP), capable of generating solar power in space and beaming it back to Earth. The donation was made anonymously in 2013, and the project now nears a significant milestone: a test launch of multifunctional technology-demonstrator prototypes that collect sunlight and convert it to electrical energy, transfer energy wirelessly in free space using radio frequency (RF) electrical power, and deploy ultralight structures that will be used to integrate them.


SSPP aims to ultimately produce a global supply of affordable, renewable, clean energy. The project's first test, in 2023, will launch prototypes of solar power generators and RF wireless power transfer hardware, and includes a deployable structure measuring roughly 6 feet by 6 feet.


Sam Varghese: Australia is a vassal state of the US. That will never change

The craven manner in which Australia continues to bow before the US is borne of a deep-seated fear that Washington will again choose to interfere in Australian politics as it did in 1975.

That year, the late Gough Whitlam, who was prime minister, hinted that he might have second thoughts about renewing a lease for Pine Gap, a base in Australia’s northern parts which the Americans use for spying on other countries.

Whitlam was sacked by the governor-general John Kerr shortly thereafter. A full account of the affair is here; the CIA’s involvement has never been in doubt.

The mess that has resulted in Afghanistan has shown that Australia should be wary of getting involved in American military adventures because they always end in tears. But Australia never learns; there is much talk of the Anzus treaty between the two countries whenever anyone raises doubts about American intentions, even though this treaty only requires the two nations to consult each other were either to be threatened by an adversary.

On at least two occasions, Australia has sought American help but has been snubbed. In the 1960s, President John F. Kennedy told then prime minister Harold Holt that the US would not go to war with Indonesia to support Australian and British troops in Malaysia.

And in September 1999, with East Timor in turmoil, President Bill Clinton told Prime Minister John Howard that the US wouldn’t supply any combat troops for the international stabilisation force, INTERFET.

But Australia, lapdog that it is, has continued running behind the US to prostrate itself whenever possible. After the attacks of September 11, 2001, the US never asked Australia for troops; Howard decided to offer them, leading to this country’s long involvement in the messy Afghanistan adventure.

Howard has form in this regard; in the early 2000s, it was he who ran behind George W. Bush seeking a free trade agreement, apparently in the belief that he could get a good deal because he considered the US president a “mate”.

Bush, however, had no such illusions; for him, the priority was ensuring that the constituencies which traditionally vote Republican were not affected by the treaty. Thus, when Howard pleaded for an extra export quota of 100,000 tonnes of beef, he was flatly turned down.

And the current statistics for US-Australia trade show clearly who is the beneficiary: in 2018-19 official figures show Australia’s imports of American goods and services totalled $51.638 billion.

What Australia managed to export to the US was less than half that: $24.748 billion.

Australians love to think of themselves as being very important in the US worldview. The truth is that this country is just another state that the Americans exploit to prop up their position in the world.


Cryptogram: Excellent Write-up of the SolarWinds Security Breach

Robert Chesney wrote up the SolarWinds story as a case study, and it’s a really good summary.


LongNow: A Global History of Trade, As Told Through Peppers

Photo by Nick Fewings on Unsplash

A new study in the Proceedings of the National Academy of Sciences provides an enlightening window into the history of global trade and human population movement through a perhaps surprising source: pepper genetics. The study bases its findings on a dataset of over 10,000 pepper (C. annuum) genomes collected from gene banks the world over. A research team led by Dr. Pasquale Tripodi of the Council for Agricultural Research and Economics (CREA) in Italy devised a novel method to compare relative genotypic overlaps, or RGOs, between pepper samples from different regions.

The study’s method for comparing the genetic makeup of different regional pepper samples allowed for measurement of regional uniqueness and hypothetical patterns of trade. Source: PNAS

The study’s findings show that peppers — one species with a range of cultivars from mild bell peppers to exceptionally spicy bird’s eye chili peppers —  have for centuries been a hotly traded commodity on the global market. While culinary historians have long held that peppers were brought to Europe and then the Middle East, Africa, and Asia as part of the colonization of their original home in the Americas, the lack of solid trade records at many points along this network previously left much of their route ambiguous. The study provides a plausible, genetics-driven history of pepper trade, showcasing a complex, multi-directional pattern of spread over the course of five centuries of global commerce. 

Learn More:

  • Read the original study in the Proceedings of the National Academy of Sciences
  • Watch Lewis Dartnell’s 02019 Long Now Talk that explores how environmental forces shaped trade and the cultivation of the first crops. 


LongNow: Letters to the Future Uses Plastic Waste To Send Lasting Messages

3 bound copies of the Letters to the Future Project. Source: Letters to the future

In our efforts to foster long-term thinking and preservation, we at Long Now do not typically think of single use plastic as an ally. Yet that’s precisely what the non-profit art project Letters to the future does, harnessing plastic’s lack of biodegradability to make a point about what we as a society leave behind not just to our children and grandchildren, but our great-great-great grandchildren as well.

A person standing over some sheets of recycled plastic, preparing it for processing. Source: Letters to the future

Letters to the future takes plastic collected from the streets of Vietnam and uses it as the paper for a series of over 300 letters written from all over the world, addressed to the writers’ descendants five generations hence. 

A closeup of one of the letters contained within the project, including text in Arabic script and in English. Source: Letters to the future

The project was conceived of by Vietnamese creative agency Ki Saigon and funded by Vietnam-based pizza restaurant chain 4P’s to commemorate the 10th anniversary of their founding. 

Learn More

  • Check out the Letters to the future website for more insight into their vision and process.
  • Watch Susan Freinkel’s 02012 Long Now Talk on how to get the benefits of plastic’s amazing durability while reducing the harm from its convenient disposability.


David Brin: Woke Media vs Bill Maher. A tiff that only serves the KGB-confederates.

First, a reminder to all Californians... to vote. Mail that ballot in! I assume, since you are visiting Contrary Brin, that you're sane enough to vote: "No." 

Now, since I've gone several weeks without posting anything provocative or controversial, let's see what a storm I can provoke.

== Bill Maher, sellout or standard bearer for the Roosevelteans? ==

Well, it's an ecological niche that only Bill Maher occupies, alas. 

While I like and enjoy many of the late-night liberal show hosts - especially fellow sci fi nerd Stephen Colbert, who gave me a cap! - I do wish more of them will draw lines in the sand and admit: 

"Yes, there does exist some crazy among the good guys, too. And that crazy sometimes hurts the cause of justice and progress, more than it helps." 

Before you scream in outrage and/or wander off in disgust, let me reiterate what you see here frequently, that our greatest danger is from an insane and treasonous Mad Right - a risen Confederate lunacy - that is waging open war - at the behest of oligarchs, mafiosi and Kremlin agents - against not only justice and tolerance but vs. every single modern fact-using profession. That last aspect gets underplayed in liberal media, but it is the core hate-and vendetta of the oligarchy. Especially Fox News.

Having said that, I assert also that you would be nuts not to grudgingly admit that our side has some flaws, including a passel of real nut jobs and bullies. With this major distinction!

* Yes, the FAR left CONTAINS some fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality and symbol-fetishes on you. 


* But today’s ENTIRE mad right CONSISTS of fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality and symbol-fetishes on you.   


There is all the world’s difference between FAR and ENTIRE.  As there is between CONTAINS and CONSISTS. 

So sure. We must spend 99% of our current attention reinforcing humanity's sole hope - a rationally-tolerant and self-critically improving enlightenment experiment, fighting for its very survival against Putin-Murdoch and their hirelings! 

And yet, our Union side of this desperate latest phase of the 250 year American Civil War is harmed by the far-fewer but still horrible nut-job bullies on our own side. Splitters who helped make today's demographically-challenged GOP the force that it is and remains. 

== More voices needed ==

I must cite especially, Maher's recent riff about "cultural appropriation," a far-left guilt trip fetish that has no redeeming qualities. It's not an exaggeration or conflation of something good... it's just simple, flat out insanity.

Oh, I'll grant that original sources should be credited and acknowledged! Like Greece always leading the parade of nations at the Olympics. 

Sure. If any Hawaiians show up at a surfing competition, they oughta get - in perpetuity(!) - the right to go first! And first-pick naming rights on anything in the entire universe that's discovered by mighty observatories now using the one, great and unambiguously miraculous gift of Poli'Ahu, the crystal clear skies on Mauna Kea. 

And if there's a Black musician or Jew in a Jazz band, they get to choose the order of their instrument solo.... fine. And for the record, real theft, like native lands, should get major redress! But that's not the purpose of "cultural appropriation!" Which is sanctimoniously chemical. 

Elsewhere I discuss the worst addiction and most harmful one in the modern age... self-righteous indignation and sanctimony, which poisons every political extreme, especially the extreme that has completely taken over the U.S. mad Right... 

...but that also fluxes across elements of a farthest-left that - while wholly justified to impatiently demand progress - rejects any notion that they are beneficiaries of generations of vigorous but pragmatic reformers who had (and needed) much thicker skins and suffered far worse indignities in order to open doors for today's "trigger-warning" activists.

The indignity of proclaiming that one is fragile - dealt crippling wounds by the slightest error of wording - is one that Frederick Douglass and Harriet Tubman and Robert Smalls and Mohandas Gandhi and Rosa Parks and MLK and Eldridge Cleaver and Malcolm X would have found puzzling, if not bizarre, even pathetic.

 The dissing and shit-hurling at ALLIES (as some will hereupon hurl at me) is less-surprising (read Orwell's Homage to Catalonia) but just as impractical, unhelpful and spectacularly self-indulgent.

== Loudness doesn't make you the leader of liberalism ==

In an article - The Specter of Illiberal Anti-Racism - by Nathan Gardels, Noema Magazine considers work by a person who takes this counter-argument farther than I would.  Into Bill Maher territory. But this passage is one to share:

"Fortunately, sober voices with irreproachable anti-racist credentials such as Barack Obama — “anti-anti-racists” in Torpey’s phrase — are calling out the extremists. Torpey cites Obama saying in 2019 that “This idea of purity and you’re never compromised and you’re always politically ‘woke’ and all that stuff … you should get over that quickly. … The world is messy; there are ambiguities. People who do really good stuff have flaws.” Obama, notes Torpey, went on to say that “among certain young people, and this is accelerated by social media, there is this sense sometimes of: ‘The way of me making change is to be as judgmental as possible about other people, and that’s enough.’ … That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far.”"

On Bill Maher's show, he talked about "progressophobia", a term he claimed Steven Pinker coined to describe "a brain disorder which strikes far-liberals and makes them incapable of recognizing progress." Not a new observation for anyone who has spent time here. 

The punch line: "It's like situational blindness, only the thing that you can't see is that your dorm in 2021 is better than the south before the Civil War."

And yes, so much as mentioning Maher’s name makes me eeeeevil!  This despite the fact that he (and I!) have done more for progress in any month than most of those fuming at me right now have, across their entire lives. 

Alas that he and Barack Obama - along with Bernie Sanders and Elizabeth Warren and AOC (see below) - seem to be among the last voices for the most effective reform movement the US ever saw… Rooseveltism, that finally got progress in gear after beating back the oligarchs, invigorating labor unions, crushing fascist monsters and beginning a long, long, too-long grinding climb out of Jim Crow and the chasms that seem built-into human nature. 

Refusing to see how powerful that coalition was has led the left to make a series of mistakes that drove working whites into the arms of confederates, leading to Reaganism and our long slide. 

== Polling Americans ==

The Pew Research Center, which does some of the country’s best polls, classifies all Americans as being in one of nine different political groups. The categories range from “core conservatives” on the right to “solid liberals” on the left, with a mix of more complicated groups in the middle. While I am suspicious of many categorization attempts, this one makes a hugely important point, that on average most US racial minorities -- blacks, hispanics etc -- tend to be liberal of course, but also highly skeptical of the most-woke or leftish component.

"Much of the recent political energy in the Democratic Party has come from solid liberals. They are active on social media and in protest movements like the anti-Trump resistance. They played major roles in the presidential campaigns of Bernie Sanders and Elizabeth Warren, as well as the rise of “The Squad,” the six proudly progressive House members who include Alexandria Ocasio-Cortez.

"All six of those House members, notably, are people of color, as are many prominent progressive activists. That has fed a perception among some Democrats that the party’s left flank is disproportionately Black, Hispanic and Asian American.

"But the opposite is true, as the Pew data makes clear.

"Black, Hispanic and Asian American voters are to the right of white Democrats on many issues. Many voters of color are skeptical of immigration and free trade. They favor border security, as well as some abortion restrictions. They are worried about crime and oppose cuts to police funding. They are religious."

This is not to undermine "solid" liberals' devotion to wokeness. You be you! And we'll get nowhere without conscience-prodding!

And yes, as an old fart I know I must adapt to shifts in vocabulary and get more woke in some ways!  My kids, in their 20s, make sure of that!

Still, the fact that the very minorities that PC folks call clients sometimes sniff dubiously does help explain some electoral disappointments. And if you want these groups to be fully mobilized in 2022, it might be best NOT to assume all folks of color are clones of the most-visible activists. 

We need to be flexible enough to negotiate our coalition's best tactics... as I tried to point out in Polemical Judo.

== Oh, one last point about AOC ==

Alexandria Ocasio-Cortez is rightfully hailed as one of the savviest politicians of a new generation. But many of her fans don't know the half of it. 

Look more carefully! She is playing her role as a left-ish gadfly working with Liz and Bernie to keep the DP's Overton window from conceding too much. But she is not at all like the rest of her 'squad' in many ways. She very clearly wants Joe Biden to succeed, for example. And, like Bernie, her goal is re-establishment of the Greatest Generation's Rooseveltean social contract.

Watch what'll happen in 2022, when some of you will start trawling around for excuses to denounce Biden as "corporatist" ruled by Democrat-lite sellouts, as you go prepping to sit on your hands or otherwise betray us... the way the left betrayed us in 1980, in 1994, in 2000, in 2010 and 2016. 

Just watch how AOC and Stacey Abrams and Jamie Harrison and Bernie will come after you. With a stick.


David BrinReaching for the Heavens - and expanding our vision

I won't mock them, or get involved in their reciprocal tiffs, because for all their faults, they are at least doing what Republicans promised (lying) that all of the rich would do with massive tax cuts. That is, investing in new capabilities. (Maybe Supply Side woulda worked... if the other 99.99% of rich folks did that, as promised.)

Well. While Branson and Bezos do their suborbital jaunts, SpaceX plans to send half a dozen civilians – with no professional astronauts aboard – into orbit possibly as early as mid-September 2021. This is making my novel Existence darned prophetic about zillionaires in space. (Watch the video trailer!) Apparently, this crew Dragon capsule will have a cupola window and toilet combo (!!) where the docking hatch would usually go. Yipe. A loo with a view.

A MAGA Congressman, Louie Gohmert, recently suggested that the US Bureau of Land Management (BLM) attempt to solve global warming by altering the Earth's orbit. While Gohmert's sarcastic dig was just another salvo in the mad-right's all-out war on science, it did get some folks re-evaluating ways to supplement carbon-emission-reduction, e.g. with sunshades or geoengineering. Or else… sure, why not talk about what it would take to alter Earth's orbital radius outward by about 3 million kilometers, enough to reduce ambient temperatures by about 3 degrees? (Across the next billion years 'we' will have to do it several times, as the sun grows hotter. And by 'we' I mean our vastly-better heirs who are reading this 'now,' in the year 35,640 C.E., and not you, my fellow ancestor sims.)

Could we move the Earth? In a Scientific American essay, Maddie Bender started by offering a cogent appraisal of the energy transfers needed – only about a billion billion times the annual energy use of today’s human civilization. Which makes the whole thing seem… not-so-ridiculous! Given that we need to use other, much quicker methods (like carbon-pollution reduction) in the short term, it is suddenly actually conceivable that advanced descendants might deal with the long term warming by gradual orbit-altering over tens of millions of years. 
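As a rough sanity check of those magnitudes, here's a back-of-envelope sketch (my own illustration with standard assumed constants, not figures from Bender's essay): blackbody equilibrium temperature scales as the inverse square root of distance from the Sun, and the orbital energy of a body at semi-major axis a is -GMm/(2a).

```python
import math

# Back-of-envelope check of the orbit-raising numbers (assumed constants).
GM_SUN = 1.327e20        # solar gravitational parameter, m^3/s^2
M_EARTH = 5.972e24       # Earth's mass, kg
AU = 1.496e11            # astronomical unit, m
T_EFF = 255.0            # Earth's effective (no-greenhouse) temperature, K

a1 = AU                  # current orbit
a2 = AU + 3.0e9          # raised by 3 million km

# Equilibrium temperature scales as 1/sqrt(distance from the Sun).
dT = T_EFF * (1.0 - math.sqrt(a1 / a2))
print(f"temperature drop: ~{dT:.1f} K")     # roughly 2.5 K, near the quoted ~3 degrees

# Orbital energy E = -GMm/(2a); energy needed to raise the orbit:
dE = 0.5 * GM_SUN * M_EARTH * (1.0 / a1 - 1.0 / a2)
print(f"energy required: ~{dE:.1e} J")      # of order 5e31 J
```

The temperature drop lands in the same ballpark as the "about 3 degrees" quoted above, and the energy required dwarfs annual human energy use by many orders of magnitude, which is the essay's point: only a very patient, very advanced civilization need apply.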

Alas, the author then goes on to describe methods for doing this orbital velocity augmentation, listing nothing but absurd non-starters, like the jibbering-loony notion of flying massive objects past the Earth-Moon system over and over, millions of times, without ever suffering an 'oopsie' accident.

As some of you know, I have offered a much better way for a future advanced civilization to do this with utter safety – if patiently – across the requisite time scales. See my video: Let's Lift the Earth! Spaceflight-explanation-maven Scott Manley even referred to the method, recently. (Perhaps someone will tell Scientific American or @MaddieOBender.)  

Meanwhile though, let’s stop with the carbon poisoning, eh? And science-hating meme-poisoning, too? Retiring crazy-moronic traitors like Gohmert to sipping their mint juleps on a virtual veranda, while the nerds they hate save the world for them.

== News from Beyond ==

A large asteroid… or up-size comet… that's almost big enough to call a minor planet is about to make its closest pass to the Sun on its 600,000-year highly-eccentric orbit, whose perihelion will come (apparently) within 11 au in 2031. If it is cometary in makeup (ref. my doctoral thesis) then it may put on quite a show, and it's certainly a good candidate for a flyby mission.

Alas, its passage through the ecliptic, a bit later, will apparently be in August 2033. And that’s NOT good news… not for real world reasons but because we can expect a maelstrom of insanity that year, around the time of the 2000th Easter, as I describe here.

Mark Buchanan has an article in WaPo offering an argument I've also made about the foolishness and utter irresponsibility of those attempting METI "Messaging to Aliens." My own missive, refuting every METI argument in much more detail, including the "I Love Lucy" falsehood, is "Shouting At the Cosmos" and can be found on my website.

== And more fun sci-stuff! ==

In detections that came back to back, just 10 days apart, in January 2020, gravity wave detectors revealed events happening a billion years in the past, when black holes ate neutron stars. One had a mass nine times bigger than our sun and gulped a neutron star with about two times our sun's mass. The other black hole had about six times the mass of the sun and ate up a neutron star with 1.5 times the sun's mass. One member of the LIGO team calculates that a black hole eats a neutron star roughly every 30 seconds somewhere in the whole observable universe, though scientists would have to be looking in the right place with the right kind of equipment to detect it. Within 1 billion light-years of Earth, it happens roughly once per month.
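Those two quoted rates are at least roughly consistent with each other under simple volume scaling. A quick sketch (my own check, assuming a uniform event rate per unit volume and a comoving observable-universe radius of about 46 billion light-years, which is my assumption, not a figure from the LIGO team):

```python
# Sanity-check the quoted rates: ~1 merger per month within 1 billion
# light-years should scale, by volume, to one every few tens of seconds
# across the whole observable universe.
R_OBS_GLY = 46.0            # assumed comoving radius of observable universe, Gly
R_LOCAL_GLY = 1.0           # radius of the "local" volume quoted, Gly
SECONDS_PER_MONTH = 30 * 24 * 3600

volume_ratio = (R_OBS_GLY / R_LOCAL_GLY) ** 3    # ~100,000x more volume
interval_s = SECONDS_PER_MONTH / volume_ratio
print(f"one event every ~{interval_s:.0f} s")
```

That comes out to one event every few tens of seconds, in line with the "roughly every 30 seconds" estimate above.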

A few weeks ago I linked to the new, better images of the M87 black hole showing powerful polarization effects. Now rapid simulation work suggests that the MAD theory of tightly rotating magnetic fields may explain the super-tight jets that spew north and south from many black holes. Wow, what an age we live in.

And further proof of amazing times. From lab experiments measuring momentum effects in tritium beta decay we have an upper bound on the mass of the electron neutrino at about 1.1 eV, and from "astronomical oscillation data" a lower bound of 0.5 eV... a narrow and narrowing gap. The presenter of a talk I saw yesterday suggests - and she was persuasive - that the number density across the universe is about 330 neutrinos per cubic centimeter.

AND I just read that they are studying the microbiome of Southwest Pueblo Native Americans from 1000 years ago to recover lost symbionts. (Might one turn into Larry Niven's "booster spice"?) Again, amazing times.

== You want more science. Here's more! ==

A newfound tiny white dwarf, named ZTF J1901+1458, just 130 light years away, is immensely massive, just shy of Chandrasekhar's Limit where the mass triggers a Type Ia supernova, probably having formed when a binary pair of stars – each having become a white dwarf separately – whirled, emitted gravity waves, and slowed into a merger that created incredible angular momentum (rotation) and magnetic fields. (How's that for a sentence?) It would take very little added mass to tip this thing into a too-close-for-comfort 'pop'. And did I mention it's just 130 l.y. away? Sci fi folks take note.

Through a long-standing partnership with World Book, Inc., NIAC is proud to announce the second eight books in the Out of This World 2 series! This award-winning STEM book series is geared towards kids with a 4th – 8th grade reading level so they can explore the next frontiers of space through the lives and work of researchers who competed for seed grants from NASA's Innovative Advanced Concepts (NIAC) program. (I am on the Advisory Council.) The 2021 NIAC Symposium will be held Sept 20-24 via Livestream here:

Discover second skin space suits, leaping bots, expandable space habitats, solar surfing probes, clockwork rovers, and more in this easy-to-read series about complex space science topics. Stunning imagery and interactive activities will entice and challenge readers of all ages. The Out of This World 2 series features early stage NASA projects that hope to develop bold new advances in space technology. Contact World Book directly, or your local library/bookstore to find out more.


David BrinExplaining what should be obvious about ‘transparency' yet again

FIRST a brief note for SF fans with e-readers!  Startide Rising, my Hugo and Nebula award winning novel, is on discount today.

Get hooked(!) only today with this ebook bargain at $2.99! Refreshed-updated with new cover & introduction and Uplift Universe timeline!

== The Transparency thing that gave us everything we value ==

I still give a lot of talks on this topic... though I suspect in some cases I'm invited in hope of using me as a strawman "foe of privacy," to be knocked down.  Some find it disappointing to learn that I fear both Big Brother and loss of privacy, as much as (or more than) they do, pointing out that only one thing ever thwarted tyranny or nosy neighbors, and that's light, the ability -- your ability -- to catch would-be oppressors and denounce their misdeeds.

Very few peoples and nations across 6000 years were ever able to apply this power of reciprocal light and accountability. Indeed, we don't sufficiently appreciate the power it gave citizens. Certainly the world's despots know it, and are desperately creating protected shadows for themselves. And lacking upward transparency (sousveillance), a return to feudalism... or much worse... is inevitable. 

With it, you might prevent Big Brother, only to reap a second layer of fearful oppression... by a judgmental majority who demand conformity, using democracy against those they do not like. The nightmare portrayed crudely by books/movies like The Circle and much better in that Black Mirror episode "Nosedive."

Ironically, this is exactly the scenario - fear of oppression by 'the mob' - that Fox/KGB and their pals are now spreading across the MAGAsphere, in order to discredit democracy itself! They must, because if U.S. citizens truly recover voting sovereignty, there would be no political future for today's Mad Right/putinist version of confederate conservatism.

Still, there is an underlying point that they're exploiting, a fear that Ray Bradbury illustrated in Fahrenheit 451. If transparency is universal, but the culture is immature and judgmental, then you don't get Big Brother. Rather you get lateral oppression by that 51% majority - an oppression that's totally legal, democratic and above-board. Indeed, this is how "social credit" might make crude, Orwellian Gestapo-tactics unnecessary in future despotisms, as the People themselves enforce conformity, laterally.

Hence, our narrow path of freedom requires a third ingredient, at the level of values. If transparency is universal AND we have a culture that scorns gossips and bullies and privacy busybodies and voyeurs and judgmental conformity-enforcers, then MYOB can prevail.

What's MYOB?

Mind Your Own Business.

It means if I'm not hurting anyone, then my quirks and eccentricities merit protection, same as yours. Lest a day come when I am not tolerated... followed by you.

Yes, I have learned that this simple idea is almost impossible for most folks to wrap their heads around, even though it's a fundamental, base level zeitgeist of our present society! Indeed, I show how Sci Fi books and films have been at the vanguard in promoting appreciation of individual eccentricity, in my recent nonfiction book: VIVID TOMORROWS: Science Fiction and Hollywood.

Tolerance is actually encouraged best by transparency, especially when gossips and bullies and privacy busybodies and voyeurs are caught in the act... but only when gossip and bullying and conformity-enforcing are in disrepute.

Again, yeah it sounds counter-intuitive. Yet, it is exactly the baseline value system of a majority of westerners now.  It's very likely your baseline value!  And it is the only way we'll get beyond the danger zone to something decent.

== MORE ==

Under a broad program called "signature reduction", it is said (in this article) that the U.S. Pentagon supervises a secret force more than ten times the size of the clandestine elements of the CIA, that “carries out domestic and foreign assignments, both in military uniforms and under civilian cover, in real life and online, sometimes hiding in private businesses and consultancies, some of them household name companies.” This is in part to deal with the difficulties of modern intel gathering when adversaries have massive files and face recognition. Also: “The explosion of Pentagon cyber warfare, moreover, has led to thousands of spies who carry out their day-to-day work in various made-up personas, the very type of nefarious operations the United States decries when Russian and Chinese spies do the same.”

Which raises a basic question. How will we evade this devolving into:

1) A deep state that is unaccountable to constitutional systems*… or

2) The sort of tit for tat conflict of reciprocal sabotage that the great science fiction author Frederik Pohl warned about in his terrifying novel The Cool War.

Neither of these outcomes frighten adversaries of the West, who are fully committed to these tactics in order to prevent the psychic collapse of tyranny that their models predict, if the Enlightenment is not toppled soon. But both outcomes should worry us.

Those currently in our intel/military/law officer corps are likely loyal to constitutionalism and law and all that. Moreover they need tools such as these, in order to wage a desperate struggle on our behalf.  But these methods can all too easily anchor into habit. Habits that prevent our servants from even noticing an alternative suite of weapons that have far better long term prospects. In fact, those alternative methods are the only ones that can lead to our sole victory condition.

Weapons of light.

== ...Philosophy is a walk on the slippery rocks... ==

Finally… Two notes on philosophy.

1) UCSD Professor Benjamin Bratton - author of Revenge of the Real - is fighting for us on a front that seems obscure to 99.999% of us, but is actually important -- the ongoing effort by 'postmodernist' philosophers (especially on French and US campuses) to denounce and discredit science, democracy, so-called 'facts,' and the very concept of objective reality. I recommend his article for those who would blink in amazement over Ben's depiction of the rage-howls that fulminate from elite subjectivity spinners, who demand that their incantations get paramountcy over the evidence and models we laboriously build out of a clay called 'reality.'

2) I had a note from someone else who felt caught between two philosophical “rocks.”

“Is there an intersection between ephemerality and nihilism - lately I have found myself fully seated in ephemerality - not as a negative - but - rather - as an awakening of the transitory aspect of my physical engagement with the universe - it has awakened my creativity - am I way off – thoughts?”

My answer is the same one I give at commencement addresses:

You can be large. 

More than one thing. 

Study the phrase "positive sum" as opposed to "zero sum."  Sure your individual life may be short to the point of apparent pointlessness. So is a bee's. But the hive does mighty things. 

We are building a civilization of extraordinary magnitude! One that has accomplished prodigious things, perhaps unprecedented across the galaxy, and we did it by standing on the shoulders of ephemerals who stood on the shoulders of ephemerals who clawed their way a little higher, out of a muck of ignorance. These accomplishments - forged in the spirit of our ancestors - are just hints at what might yet come.

You don't have to accomplish great things to be part of all that. Or even remembered! You will know the things you did, to be part of that rising pyramid of shoulders. Feel the weight of future generations on them. 

You have our gratitude.


David BrinEnemies of Democracy... and (worse) traitors to democracy.

First, above and beyond mere politics… if you seriously want to help civilization be resilient against shocks the future might bring, consider (in the U.S.) taking training for CERT – your local Community Emergency Response Team. It's a really interesting and fun 20-hour course given by your local fire department and you get cool green equipment and a badge! And once a decade maybe some light duties to help keep your family and neighbors safe.

== Explicit: Haters of Democracy ==

It’s baaack. One of the top propaganda memes used by oligarchs to foment chanting dopes to pour hatred upon…democracy. 

They must! Everyone can see that the Republican base is in freefall demographic collapse, not just from the growth of cities or the rise of minorities but from defection by their own children, the smarter half of them who continued educating and learning into their twenties. The wave of voter suppression bills across Red America are desperation measures, justified by same-old ancient ravings against 'mob rule.' Despite the fact that average levels of education and knowledge among Democrats are now much higher than among Republicans. (And I do offer counter-tactics.)

So how do they justify direct and open attacks against democracy itself?  

There is a standard catechism that takes many forms. 

Recently on "PenceNews” there spread a supposed quote from Karl Marx: 

"Democracy is the road to socialism… Socialism leads to communism." 

Hence why voting needs to be restricted. 

Except, of course, that Marx said no such thing! In fact it is an almost direct quote from Adolf Hitler. Way to choose your sources, guys.

Indeed, Marx also despised democracy, but for very different reasons, calling it a "bourgeoise indulgence that beguiles the workers" into imagining they can win justice through peaceful means. 

And yes, in fact, the most vigorous anti-communists have been Democrats and especially the U.S. labor movement, e.g. the AFL-CIO, while Republicans have perpetually oscillated between isolationism and kneeling accommodation with Moscow (except for just the term of Ronald Reagan, a former Democrat). Today they are absolutely kissy-face with "ex" Stalinist Vlad Putin and his mafia of "ex" commissars and the "former" KGB.

The mantra that democracy leads to socialism, then communism, is another version of the "fatal sequence" mantra that has infested the U.S. right for all my life, and before that all the way back to when plantation lords got poor whites to despise the very revolution their forebears had won. The refined version is the "Tytler Calumny," which I dissect in this linked article.

Though MAGAs and other styles of confederate recite it as one of their many masturbatory incantations, they flee the instant you offer a wager over it, or any part of the scenario, ever having happened, even once, in the history of human civilization. (Damn inconvenient facts!). YOU should read and be armed against this magical mantra, which your uncles are doubtless reciting, which justifies treason against the very same democratic republic that gave these ingrates everything.

 As for Marx saying that democracy beguiled workers into thinking they could lead fine, middle class lives... well he was right! FDR and the Greatest Generation, after smashing the Nazis and containing the Stalinists, proceeded to promote an empowered middle class that kept drawing in wave after wave and caste after caste of the formerly underprivileged. Call it 'beguiling' away revolutionary fervor, if you like. 

Or else recognize that Marx simply never imagined a great industrial nation behaving fairly, with decency, flattened wealth disparities and gradually improving justice. But that's exactly what was happening...

...until the empire struck back, stealing from working folks under "Supply Side Theory" and using Fox+KGB agitprop to restart the American civil war. And thus they seem bent on resurrecting Marx's scenario from its well-deserved grave. 

If that zombie (Marxism) is back, shambling across every college campus on Earth, it is your fault, you wretched-stupid inheritance brats and other parasites out there. Wake up and fire your sycophants. Or else learn how to ride a tumbrel.

== The one tactic ==

Alas, no one has the guts or imagination to confront these imbeciles with wager demands, with large escrowed stakes over falsifiable assertions, testable by facts. (And I offer the bets to be judged by panels of retired, senior military officers; watch the yammerers blanch!) When so confronted, they always flee. 

But here are challenges specific to this imbecilic malarkey:

- Name one advanced, liberal democracy, including socialist Scandinavian ones, that ever turned communist. Um, one.

- Show us where Marx said that!

- Vladimir Putin called the fall of the USSR "history's greatest tragedy" and while dropping all the hammer-and-sickle symbols, he installed all former KGB agents and commissars as Russia's new oligarchy, while the renamed KGB uses all the same methods toward the same goal: our downfall, including Trump and blackmailing most of the GOP political caste. 

So who are the commies now? 

Whatever, Ivan.

== A fifth column ==

MAGAsphere is abuzz with the latest lunacy: if the Republicans win back the House in 2022, then choosing Donald Trump as Speaker and second in line for the presidency. Oh, please, make that your platform, GOP! "As a reminder, by the time Trump left office in January, he had the lowest approval rating of any President 'since scientific polling began,' at 34%, with a 61% disapproval. Plus, a Quinnipiac poll released after the Jan. 6 attack on the Capitol found that 59% of Americans believe Trump should not be allowed to hold office ever again." But a majority of the shrinking GOP minority is growing crazier and more frothing rabid by the minute.

== Politically redolent miscellany ==

They did a huge 4-year test in Iceland, with 2,500 workers (1% of the labor force) where they paid the workers the same amount for about 36 hours of work (9 hours a day, just 4 days a week) and, surprise!, productivity remained the same or even improved in the majority of workplaces. The trials led unions to renegotiate working patterns, and now 86% of Iceland's workforce have either moved to shorter hours for the same pay, or will gain the right to, the researchers said. Of course this means some will have two jobs. Or work on their own startups, or dive into amateur avocations – some targeting excellence.

With Pastels and Pedophiles: Inside the Mind of QAnon, Mia Bloom and Sophia Moskalenko track QAnon's leap from the darkest corners of the Internet to a frenzy fed by the COVID-19 pandemic that supercharged conspiracy theories and spurred a fresh wave of Q-inspired violence, showing how a conspiracy theory with its roots in centuries-old anti-Semitic hate has adapted to encompass local grievances and has metastasized around the globe―appealing to a wide range of alienated people who feel that something is not quite right in the world around them. While QAnon claims to hate Hollywood, the book demonstrates how much of Q's mythology is ripped from movie and television plot lines. 

(Note the Protocols of the Elders of Zion was concocted in the very same Kremlin basements that are now foisting QAnon upon us.) 

What most critics fail to point out should be the most obvious thing… that the mad right’s central theme is to accuse their opponents of doing what they do far more. For example, rates of proved pedophilia and predatory sexual perversion among prominent Republican politicians, pundits or backers – proved or blatant – are vastly higher than rates among prominent Democrats.  Provably twice or three times the rate and anecdotally as much as six times! Likewise rates of moral turpitude in red states (excluding Utah) vs. blue states in everything from domestic violence, STDs, teen sex/pregnancy/abortion, gambling, drug use and so on. And especially the confederate right’s close relationship with casino moguls, mafiosi, petro sheiks, inheritance brats and “ex”communist “former” KGB agents.

A top judo move in politics - seldom done - is to show how your opponents hypocritically harm the things they claim to value. Accuse a confed of racism or ignoring climate calamities or the poor or science? He'll shrug you off as a smug-patronizing, free-spending, smartypants nerd-lib who prefers the poor be our 'clients' than empowered capitalists. 

So, instead (or in addition) show how the right is far more wasteful and the worst enemies of the market competition they claim to love! Far worse on deficits. Pals of monopoly and oligarchic market cheating. And traitors to the supply chain health on which the entire economy depends. Who is addressing these national emergencies? Made far worse by GOP neglect & sabotage, these underpinnings of the economy are being saved by Democrats.

Finally, we need constant reminding the axiom on propaganda most often attributed to Joseph Goebbels:  

“If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State.”   

That's from the only book Trump used to keep by his bed.


MELinks July 2021

The News Tribune published an article in 2004 about the "Dove of Oneness", a mentally ill woman who got thousands of people to believe her crazy ideas about NESARA [1]. In recent times the QANON conspiracy theory has drawn on the NESARA cult and encouraged its believers to borrow money and spend it in the belief that all debts will be forgiven (something which was not part of NESARA). The Wikipedia page about NESARA (proposed US legislation that was never considered by the US congress) notes that the second edition of the book about it was titled "Draining the Swamp: The NESARA Story – Monetary and Fiscal Policy Reform". It seems like the Trump cult has been following that for a long time.

David Brin (best-selling SciFi Author and NASA consultant) wrote an insightful blog post about the “Tytler Calumny” [2], which is the false claim that democracy inevitably fails because poor people vote themselves money. When really the failure is of corrupt rich people subverting the government processes to enrich themselves at the expense of their country. It’s worth reading, and his entire blog is also worth reading.

Cory Doctorow has an insightful article about his own battle with tobacco addiction and the methods that tobacco companies and other horrible organisations use to prevent honest discussion about legislation [3].

Cory Doctorow has an insightful article about "consent theater", which describes how "consent" in most agreements between corporations and people is a fraud [4]. The new GDPR sounds good.

The forum for the War Thunder game had a discussion on the accuracy of the Challenger 2 tank which ended up with a man who claims to be a UK tank commander posting part of a classified repair manual [5]. That’s pretty amusing, and also good advertising for War Thunder. After reading about this I discovered that it’s free on Steam and runs on Linux! Unfortunately it whinged about my video drivers and refused to run.

Cory Doctorow has an insightful and well researched article about the way the housing market works in the US [6]. For house prices to increase, conditions for renters need to be worse; that may work for home owners in the short term, but in the long term their children and grandchildren will end up renting.


Sam VargheseMacdonald leaves Q+A with little comment from the media

The departure of Hamish Macdonald from the position of host of the ABC’s Q+A program should, logically, have occasioned some comment from the country’s media, given that the program in question is one of the taxpayer funded channel’s flagship offerings.

That it has gone mostly unremarked is due to one reason: Macdonald is perceived as being from the left and publications who tilt towards that side of politics have remained silent as a show of solidarity.

To date, nothing has appeared to analyse why he quit what is a high-profile role in Australia. Some said he had left the program because he had experienced a lot of trolling on social media — he shut down his Twitter account, though a lot of interaction for Q+A takes place through this platform — while others studiously avoided speculating on why Macdonald may have decided to return to Channel 10's The Project.

Since he left, Q+A has had three hosts, all senior ABC broadcasters, in rotation: Virginia Trioli, David Speers and Stan Grant. No publication has said much about the impact they have had on the show, with only the Guardian commenting, "Although the ratings for Q+A fluctuate according to COVID lockdowns and the political climate, the show did attract higher ratings when Macdonald was absent and there were fill-in hosts including Virginia Trioli, David Speers and Stan Grant."

In April, the Nine newspapers wrote about the show’s poor ratings after Macdonald was installed in the presenter’s chair. “Audience figures for Q+A have plummeted this year. Last week [25 March], it failed to crack the top 20 free-to-air programs on the Thursday night it aired, indicating a capital city audience of just 237,000. In March 2020, the number was above 500,000, and likewise in March 2016,” reviewer Craig Mathieson wrote.

Macdonald had to cope with Q+A’s switch from Monday to Thursday and also had to replace a much more experienced and mature host, Tony Jones. Being relatively young — he is close to 40 — and clearly more suited to tabloid TV, Macdonald was not exactly the best choice for Q+A.

Additionally, there is an old adage that says, “leave well alone”, meaning that one should not disturb settings that make a program function well. But Macdonald meddled with things which were running smoothly and that may have played a role in the dismal ratings that Q+A suffered.

For one, the program changed its name from Q&A to Q+A, an indication that superficiality was going to be the order under Macdonald. The ABC prides itself on being a serious news channel; that is what it claims as its strong point. He also tried to make the program more activist and sometimes made it resemble a news interview, bringing in people apart from the panellists to speak about the topic du jour.

As I have written before, he often lost control of the proceedings: "He seems to be trying too hard to differentiate himself from Jones, bringing too many angles to a single episode and generally trying to engineer gotcha situations. It turns out to be quite juvenile. One word describes him: callow. It is one that can be applied to many of the ABC's recent recruits."

The ABC has put off the decision of appointing a full-time host for the program, preferring to see out the year with Trioli, Speers and Grant taking turns for the remaining few months of its functional year. The ABC normally shuts down regular programs like Q+A in November and brings them back in February the following year.

But it is good that Q+A has been made watchable again – except when Grant hosts, because he is too much in love with the sound of his own voice. However two out of three ain’t bad.


David BrinWhat's really up with UAPs / UFOs?

Okay so what’s up with the whole UAP/UFO thing? While the most recent wave of reports and commentaries appears to have ebbed - for now - I’ve mostly held back in order to distill… not answers, but badly-needed questions.

Indeed, I've explored notions of the "alien" all my life, in both fiction and science. I helped write the "SETI Protocols" and have been deeply involved in debates over METI or "messaging" extraterrestrials*… and my novel Existence** takes on the most likely kind of visitors to our solar system: long-lived observation probes, robots which might even now 'lurk' in corners like the Asteroid Belt. Indeed, I give a small chance that the much discussed "UAP" phenomena could - conceivably - be expendable drones or beam spots sent by such lurkers. Make that a VERY small chance... and none at all that these phenomena are "ships" bearing organic interstellar travelers who behave stupidly and with stunning rudeness, while flitting about in violation of every law of physics. (A notion I rant about here in my short story Those Eyes.)

(The SETI Institute has issued a carefully evasive position paper on the topic, essentially saying "we'll stay in our lane.")

Sure, a majority have already been explained by careful analyses of receding jet engine exhausts or balloons etc., viewed by rapidly swinging optics. Still, there remain a fair number of mysterious dots and “tic-tacs” and wildly-rapidly moving ball-thingies. And so, let’s see if we can bypass the execrably dumb and myopic ‘discussion,’ so far, by first stepping back to ask some really fundamental questions, like:

a) Why do UFO images keep getting fuzzier, when there are about a million times as many cameras than in the 1950s? (And legendary science pundit John Gribbin asks how many of these claims involve observers viewing from multiple directions?)

b) A whole lot depends on whether these sighted 'UAPs' are actually opaque physical objects that affect their surroundings and block the passage of light from behind them, or glowing spots of excited air that let light from the background pass through them (translucent). I have not seen this question even posed by any of the sides in this topic, and it is crucial! In fact, is there any verification that these ‘objects’ are actually objects at all, and not simply balls of moving energetic phenomena? There’s a huge difference! Moreover, image analysis ought to answer this crucial question.

That one question would help settle whether they actually possess their own continuous mass, solidity and inertia for the supposed magical propulsion systems to miraculously overcome. If not, then we have an explanation for how they can behave in apparently non-Newtonian, non-inertial and even non-Einsteinian ways, which is permissible for 'objects' that have no mass. (We'll come back to this.)

c) Heck, while we are listing observable traits that have neither been reported on nor asked about by any of the pundits or experts I have seen: are these glowing patches, blobs or “tic-tacs” radiating in just one or two colors?

If so, monochromatic emission lines would be a huge tell.  Especially if it just happens to be an excited state of Nitrogen, Oxygen, Carbon-dioxide, neon or water vapor.  (ASIDE: The great science fiction author Liu Cixin is fascinated by ball lightning, which phenomenologically overlaps, somewhat, with UAPs.)

d) There are other traits one never sees either described or even posed as questions, except by just one of my blogmunity members: “I've never seen shock waves or ionization trails coming off them. Space aliens may have fancy tech, but the atmosphere has basic physics to abide. If physical devices, they should be leaving ionized tails of superheated air while zipping around like meteors. Same with those flying dots that seem to hurtle mere meters over the surface of the ocean. There should be huge plumes of water from the shock waves. I don’t care what kind of magic tech shields the ‘ship’ itself has. It’s still displacing a whole lot of air, vastly quicker than the speed of sound. What? No acoustic booms? No cloaking system can mask the shoving aside of air by sudden, massive forces.”

e) Why do the vast majority of recent sightings appear to happen at US military training areas? (See an exceptionally good piece speculating cogently on why the Pentagon is now encouraging service members to file UAP sightings… in order to get practical, useful error reports on electronic warfare gear! Which is of course consistent with my long-hinted theory about the real source of all these sightings. )

f) Getting back to fundamentals of motive and behavior: Why should we pay the slightest attention to "visitors" who behave like rude jerks? (Again, I say snub-em!)

Now, polymath Prof. Robin Hanson proposes they might have a reason for behaving this way: "To induce our cooperation, their plan is to put themselves at the top of our status ladder. After all, social animals consistently have status ladders, with low status animals tending to emulate the higher. So if these aliens hang out close to us for a long time, show us their very impressive abilities, but don’t act overtly hostile, then we may well come to see them as very high status members of our tribe. Not powerful hostile outsiders."

I deem that a pretty hard stretch, since our natural response to nasty tricks is hostility and a determination to get smarter/stronger, fast. Anyway, it’s clear from the history of colonialism on Earth that Robin’s proposed method was never, even once, used to dazzle and cow native peoples. The Portuguese did not conquer Indonesia by coating their ships in glitter and sailing quickly by, while shouting “ooga booga!” for 80 years without making actual contact. Instead, the classic approach used by conquerors back to Chinese and Persian and African dynasties - and especially European colonizers - was to co-opt and suborn the local tribe or nation's top leadership clade: use power and wealth and blackmail and targeted assassinations to install your puppets and help them overcome local rivals. Superior aliens? No need for stunts if you have sufficient computational ability to learn our language and do those same things. And one can argue that recent US history is… well… compatible. (Especially the blackmail part!)

Which of course leads us back to listing and comparing alien-probe scenarios, as I did in Existence.  And yes, I still say, let’s get mighty and scientific and get OUT there… and if the lurkers do exist, corner and grill em… but till then, if they are pulling “UFO” crap, snub em!

Back to questions I’ve not seen elsewhere:

g) Why haven’t successive U.S. administrations who hated each other used "the truth" as a political weapon against the other party? (You think ‘mature consensus’ explains it?) Or else tell us why, across 80 years, thousands of our BEST scientists and engineers would have studied this stuff and not one first-rater has ever offered a scintilla of tangible or useful proof. Or why no great tech leaps have exploded out of such research?

Sure, there may be reasons for secrecy so compelling that all of the tens of thousands of humans who are in-the-know agree to keep silent. (As portrayed in my story “Senses, Three and Six.”) But in that case, who are YOU to over-rule such a consensus by tens of thousands of our best, who know vastly more than you do? What stunningly conceited, self-indulgent arrogance!

h) Above all, I never cease wondering why so many of our neighbors obsess on so-called "events" and UFO scenarios that are so infuriatingly unimaginative, ill-informed and just plain DULL, when the actual universe that is unfolding before science is so much more interesting… and the cogent speculations of higher-order science fiction are even better, still! ;-)

== Cat lasers ==

My own hypothesis for what’s going on?  Well, it needs to be consistent with all of the above, while also offering a reason why the US defense establishment is suddenly so complacent about allowing UFO speculation to go wild, with smiles and shrugs and even encouragement!  And yes, all of that combines with the following.

First, wanna make a bright dot zip around at unbelievably high “gee” accelerations and even faster than light? Get a very strong laser pointer. Go somewhere you can clearly see a wall many miles away. Like the Grand Canyon. Swipe left or right. If your wrist-flick was quick enough, that dot moved faster than the speed of light! (Better yet, flick your beam across the visible face of the Moon; you’ll need a strong laser! You may not see it, but calculate the arc and clearly you can exceed “c” with that dot, without even flicking hard!)

Now zigzag it around across that wall. If it were physical, your laser dot would be accelerating at some ridiculous crush, say 900 gees. Work it out.
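If you want to work it out, here’s a quick back-of-the-envelope sketch; the flick speed, wall distance and zigzag rate are all assumed, purely illustrative numbers:

```python
import math

C = 299_792_458.0  # speed of light in m/s

# Assumed, illustrative numbers: a 90-degree wrist flick in a tenth of a
# second, a canyon wall 16km away, and the Moon at 384,000km.
flick = (math.pi / 2) / 0.1   # angular speed of the flick, ~15.7 rad/s
wall_m = 16_000.0
moon_m = 3.84e8

dot_on_wall = flick * wall_m  # linear speed of the dot across the wall
dot_on_moon = flick * moon_m  # the same flick projected onto the Moon

print(f"dot on wall: {dot_on_wall:.0f} m/s ({dot_on_wall / C:.4f} c)")
print(f"dot on Moon: {dot_on_moon:.2e} m/s (~{dot_on_moon / C:.0f} c)")

# Zigzag 'acceleration': a dot sweeping sinusoidally with amplitude A at
# frequency f peaks at A * (2*pi*f)^2.
A, f = 50.0, 10.0  # 50m sweeps, 10 zigzags per second (assumed)
a_peak = A * (2 * math.pi * f) ** 2
print(f"peak 'acceleration' of the dot: ~{a_peak / 9.81:.0f} g")
```

With those numbers the dot on the Moon does roughly twenty times lightspeed, and the zigzagging dot "pulls" tens of thousands of gees - while nothing physical moves faster than your wrist.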

How can such a ‘cat laser’ (messing with our heads the way we do with our pets) move faster than the speed of light and zig with impossible accelerations? See the answer below. But first, is it even possible that aliens - or giggling humans - could make ‘cat laser’ dots or tic-tacs or balls appear in mid-air, rather than merely against a wall?

Well, start with military laser systems for ionizing streaks of air and painting fake objects in the sky to serve as decoys. Here's an excellent article. And what's described is impressively close! But it’s still missing the actual secret sauce.

Even closer, see a version of the likely tech displayed here in the creation of luminous illusions in a patch of atmosphere.  And another here.

All right, we’re almost there, and all based on unclassified material. Yeah, but suppose you want the exciting beams to be entirely INVISIBLE? Necessary if you want to maintain the illusion of a discrete object. Well, you might have them excite infrared shell states that add up to the one you want to glow…. which brings us back to my first few questions, above, hm?

Some of you have put it all together by now. How the simplest hypothesis for these ‘sightings’ does not have to be the one calling for magical tech used by nasty, illogical aliens. 

== Final thought on cat-teasers ==

Okay, back to that last question: how does that cat-laser dot move at incredible gee accelerations and possibly exceed light-speed? After all that I said up to this point, you may be surprised to learn it's not because the light beam has no mass!  No, the reason is entirely different.

 It is because each individual, momentary spot that makes up that streak on the other side of the Grand Canyon or the face of the Moon - or your nearby, cat-clawed couch - departed from your hand laser separately. (If you are having trouble visualizing, try this with a garden hose; the droplets or splooshes are distinct. The wet streak on the fence only appears to be a connected thing.) 

Each very-brief dot your laser made on that wall - or the moon - was a separate phenomenon, adding together to offer the illusion of a continuing object. In fact, each transitory dot has nothing to do with the spots that came before or after, each of which traveled from your pointer to the wall at the speed of light (in air.)

This is very well-known. Astronomers can point at countless phenomena in space that seem to move faster than light. Phenomena - like the Searchlight Effect - can do that. Physical objects cannot. 

Got it?

== Aliens or not, stop falling for this malarkey ==

And yes, my biggest complaint about UFO nuttery is not that I am sure it’s not aliens! 

I am not certain of that! Though I know the range of possibilities about the alien as well as any living human. Heck, I’ll speculate about aliens at the drop of a molecule! 

No, my complaint, again, is that UFO nuttery is boring! Leaping to clutch the dumbest, most stereotypical and mystically primitive ‘theory,’ slathering on a voluptuous splatter of "I'm such a rebel" anti-authority pretentiousness, and then smacking in happy smugness like those French castle guards in Monty Python and the Holy Grail.

Whether these are dumb distracto-theories or actual space-jerks messing with us, both are just lazy farts sent in our general direction.

Ask questions and do better. 


* “Shouting At the Cosmos” – about METI “messaging” to aliens 

** The lively fun video trailer for Existence


David BrinLet's bring PREDICTION into politics, as it works in science!

 How well can we predict our near future? It's a perennial theme here, since my many jobs almost all involve thinking about tomorrow (Don't stop! It'll soon be here.) 

In fact, my top tactical recommendation from Polemical Judo is to make politics more about who's been right more often. Whether it's about using wagers (it works!) to get yammerers to back off, or simply comparing real world outcomes from each party's policies, or the vastly more important recommendation that we track predictive success in general... there's really nothing more useful and important that we aren't already doing.

== Prediction redux ==

This article well-summarizes the findings of Wharton Professor Philip Tetlock (author of Superforecasting: The Art & Science of Prediction), whose research between 1984 and 2004 showed that the average quality of predictions – explicit and honest and checkable ones – made by experts was little better than chance:

“Open any newspaper, watch any TV news show, and you find experts who forecast what’s coming. Some are cautious. More are bold and confident. A handful claim to be visionaries able to see decades into the future. With few exceptions, they are not in front of the camera because they possess any proven skill at forecasting. Accuracy is seldom even mentioned… The one undeniable talent they have is their skill at telling a compelling story with conviction, and that is enough. Many have become wealthy peddling forecasting of untested value to corporate executives, government officials and ordinary people who would never think of swallowing medicine of unknown efficacy and safety but who routinely pay for forecasts that are as dubious as elixirs sold from the back of a wagon.”

Though looking closer, Tetlock found that there were actually two statistically distinguishable groups of experts: the first failed to do better than the chimp (and often worse) but the second beat the chimp (though not by a wide margin.)

Following up – (and I’ve written about this before, including a damn good short story!) – "Tetlock’s Good Judgement Project, which commenced in 2011 in association with IARPA  (part of the Office of the Director of National Intelligence in the U.S.), found that (somewhat above-average) ordinary people, without access to highly classified intelligence information (but given access to broad-unclassified information), could make better forecasts about geopolitical events than professional analysts supported by a multi-billion dollar apparatus." (The parentheticals I added, because they matter!)

“It turned out that the top forecasters in the Good Judgement Project were 30% better than intelligence officers with access to actual classified information, and 60% better than the average.”

I’ve been on this topic for decades because I think there’s no more important project imaginable than a broad spectrum effort to find out who is right a lot!  Elsewhere I called for predictions registries which – voluntarily or involuntarily – would track forecasts and outcomes. At minimum, it would be a way of giving credibility to those who have earned it!  Moreover, it would let us study whatever methodology (even unconscious) was leading to the better results.
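One simple scoring rule such a registry could use is the Brier score: the mean squared error between stated probabilities and actual outcomes. A minimal sketch, with entirely invented track records, shows how it separates a confident pundit from a careful forecaster (and from Tetlock's proverbial chimp):

```python
def brier_score(record):
    """Mean squared error between stated probabilities and outcomes (0/1).
    0.0 is perfect, 0.25 is what always saying 50% earns, 1.0 is worst."""
    return sum((p - outcome) ** 2 for p, outcome in record) / len(record)

# An invented mini-registry: (claimed probability, what actually happened).
confident_pundit    = [(0.9, 0), (0.95, 1), (0.9, 0), (0.8, 1)]
careful_forecaster  = [(0.6, 1), (0.3, 0), (0.7, 1), (0.4, 0)]
dart_throwing_chimp = [(0.5, o) for _, o in confident_pundit]

for name, record in [("confident pundit", confident_pundit),
                     ("careful forecaster", careful_forecaster),
                     ("dart-throwing chimp", dart_throwing_chimp)]:
    print(f"{name}: Brier score {brier_score(record):.3f}")
```

The point is not the particular rule but that the score only exists if the forecast was explicit, probabilistic and checkable - exactly what a registry would force.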

== And the best prediction tests are wagers! ==

Here’s a fascinating tale – about a wager between Kevin Kelly – founder of WIRED Magazine – and Kirkpatrick Sale – author of numerous tomes (Rebels Against the Future) denouncing technology, modernity and calling for a mass world population culling, leading to a simplified life of hand farming villages. 

To be clear, I am partisan – Kevin is a friend and his ethos is very close to mine. 6000 years of history and even more millennia of archaeological findings show how utterly miserable life was for denizens of those “pastoral’ societies, yes even the horrifically brutal owner-lords who crushed freedom in 99% of human societies; even they suffered from parasites and soul-crushing ignorance and the surprise death of almost every child. That experiment has been tried, and absolutely always failed to deliver the happiness that Sale romantically claims they did. 

“In the mid-2000s, Sale cofounded the Middlebury Institute to promote the idea of secession. If states peeled off from the union, the theory went, Sale’s decentralized vision might get a little closer to reality. He was disappointed that the movement did not gain steam when George W. Bush was reelected. His romance with decentralization even led him to a blinkered view of the Confederacy, which he lauded for its commitment to concentrating power locally.”

But I digress. The crux is that Sale accepted a bet from Kelly, over whether by 2020 the world would be a hellscape. “Sale extemporaneously cited three factors: an economic disaster that would render the dollar worthless, causing a depression worse than the one in 1930; a rebellion of the poor against the monied; and a significant number of environmental catastrophes.”

 So how does today’s world of 2021 compare? Yes, these are dangerous times and the questions of class struggle and saving the planet are still... very serious questions. Their shared editor adjudicated at the end of 2020, and twisted himself into a knot to give Sale the benefit of the doubt... yet still he ruled in Kelly’s favor, because, um... aren’t you reading this in comfort and real hope for better times?

While the topics and facts about the 25 year bet are interesting, it is the meta that interests me! For the wager itself is a process for cornering the dogmatic! One I have been pushing for a decade as the only way it’s ever possible to pin dogmatists against a wall of actual facts.

Oh, you won’t make a cent. Kirkpatrick Sale has refused to accept that he lost, despite adjudication by the agreed-upon judge, who bent over backwards to concede some points to Sale. Only a cad would do that, but you’ll get the same result when you corner a MAGA fanatic with a wager demand. As any of our ancestors would testify, across 6000 years, anti-modernist, science hating, pastoralist-feudalist-nostalgist-romantics are also rationalizing liars. They won't pay any wager or ever recite the holy catechism of science: "I might be wrong."

But that’s not the point. For unlike Sale, your average MAGA lives for Macho. And refusing to either bet like a man or pay up leaves him exposed as a pants-wetting, wriggly-squirming weenie. And that savaging of his manly cred matters! It shatters their circle jerks – their Nuremberg rallies of magical lie-incantations.

And their wives (who can still vote) notice.

It doesn’t always work perfectly. But it is the only thing that does work.

== Trickle Down? It’s not just a phrase ==

Okay, the right is yowling over the proposed price tags for Biden/Democratic interventions. Yes, on paper $6 trillion is more than the estimated $4 trillion that Republicans have spent on their versions of stimulus... Supply Side gifts to the aristocracy. I admit that the total is bigger. But consider:


1. Biden will not get it all.

2. Biden is a sincere Keynesian - unlike the maniacs to his far left who subscribe to MMT "Modern Monetary Theory," which is almost as insane as Supply Side! 

A sincere Keynesian spends freely during harsh times to do needful things to grow the middle class... then uses boom times to pay down debt or at least keep deficits below GDP growth. That wing of the Democratic party has credibility at keeping that promise! Clinton, Jerry Brown, Gavin Newsom... all used good times to pay down debt. Again, let's bet over whether Republicans have ever been more fiscally responsible. Ever.

If Republicans were sincere, they would now say "all right, our method failed, so it's your turn to try yours. But we demand assurances that the pay-down part of the cycle is part of the plan." And surprise: if they demanded that, they'd get it. But that's not what they are after.

3. The most important factor though is effectiveness of investment.  BOTH parties seek to pour trillions into stimulus - with this difference. Supply Side (SS) stimulus of trillions added to the coffers of the rich does not work even slightly!  Adam Smith said it wouldn't, and once again the Scottish Sage of 1776 proved right. 

Very few of the open-mawed recipients of SS largesse ever invested in R&D, new products or productive capacity. Most poured it (as Smith said) into rentier properties, capital preservation and asset bubbles. And bizarre plutocratic, gilded-excesses like NFTs. Key point: Money velocity plummets to near zero!

That last one is the ultimate refutation. Perhaps some Republicans sincerely believed in Supply Side, in the beginning. But after FOUR perfect failures, it is now nothing but a mad cult, doubling down on magical chants and incantations.

In contrast we know that a trillion in infrastructure spending will at-minimum rebuild bridges and pump up Money Velocity (MV). It will very likely reduce poverty and help poor kids to become Smithian competitors. History shows that it will stimulate small business startups. It will pump R&D and domestic-sourced production. And it cannot hurt to spend some of it to reduce pollution.

(In fact, McConnell has openly said he opposes all this because it might actually work.)

Okay yes, I admit this. One Keynesian excess -- "guns & butter" during Vietnam -- resulted in overheated MV and high inflation. That is a danger! One that few economists fear right now. But that was an exception. MOST Keynesian interventions resulted in booms, increased tax revenues from higher economic activity, and consequent deficit reduction.

This is about the difference between one system that is largely proved, that has some dangers but is based upon factual historical experience... versus another that has utterly failed FOUR TIMES, that is scientifically utterly disproved, and that is now nothing more than a cult of chanted incantations. 

This isn't about 'left' vs. 'right.' It is about sane vs. insane.

Both sides want to 'invest' budget-busting trillions of stimulus. With the difference that one method stimulates and eventually pays for itself while the other is voodoo.

 I think it's time to go back to the wisdom of the Greatest Generation, who built the American Pax and infrastructure and universities and the biggest thriving middle class and the beginnings of social justice and the best time in the history of our species.

And finally.... 

Show me anyone who predicted this - and explicitly - earlier than in  my novel Earth and my nonfiction The Transparent Society. See this study: “Body-Worn Camera Research Shows Drop In Police Use Of Force.”  

No seriously. That's not a brag, but genuine curiosity. I can think of one example, though it's kinda extreme.


METhoughts about RAM and Storage Changes

My first Linux system in 1992 was a 386 with 4MB of RAM and a 120MB hard drive, of which (for some reason I've forgotten) Linux could only use about 90MB. My first hard drive was 70MB and could do 500KB/s for contiguous IO; my first Linux hard drive was probably a bit faster, maybe 1MB/s. My current Linux workstation has 64GB of RAM and 2*1TB NVMe devices that can sustain about 1.1GB/s. The laptop I’m using right now has 8GB of RAM and a 180GB SSD that can do 380MB/s.

My laptop has 2000* the RAM of my first Linux system and maybe 400* the contiguous IO speed. Currently I don’t even run a VM with less than 4GB of RAM. NB I’m not saying that smaller VMs aren’t useful, merely that I don’t happen to be using them now. Modern AMD64 CPUs support 2MB “huge pages”. If I used 2MB pages everywhere, a single page would be a smaller proportion of system RAM than a 4KB page was on my first Linux system!
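Checking that claim with the rough figures above:

```python
# Page size as a fraction of total RAM, for the systems discussed above.
KB, MB, GB = 1024, 1024**2, 1024**3

configs = [
    ("1992 386: 4KB pages, 4MB RAM",     4 * KB,  4 * MB),
    ("workstation: 4KB pages, 64GB RAM", 4 * KB, 64 * GB),
    ("workstation: 2MB pages, 64GB RAM", 2 * MB, 64 * GB),
]

for name, page, ram in configs:
    print(f"{name}: one page is 1/{ram // page} of RAM")
```

A 2MB page on the 64GB workstation is 1/32768 of RAM, versus 1/1024 for a 4KB page on the old 386 - so the huge page is indeed proportionally smaller.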

I am not suggesting using 2MB pages for general systems. On my workstations the majority of processes use less than 10MB of resident memory, and given the different uses for memory (mapped shared objects, memory mapped file IO, malloc(), stack, heap, etc) there would be a lot of inefficiency in having 2MB as the minimum allocation granularity. But as systems worked with 4MB of RAM or less and 4K pages, it would surely work to have only 2MB pages with 64GB or more of RAM.

Back in the 90s it seemed ridiculous to me to have 256 byte pages on a 68030 CPU, but 4K pages on a modern AMD64 system are even more ridiculous. Apparently AMD64 supports 1GB pages on some CPUs; that seems ridiculously large, but on a system with 1TB of RAM it’s comparable to 4K pages on my first Linux system. Currently AWS offers 24TB EC2 instances and Google Cloud Platform offers 12TB virtual machines. It might even make sense to have the entire OS using 1GB pages for some usage scenarios on such systems; wasting tens of GB of RAM to save TLB thrashing might be a good trade-off.
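To put rough numbers on the TLB-thrashing trade-off: the "reach" of a TLB is its entry count times the page size. The 1536-entry figure below is just an assumed ballpark for a modern AMD64 L2 data TLB; real CPUs have multiple TLB levels with different entry counts per page size.

```python
# Rough TLB-reach arithmetic with an assumed entry count.
KB, MB, GB, TB = 1024, 1024**2, 1024**3, 1024**4
ENTRIES = 1536  # assumed ballpark, varies by CPU

for name, page in [("4KB", 4 * KB), ("2MB", 2 * MB), ("1GB", 1 * GB)]:
    reach = ENTRIES * page  # RAM addressable without a TLB miss
    print(f"{name} pages: TLB reach ~{reach / GB:.3g} GB")
```

With 4KB pages the TLB covers only a few MB of a multi-TB machine; with 1GB pages the same TLB covers over a TB, which is why huge pages can matter so much on those 12TB and 24TB instances.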

My personal laptop has 2000* the RAM of my first Linux system and maybe 400* the contiguous IO speed. An employer recently assigned me a Thinkpad Carbon X1 Gen6 with an NVMe device that could sustain 5GB/s until the CPU overheated; that’s 5000* the contiguous IO speed of my first Linux hard drive. My first hard drive had a 28ms average access time, and my first Linux hard drive was probably a little better; let’s call it 20ms for the sake of discussion. It’s generally quoted that access times for NVMe are at best 10us; that’s 2000* better than my first Linux hard drive. As seek times are the main factor for swap performance, a laptop with 8GB of RAM and a fast NVMe device could be expected to give adequate performance with 2000* the swap of my first Linux system. For the work laptop in question I had 8GB of swap, and my personal laptop has 6GB of swap, which is somewhat comparable to the 4MB of swap on my first Linux system in that swap is roughly equal to RAM size. So I guess my personal laptop is performing better than could be expected.
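Running the numbers from the paragraphs above (all figures are the rough ones quoted in this post):

```python
# Rough figures quoted in this post.
MB, GB = 1024**2, 1024**3

first_ram, first_seek, first_io    = 4 * MB, 20e-3, 1 * MB     # 1992 386
laptop_ram, laptop_seek, laptop_io = 8 * GB, 10e-6, 380 * MB   # current laptop

print(f"RAM:           {laptop_ram // first_ram}x")
print(f"access time:   {first_seek / laptop_seek:.0f}x better")
print(f"contiguous IO: {laptop_io // first_io}x")

# If adequate swap scales with access time, 2000x the old 4MB of swap
# is about 8GB - close to what both laptops actually have configured.
print(f"scaled swap:   {2000 * 4 * MB / GB:.1f} GB")
```

RAM and access time have both improved by a factor of about 2000, while contiguous IO on this laptop has "only" improved by a few hundred, which is why seek-bound workloads like swap feel proportionally better than streaming ones.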

These are just some idle thoughts about hardware changes over the years. Don’t take it as advice for purchasing hardware and don’t take it too seriously in general. Also when writing comments don’t restrict yourself to being overly serious, feel free to run the numbers on what systems with petabytes of Optane might be like, speculate on what NUMA systems in laptops might be like, etc. Go wild.