Planet Russell


LongNow: Six Ways to Think Long-term: A Cognitive Toolkit for Good Ancestors

Illustration: Tom Lee at Rocket Visual

Human beings have an astonishing evolutionary gift: agile imaginations that can shift in an instant from thinking on a scale of seconds to a scale of years or even centuries. Our minds constantly dance across multiple time horizons. One moment we can be making a quickfire response to a text and the next thinking about saving for our pensions or planting an acorn in the ground for posterity. We are experts at the temporal pirouette. Whether we are fully making use of this gift is, however, another matter.

The need to draw on our capacity to think long-term has never been more urgent, whether in public health care (such as planning for the next pandemic on the horizon), in dealing with technological risks (such as AI-controlled lethal autonomous weapons), or in confronting the threats of an ecological crisis, where nations sit around international conference tables, bickering about their near-term interests, while the planet burns and species disappear. At the same time, businesses can barely see past the next quarterly report, we are addicted to 24/7 instant news, and we find it hard to resist the Buy Now button.

What can we do to overcome the tyranny of the now? The easy answer is to say we need more long-term thinking. But here’s the problem: almost nobody really knows what it is.

In researching my latest book, The Good Ancestor: How to Think Long Term in a Short-Term World, I spoke to dozens of experts — psychologists, futurists, economists, public officials, investors — who were all convinced of the need for more long-term thinking to overcome the pathological short-termism of the modern world, but few of them could give me a clear sense of what it means, how it works, what time horizons are involved and what steps we must take to make it the norm. This intellectual vacuum amounts to nothing less than a conceptual emergency.

Let’s start with the question, ‘how long is long-term?’ Forget the corporate vision of ‘long-term’, which rarely extends beyond a decade. Instead, consider a hundred years as a minimum threshold for long-term thinking. This is the current length of a long human lifespan, taking us beyond the ego boundary of our own mortality, so we begin to imagine futures that we can influence but not participate in ourselves. Where possible we should attempt to think longer, for instance taking inspiration from cultural endeavours like the 10,000 Year Clock (the Long Now Foundation’s flagship project), which is being designed to stay accurate for ten millennia. At the very least, when you aim to think ‘long-term’, take a deep breath and think ‘a hundred years and more’.

The Tug of War for Time

It is just as crucial to equip ourselves with a mental framework that identifies different forms of long-term thinking. My own approach is represented in a graphic I call ‘The Tug of War for Time’ (see below). On one side, six drivers of short-termism threaten to drag us over the edge of civilizational breakdown. On the other, six ways to think long-term are drawing us towards a culture of longer time horizons and responsibility for the future of humankind.

[Graphic: The Tug of War for Time]

These six ways to think long are not a simplistic blueprint for a new economic or political system, but rather comprise a cognitive toolkit for challenging our obsession with the here and now. They offer conceptual scaffolding for answering what I consider to be the most important question of our time: How can we be good ancestors?

The tug of war for time is the defining struggle of our generation. It is going on both inside our own minds and in our societies. Its outcome will affect the fate of the billions upon billions of people who will inhabit the future. In other words, it matters. So let’s unpack it a little.

Drivers of Short-termism

Amongst the six drivers of short-termism, we all know about the power of digital distraction to immerse us in a here-and-now addiction of clicks, swipes and scrolls. A deeper driver has been the growing tyranny of the clock since the Middle Ages. The mechanical clock was the key machine of the Industrial Revolution, regimenting and speeding up time itself, bringing the future ever-nearer: by 01700 most clocks had minute hands and by 01800 second hands were standard. And it still dominates our daily lives, strapped to our wrists and etched onto our screens.

Speculative capitalism has been a source of boom-bust turbulence at least since the Dutch Tulip Bubble of 01637, through to the 02008 financial crash and the next one waiting around the corner. Electoral cycles also play their part, generating a myopic political presentism where politicians can barely see beyond the next poll or the latest tweet. Such short-termism is amplified by a world of networked uncertainty, where events and risks are increasingly interdependent and globalised, raising the prospect of rapid contagion effects and rendering even the near-term future almost unreadable.

Looming behind it all is our obsession with perpetual progress, especially the pursuit of endless GDP growth, which pushes the Earth system over critical thresholds of carbon emissions, biodiversity loss and other planetary boundaries. We are like a kid who believes they can keep blowing up the balloon, bigger and bigger, without any prospect that it could ever burst.

Put these six drivers together and you get a toxic cocktail of short-termism that could send us into a blind-drunk civilizational freefall. As Jared Diamond argues, ‘short-term decision making’ coupled with an absence of ‘courageous long-term thinking’ has been at the root of civilizational collapse for centuries. A stark warning, and one that prompts us to unpack the six ways to think long.

Six Ways to Think Long-term

1. Deep-Time Humility: grasp we are an eyeblink in cosmic time

Deep-time humility is about recognising that the two hundred thousand years that humankind has graced the earth is a mere eyeblink in the cosmic story. As John McPhee (who coined the concept of deep time in 01980) put it: ‘Consider the earth’s history as the old measure of the English yard, the distance from the king’s nose to the tip of his outstretched hand. One stroke of a nail file on his middle finger erases human history.’

But just as there is deep time behind us, there is also deep time ahead. In six billion years, any creatures that are around to see our sun die will be as different from us as we are from the first single-celled bacteria.

Yet why exactly do long-term thinkers need this sense of temporal humility? Deep time prompts us to consider the consequences of our actions far beyond our own lifetimes, and puts us back in touch with the long-term cycles of the living world like the carbon cycle. But it also helps us grasp our destructive potential: in an incredibly short period of time — only a couple of centuries — we have endangered a world that took billions of years to evolve. We are just a tiny link in the great chain of living organisms, so who are we to put it all in jeopardy with our ecological blindness and deadly technologies? Don’t we have an obligation to our planetary future and the generations of humans and other species to come?

2. Legacy Mindset: be remembered well by posterity

We are the inheritors of extraordinary legacies from the past — from those who planted the first seeds, built the cities where we now live, and made the medical discoveries we benefit from. But alongside the good ancestors are the ‘bad ancestors’, such as those who bequeathed us colonial and slavery-era racism and prejudice that deeply permeate today’s criminal justice systems. This raises the question of what legacies we will leave to future generations: how do we want to be remembered by posterity?

The challenge is to leave a legacy that goes beyond egoistic legacy (like a Russian oligarch who wants a wing of an art gallery named after them) or even familial legacy (like wishing to pass on property or cultural traditions to our children). If we hope to be good ancestors, we need to develop a transcendent ‘legacy mindset’, where we aim to be remembered well by the generations we will never know, by the universal strangers of the future.

We might look for inspiration in many places. The Māori concept of whakapapa (‘genealogy’) describes a continuous lifeline that connects an individual to the past, present and future, and generates a sense of respect for the traditions of previous generations while being mindful of those yet to come. In Katie Paterson’s art project Future Library, every year for a hundred years a famous writer (the first was Margaret Atwood) is depositing a new work, which will remain unread until 02114, when they will all be printed on paper made from a thousand trees that have been planted in a forest outside Oslo. Then there are activists like Wangari Maathai, the first African woman to win the Nobel Peace Prize. In 01977 she founded the Green Belt Movement in Kenya, which by the time of her death in 02011 had trained more than 25,000 women in forestry skills and planted 40 million trees. That’s how to pass on a legacy gift to the future.

3. Intergenerational Justice: consider the seventh generation ahead

‘Why should I care about future generations? What have they ever done for me?’ This clever quip attributed to Groucho Marx highlights the issue of intergenerational justice. This is not the legacy question of how we will be remembered, but the moral question of what responsibilities we have to the ‘futureholders’ — the generations who will succeed us.

One approach, rooted in utilitarian philosophy, is to recognise that at least in terms of sheer numbers, the current population is easily outweighed by all those who will come after us. In a calculation made by writer Richard Fisher, around 100 billion people have lived and died in the past 50,000 years. But they, together with the 7.7 billion people currently alive, are far outweighed by the estimated 6.75 trillion people who will be born over the next 50,000 years, if this century’s birth rate is maintained (see graphic below). Even in just the next millennium, more than 135 billion people are likely to be born. How could we possibly ignore their wellbeing, and think that our own is of such greater value?

[Graphic: estimated numbers of past, present and future humans]

Such thinking is embodied in the idea of ‘seventh-generation decision making’, an ethic of ecological stewardship practised amongst some Native American peoples such as the Oglala Lakota Nation in South Dakota: community decisions take into account the impacts seven generations from the present. This ideal is fast becoming a cornerstone of the growing global intergenerational justice movement, inspiring groups such as Our Children’s Trust (fighting for the legal rights of future generations in the US) and Future Design in Japan (which promotes citizens’ assemblies for city planning, where residents imagine themselves being from future generations).

4. Cathedral Thinking: plan projects beyond a human lifetime

Cathedral thinking is the practice of envisaging and embarking on projects with time horizons stretching decades and even centuries into the future, just like medieval cathedral builders who began despite knowing they were unlikely to see construction finished within their own lifetimes. Greta Thunberg has said that it will take ‘cathedral thinking’ to tackle the climate crisis.

Historically, cathedral thinking has taken different forms. Apart from religious buildings, there are public works projects such as the sewers built in Victorian London after the ‘Great Stink’ of 01858, which are still in use today (we might call this ‘sewer thinking’ rather than ‘cathedral thinking’). Scientific endeavours include the Svalbard Global Seed Vault in the remote Arctic, which contains over one million seeds from more than 6,000 species, and intends to keep them safe in an indestructible rock bunker for at least a thousand years. We should also include social and political movements with long time horizons, such as the Suffragettes, who formed their first organisation in Manchester in 01867 and didn’t achieve their aim of votes for women for over half a century.

Inspiring stuff. But remember that cathedral thinking can be directed towards narrow and self-serving ends. Hitler hoped to create a Thousand Year Reich. Dictators have sought to preserve their power and privilege for their progeny through the generations: just look at North Korea. In the corporate world, Gus Levy, former head of investment bank Goldman Sachs, once proudly declared, ‘We’re greedy, but long-term greedy, not short-term greedy’.

That’s why cathedral thinking alone is not enough to create a long-term civilization that respects the interests of future generations. It needs to be guided by other approaches, such as intergenerational justice and a transcendent goal (see below).

5. Holistic Forecasting: envision multiple pathways for civilization

Numerous studies demonstrate that most forecasting professionals tend to have a poor record at predicting future events. Yet we must still try to map out the possible long-term trajectories of human civilization itself — what I call holistic forecasting — otherwise we will end up only dealing with crises as they hit us in the present. Experts in the fields of global risk studies and scenario planning have identified three broad pathways, which I call Breakdown, Reform and Transformation (see graphic below).

[Graphic: the three trajectories of Breakdown, Reform and Transformation]

Breakdown is the path of business-as-usual. We continue striving for the old twentieth-century goal of material economic progress but soon reach a point of societal and institutional collapse in the near term as we fail to respond to rampant ecological and technological crises, and cross dangerous civilizational tipping points (think Cormac McCarthy’s The Road).

A more likely trajectory is Reform, where we respond to global crises such as climate change but in an inadequate and piecemeal way that merely extends the Breakdown curve outwards, to a greater or lesser extent. Here governments put their faith in reformist ideals such as ‘green growth’, ‘reinventing capitalism’, or a belief that technological solutions are just around the corner.

A third trajectory is Transformation, where we see a radical shift in the values and institutions of society towards a more long-term sustainable civilization. For instance, we jump off the Breakdown curve onto a new pathway dominated by post-growth economic models such as Doughnut Economics or a Green New Deal.

Note the crucial line of Disruptions. These are disruptive innovations or events that offer an opportunity to switch from one curve onto another. It could be a new technology like blockchain, the rise of a political movement like Black Lives Matter, or a global pandemic like COVID-19. Successful long-term thinking requires turning these disruptions towards Transformative change and ensuring they are not captured by the old system.

6. Transcendent Goal: strive for one-planet thriving

Every society, wrote astronomer Carl Sagan, needs a ‘telos’ to guide it — ‘a long-term goal and a sacred project’. What are the options? While the goal of material progress served us well in the past, we now know too much about its collateral damage: fossil fuels and material waste have pushed us into the Anthropocene, the perilous new era characterised by a steep upward trend in damaging planetary indicators called the Great Acceleration (see graphic).

[Graphic: the Great Acceleration]

An alternative transcendent goal is to see our destiny in the stars: the only way to guarantee the survival of our species is to escape the confines of Earth and colonise other worlds. Yet terraforming somewhere like Mars to make it habitable could take centuries — if it could be done at all. Additionally, the more we set our sights on escaping to other worlds, the less likely we are to look after our existing one. As cosmologist Martin Rees points out, ‘It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve these problems here.’

That’s why our primary goal should be to learn to live within the biocapacity of the only planet we know that sustains life. This is the fundamental principle of the field of ecological economics developed by visionary thinkers such as Herman Daly: don’t use more resources than the earth can naturally regenerate (for instance, only harvest timber as fast as it can grow back), and don’t create more wastes than it can naturally absorb (so avoid burning fossil fuels that can’t be absorbed by the oceans and other carbon sinks).

Once we’ve learned to do this, we can do as much terraforming of Mars as we like: as any mountaineer knows, make sure your basecamp is in order with ample supplies before you tackle a risky summit. But according to the Global Footprint Network, we are not even close and currently use up the equivalent of around 1.6 planet Earths each year. That’s short-termism of the most deadly kind. A transcendent goal of one-planet thriving is our best guarantee of a long-term future. And we do it by caring about place as much as rethinking time.

Bring on the Time Rebellion

So there we have it: a brief overview of a cognitive toolkit we could draw on to survive and thrive into the centuries and millennia to come. None of these six ways is enough alone to create a long-term revolution of the human mind — a fundamental shift in our perception of time. But together — and when practised by a critical mass of people and organisations — a new age of long-term thinking could emerge out of their synergy.

Is this a likely prospect? Can we win the tug of war against short-termism?

‘Only a crisis — actual or perceived — produces real change,’ wrote economist Milton Friedman. Out of the ashes of World War Two came pioneering long-term institutions such as the World Health Organisation, the European Union and welfare states. So too out of the global crisis of COVID-19 could emerge the long-term institutions we need to tackle the challenges of our own time: climate change, technology threats, the racism and inequality structured into our political and economic systems. Now is the moment for expanding our time horizons into a longer now. Now is the moment to become a time rebel.


Roman Krznaric is a public philosopher, research fellow of the Long Now Foundation, and founder of the world’s first Empathy Museum. His latest book is The Good Ancestor: How to Think Long Term in a Short-Term World. He lives in Oxford, UK. @romankrznaric

Note: All graphics from The Good Ancestor: How to Think Long Term in a Short-Term World by Roman Krznaric. Graphic design by Nigel Hawtin. Licensed under CC BY-NC-ND.

Worse Than Failure: Mega-Agile

A long time ago, way back in 2009, Bruce W worked for the Mega-Bureaucracy. It was a slog of endless forms, endless meetings, endless projects that just never hit a final ship date. The Mega-Bureaucracy felt that the organization which manages best manages the most, and ensured that there were six tons of management overhead attached to the smallest project.

After eight years in that position, Bruce finally left for another division in the same company.

But during those eight years, Bruce learned a few things about dealing with the Mega-Bureaucracy. His division was a small division, and while Bruce needed to interface with the Mega-Bureaucracy, he could shield the other developers on his team from it, as much as possible. This let them get embedded into the business unit, working closely with the end users, revising requirements on the fly based on rapid feedback and a quick release cycle. It was, in a word, "Agile", in the most realistic version of the term: focus on delivering value to your users, and build processes which support that. They were a small team, and there were many layers of management above them, which served to blunt and filter some of the mandates of the Mega-Bureaucracy, and that let them stay Agile.

Nothing, however, protects against management excess better than a track record of success. They had a reputation for being dangerous heretics: they released to test continuously and to production once a month; they changed requirements as needs changed, meaning what they delivered was almost never what they specced, but it was what their users needed; and, worst of all, their software beat all the key Mega-Bureaucracy metrics. It performed better, it had fewer reported defects, and its return-on-investment numbers showed the software saved the division millions of dollars in operating costs.

The Mega-Bureaucracy seethed at these heretics, but the C-level of the company just saw a high functioning team. There was nothing that the Bureaucracy could do to bring them in line-

-at least until someone opened up a trade magazine, skimmed the buzzwords, and said, "Maybe our processes are too cumbersome. We should do Agile. Company wide, let's lay out an Agile Process."

There's a huge difference between the "agile" created by a self-organizing team, that grows based on learning what works best for the team and their users, and the kind of "agile" that's imposed from the corporate overlords.

First, you couldn't do Agile without adopting the Agile Process, which in Mega-Bureaucracy-speak meant "we're doing a very specific flavor of scrum". This meant morning standups were mandated. You needed a scrum-master on the team, which would be a resource drawn from the project management office, and well, they'd also pull double duty as the project manager. The word "requirements" was forbidden, you had to write User Stories, and then estimate those User Stories as taking a certain number of hours. Then you could hold your Sprint Planning meeting, where you gathered a bucket of stories that would fit within your next sprint, which would be a 4-week cadence, but that was just the sprint planning cadence. Releases to production would happen only quarterly. Once user stories were written, they were never to be changed, just potentially replaced with a new story, but once a story was added to a sprint, you were expected to implement it, as written. No changes based on user feedback. At the end of the sprint, you'd have a whopping big sprint retrospective, and since this was a new process, instead of letting the team self-evaluate in private and make adjustments, management from all levels of the company would sit in on the retrospectives to be "informed" about the "challenges" in adopting the new process.

The resulting changes pleased nearly no one. The developers hated it, the users, especially in Bruce's division, hated it, management hated it. But the Mega-Bureaucracy had won; the dangerous heretics who didn't follow the process now were following the process. They were Agile.

That is what motivated Bruce to transfer to a new position.

Two years later, he attended an all-IT webcast. The CIO announced that they'd spun up a new pilot development team. This new team would get embedded into the business unit, work closely with the end user, revise requirements on the fly based on rapid feedback and a continuous release cycle. "This is something brand new for our company, and we're excited to see where it goes!"



Planet Debian: Enrico Zini: More notable people

René Carmille (8 January 1886 – 25 January 1945) was a French humanitarian, civil servant, and member of the French Resistance. During World War II, Carmille saved tens of thousands of Jews in Nazi-occupied France. In his capacity at the government's Demographics Department, Carmille sabotaged the Nazi census of France, saving tens of thousands of Jewish people from death camps.
Gino Strada (born Luigi Strada; 21 April 1948) is an Italian war surgeon and founder of Emergency, a UN-recognized international non-governmental organization.
Syndrome K (il morbo di K) was a fictitious disease invented in 1943, during the Second World War, by Adriano Ossicini together with Dr Giovanni Borromeo, in order to save a number of Italians of Jewish faith from the Nazi-Fascist persecutions in Rome.[1][2][3][4]

Sam Varghese: The Indian Government cheated my late father of Rs 332,775

Back in 1976, the Indian Government, for whom my father, Ipe Samuel Varghese, worked in Colombo, cheated him of Rs 13,500 – the gratuity that he was supposed to be paid when he was dismissed from the Indian High Commission (the equivalent of the embassy) in Colombo.

That sum, adjusted for inflation, works out to Rs 332,775 in today’s rupees.

But he was not paid this amount because the embassy said he had contravened rules by working at a second job, something which everyone at the embassy was doing, because what people were paid was basically a starvation wage. But my father had rubbed against powerful interests in the embassy who were making money by taking bribes from poor Sri Lankan Tamils who were applying for Indian passports to return to India.

But let me start at the beginning. My father went to Sri Lanka (then known as Ceylon) in 1947, looking for employment after the war. He took up a job as a teacher, something which was his first love. But in 1956, when the Sri Lankan Government nationalised the teaching profession, he was left without a job.

It was then that he began working for the Indian High Commission which was located in Colpetty, and later in Fort. As he was a local recruit, he was not given diplomatic status. The one benefit was that our family did not need visas to stay in Sri Lanka – we were all Indian citizens – but only needed to obtain passports once we reached the age of 14.

As my father had six children, the pay from the High Commission was not enough to provide for the household. He would tutor some students, either at our house, or else at their houses. He was very strict about his work, and was unwilling to compromise on any rules.

There were numerous people who worked alongside him and they would occasionally take a bribe from here and there and push the case of some person or the other for a passport. The Tamils, who had gone to Sri Lanka to work on the tea plantations, were being repatriated under a pact negotiated by Sirima Bandaranaike, the Sri Lankan prime minister, and Lal Bahadur Shastri, her Indian counterpart. It was thus known as the Sirima-Shastri pact.

There was a lot of anti-Tamil sentiment brewing in Sri Lanka at the time, feelings that blew up into the civil war from 1983 onwards, a conflict that only ended in May 2009. Thus, many Tamils were anxious and wanted to do whatever it took to get an Indian passport.

And in this, they found many High Commission employees more than willing to accept bribes in order to push their cases. But they came up against a brick wall in my father. There was another gentleman who was an impediment too, a man named Navamoni. The others used to call him Koranga Moonji Dorai – meaning monkey face man – as he was a wizened old man. He would lose his temper and shout at people when they tried to mollify him with this or that to push their cases.

Thus, it was only a matter of time before some of my father’s colleagues went to the higher-ups and complained that he was earning money outside his High Commission job. They all were as well, but nobody had rubbed up against the powers-that-be. By then, due to his competence, my father had been put in charge of the passport section, a very powerful post, because he could approve or turn down any application.

The men who wanted to make money through bribes found him a terrible obstacle. One day in May, when my mother called up the High Commission, she was told that my father no longer worked there. Shocked, she waited until he came home to find out the truth. We had no telephone at home.

The family was not in the best financial position at the time. We had a few weeks to return to India as we had been staying in Sri Lanka on the strength of my father’s employment. And then came the biggest shock: the money my father had worked for all those 20 years was denied to him.

We came back to India by train and ferry; we could not afford to fly back. It was a miserable journey and for many years after that we suffered financial hardship because we had no money to tide us over that period.

Many years later, after I migrated to Australia, I went to the Indian Consulate in Coburg, a suburb of Melbourne, to get a new passport. I happened to speak to the consul and asked him what I should do with my old passport. He made my blood boil by telling me that it was my patriotic duty to send it by post to the Indian embassy in Canberra. I told him that I owed India nothing considering the manner in which it had treated my father. And I added that if the Indian authorities wanted my old passport, then they could damn well pay for the postage. He was not happy with my reply.

India is the only country in the world which will not restore a person’s citizenship if he asks for it in his later years for sentimental reasons, just so that he can die in the land of his birth. India is also the only country that insists its own former citizens obtain a visa to enter what is their own homeland. Money is not the only thing for the Indian Government; it is everything.

Every other country will restore a person’s citizenship in their later years if they ask for it for sentimental reasons. Not India.

Planet Debian: Russ Allbery: PGP::Sign 1.01

This is mostly a test-suite fix for my Perl module to automate creation and verification of detached signatures.

The 1.00 release of PGP::Sign added support for GnuPG v2 and changed the default to assume that gpg is GnuPG v2, but this isn't the case on some older operating systems (particularly non-Linux ones). That in turn caused chaos for automated testing.

This release fixes the test suite to detect when gpg is GnuPG v1 and run the test suite accordingly, trying to exercise as much of the module as possible in that case. It also fixes a few issues found by Perl::Critic::Freenode.
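
As a rough illustration of the kind of check involved (this is not the module's actual test code), the first line of gpg's version output names the implementation and major version, so a quick way to see which one a given gpg binary provides is something like:

$ gpg --version | head -n 1
gpg (GnuPG) 2.2.20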

You can get the latest release from CPAN or from the PGP::Sign distribution page.

,

Planet Debian: Chris Lamb: The comedy is over

By now, everyone must have seen the versions of comedy shows with the laugh track edited out. The removal of the laughter doesn't just reveal the artificial nature of television and how it conscripts the viewer into laughing along; by subverting key conversational conventions, it reveals some of the myriad and subtle ways humans communicate with one another:

Although the show's conversation is ostensibly between two people, the viewer comprises a silent third actor through whom they, and therefore we, are meant to laugh along. Then, when this third character is forcibly muted, the viewer not only has to endure the stilted gaps, they also sense an uncanny loss of familiarity by losing their ‘own’ part in the script.

A similar phenomenon can be seen in other art forms. In Garfield Minus Garfield, the forced negative spaces that these pauses introduce are discomfiting, almost to the level of performance art:

But when the technique is applied to other TV shows such as The Big Bang Theory, it is unsettling in entirely different ways, exposing the dysfunctional relationships and the adorkable misogyny at the heart of the show:

Once you start to look for it, the ur-elements of the audience, response and timing in the way we communicate are everywhere, from the gaps we leave so that others instinctively know when we have finished speaking, to the myriad ways you can edit a film. These components are always present; it is only when one of them is taken away that they become more apparent. Today, the small delays added by videoconferencing add an uncanny awkwardness to many of our everyday interactions too. It is said that "comedy is tragedy plus timing", so it is unsurprising that Zoom's undermining of timing leads, by this simple calculus of human interactions, to feelings of... tragedy.


§


Leaving aside the usual comments about Pavlovian conditioning and the shows that are the exceptions, complaints against canned laughter are the domain of the pub bore. I will therefore only add two brief remarks. First, rather than being cynically added to mask the lack of 'real' comedy, laugh tracks were initially added to replicate the live audience of existing shows. In other words, without a laugh track, these new shows might have ironically appeared almost as eerie as the fan edits cited above are today.

Secondly, although laugh tracks are described as "false", this is not entirely correct. After all, someone did actually laugh, even if it was for an entirely different joke. In his Simulacra and Simulation, cultural theorist Jean Baudrillard might have poetically identified canned laughter as a "reflection of a profound reality", rather than an outright falsehood. One day, when this laughter becomes entirely algorithmically generated, Baudrillard would describe it as "an order of sorcery", placing it metaphysically on the same level as the entirely pumpkin-free Pumpkin Spiced Latte.


§


For a variety of reasons I recently decided to try interacting with various social media platforms in new ways. One way of loosening my addiction to this pornography of the amygdala was to hide the number of replies, 'likes' and related numbers:

The effect of installing this extension was immediate. I caught my eyes darting to where the numbers had been and realised I had been subconsciously looking for the input — and perhaps even the outright validation — of the masses. To be sure, these numbers can be relevant and sometimes useful, but they do implicitly involve delegating part of your responsibility of thinking for yourself to the vox populi, or the Greek chorus of the 21st century.

Like many of you reading this, I am sure I told myself that the number of 'likes' has no bearing on whether I should agree with something, but hiding the numbers reveals much of this might have been a convenient fiction; as an entire century of discoveries in behavioural economics has demonstrated, all the pleasingly-satisfying arguments for rational free-market economics stand no chance against our inherent buggy mammalian brains.


§


Tying a few things together, when attempting to doomscroll through social media without these numbers, I realised that social media without the scorecard of engagement is almost exactly like watching these shows without the laugh track.

Without the number of 'retweets', the lazy prompts to remind you exactly when, how and for how much to respond are removed, and replaced with the same stilted silences of those edited scenes from Friends. At times, the existential loneliness of Garfield Minus Garfield creeps in too, and there is more than enough of the dysfunctional, validation-seeking and parasocial 'conversations' of The Big Bang Theory. Most of all, the whole exercise permits a certain level of detached, critical analysis, allowing one to observe that the platforms often feel like a pre-written script with your 'friends' cast as actors, all perpetuated on the heady fumes of rows INSERT-ed into a database on the other side of the world.

I'm not quite sure how this will affect my usage of the platforms, and any time spent away from these sites may mean fewer online connections at a time when we all need them the most. But as Karal Marling, professor at the University of Minnesota, wrote about artificial audiences: "Let me be the laugh track."

Planet Debian: Andrew Cater: Debian Stretch release 9.13 - testing of images now complete 202007181950 or so

And we're getting a lot closer. Last edits to the wiki for the testing page. Tests marked as passed. There are a bunch of architectures where the media images are still to be built and the source media will still need to be written as well.

It's been a really good (and fairly long) day. I'm very glad indeed that I'm not staying up until the bitter end. It's always fun - I'm also very glad that, as it panned out, we didn't end up also doing the next Buster point release this weekend - it would have been a massive workload.

So Stretch passes to LTS, Jessie has passed to ELTS - and we go onwards and upwards to the next point release - and then, in due course, to Bullseye when it's ready (probably the first quarter of next year?).

As ever, thanks due to all involved - the folks behind the scenes who maintain cdimage.debian.org, to Sledge, RattusRattus, Isy and schweer. Done for about a fortnight or so until the next Buster release.

I will try to blog on other things, honest :)

Planet Debian: Andrew Cater: Debian Stretch 9.13 release - blog post 2 - testing of basic .iso images ongoing as at 202007181655

The last of the tests for the basic .iso images are finishing. Live image testing is starting. Lots from RattusRattus, Isy, Sledge and schweer besides myself. New this time round is an experiment in videoconference linking which has made this whole experience much more human in lockdown - and means I'm missing Cambridge.

Two questions have come up as we've been going through. There are images made for Intel-based Macs that were first made for Jessie or so; there are also images for S390X. These can't be tested because none of the testers have the hardware. Nobody has yet recorded issues with them, but it could be that nobody has used them recently or reported problems.

If nobody is bothered with these: they are prime candidates for removal prior to Bullseye.

Planet Debian: Dirk Eddelbuettel: tint 0.1.3: Fixes for html mode, new demo

A new version 0.1.3 of the tint package arrived at CRAN today. It corrects some features for html output, notably margin notes and references. It also contains a new example for inline references.

The full list of changes is below.

Changes in tint version 0.1.3 (2020-07-18)

  • A new minimal demo was added showing inline references (Dirk addressing #42).

  • Code for margin notes and reference in html mode was updated with thanks to tufte (Dirk in #43 and #44 addressing #40).

  • The README.md was updated with a new 'See Also' section and a new badge.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Dirk Eddelbuettel: drat 0.1.8: Minor test fix


A new version of drat arrived on CRAN today. This is a follow-up release to 0.1.7 from a week ago. It contains a quick follow-up by Felix Ernst to correct one of the tests, which misbehaved under the old release of R still being tested at CRAN.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

As your mother told you: Friends don’t let friends install random git commit snapshots. Rolled-up releases it is. drat is easy to use, documented by five vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.8 (2020-07-18)

  • The archive pruning test code was corrected for r-oldrel (Felix Ernst in #105 fixing #104).

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Ritesh Raj Sarraf: Laptop Mode Tools 1.74

Laptop Mode Tools 1.74

Laptop Mode Tools version 1.74 has been released. This release includes important bug fixes, some defaults settings updated to current driver support in Linux and support for devices with nouveau based nVIDIA cards.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository

1.74 - Sat Jul 18 19:10:40 IST 2020

* With 4.15+ kernels, Linux Intel SATA has a better link power
  saving policy, med_power_with_dipm, which should be the recommended
  one to use
* Disable defaults for syslog logging
* Initialize LM_VERBOSE with default to disabled
* Merge pull request #157 from rickysarraf/nouveau
* Add power saving module for nouveau cards
* Disable ethernet module by default
* Add board-specific folder and documentation
* Add execute bit on module radeon-dpm
* Drop unlock because there is no lock acquired


What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

Planet Debian: Andrew Cater: Debian "Stretch" 9.13 release preparations ongoing

Just checking in. Debian "Jessie" == oldoldstable == Debian 8 was the previous Debian Long Term Support release. Debian LTS seeks to provide support for Debian releases for five years. LTS support for Jessie ended on 30th June 2020.

A limited subset of Jessie will now move to ELTS - Extended Long Term Support and another two years projected support.

Neither LTS nor ELTS are supported  any longer by the main Debian folks: instead, they are supported on a commercial basis by a group of Debian volunteers and companies, coordinated by a company led by Raphael Hertzog.

Debian releases are fully supported by the Debian project for two years after the release of the next version.  Today is the final release of Stretch by Debian to incorporate security fixes and so on up to the handover to LTS.

If you are currently running Stretch, you do not need the new CD images. Apt / apt-get update will supply you the updates up until today. Hereafter, Stretch will be supported only as LTS - see LTS.


Planet Debian: Dima Kogan: Converting images while extracting a tarball

Last week at the lab I received a data dump: a gzip-compressed tarball with lots of images in it. The images are all uncompressed .pgm, with the whole tarball weighing in at ~ 1TB. I tried to extract it, and after chugging all day, it ran out of disk space. Added more disk, tried again: out of space again. Just getting a listing of the archive contents (tar tvfz) took something like 8 hours.

Clearly this is unreasonable. I made an executive decision to use .jpg files instead: I'd take the small image quality hit for the massive gains in storage efficiency. But the tarball has .pgm and just extracting the thing is challenging. So I'm now extracting the archive, and converting all the .pgm images to .jpg as soon as they hit disk. How? Glad you asked!

I'm running two parallel terminal sessions (I'm using screen, but you can do whatever you like).

Session 1

< archive.tar.gz unpigz -p20 | tar xv

Here I'm just extracting the archive to disk normally, using unpigz for the decompression (instead of letting plain, old tar handle it) to get parallelization.

Session 2

inotifywait -r PATH -e close_write -m | mawk -Winteractive '/pgm$/ { print $1$3 }' | parallel -v -j10 'convert {} -quality 96 {.}.jpg && rm {}'

This is the secret sauce. I'm using inotifywait to tell me when any file is closed for writing in a subdirectory of PATH. Then I mawk it to only tell me when .pgm files are done being written, then I convert them to .jpg, and delete the .pgm when that's done. I'm using GNU Parallel to parallelize the image conversion. Otherwise the image conversion doesn't keep up.

This is going to take at least all day, but I'm reasonably confident that it will actually finish successfully, and I can then actually do stuff with the data.

Planet Debian: David Bremner: git-annex and ikiwiki, not as hard as I expected

Background

So apparently there's this pandemic thing, which means I'm teaching "Alternate Delivery" courses now. These are just like online courses, except possibly more synchronous, definitely less polished, and the tuition money doesn't go to the College of Extended Learning. I figure I'll need to manage share videos, and our learning management system, in the immortal words of Marie Kondo, does not bring me joy. This has caused me to revisit the problem of sharing large files in an ikiwiki based site (like the one you are reading).

My goto solution for large file management is git-annex. The last time I looked at this (a decade ago or so?), I was blocked by git-annex using symlinks and ikiwiki ignoring them for security related reasons. Since then two things changed which made things relatively easy.

  1. I started using the rsync_command ikiwiki option to deploy my site.

  2. git-annex went through several design iterations for allowing non-symlink access to large files.

TL;DR

In my ikiwiki config

    # attempt to hardlink source files? (optimisation for large files)
    hardlink => 1,
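
Since deployment happens via the rsync_command option (point 1 above), that option lives in the same setup file; a sketch of what it can look like, with the user, host and path below being placeholders:

    # push the built site with rsync (user, host and path are placeholders)
    rsync_command => 'rsync -qa --delete . user@example.org:/var/www/wiki/',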

In my ikiwiki git repo

$ git annex init
$ git annex add foo.jpg
$ git commit -m 'add big photo'
$ git annex adjust --unlock                 # look ikiwiki, no symlinks
$ ikiwiki --setup ~/.config/ikiwiki/client  # rebuild my local copy, for review
$ ikiwiki --setup /home/bremner/.config/ikiwiki/rsync.setup --refresh  # deploy

You can see the result at photo

Planet Debian: Abhijith PA: Workstation setup


Hello,

Recently I’ve seen a lot of people sharing their home office setups. I thought, why don’t I do something similar? Not to beat FOMO, but in future when I revisit this blog, it will be lovely to see that I had some cool stuff.

There are people who went deep down in the ocean to lay cables for me to have a remote job and I am thankful to them.

Being remote, my home is my office. On my work table I have a Samsung R439 laptop. I’ve blogged about it earlier. A new addition is another 4GB of RAM, for a total of 6GB, and a 120GB SSD. I run Debian testing on it. The laptop is placed on a stand, with a Dell MS116 external mouse always connected to it. I also use an external keyboard from Fingers. The keys are very stiff, so I don’t recommend it to anyone; the only reason I took this keyboard is that it was within my budget and has a backlight, which is what I needed most.

I have a Micromax MM215FH76 21 inch monitor as my secondary display, which is stacked up on a couple of old books to match the height of the laptop stand. Everything is OK with this monitor except that it doesn’t have an HDMI port and the stand is very weak. I use i3wm, and this small script helps me manage my monitor arrangement.

# samsung r439
xrandr --output LVDS1 --primary --mode 1366x768 --pos 1920x312 --rotate normal --output DP1 --off --output HDMI1 --off --output VGA1 --mode 1920x1080 --pos 0x0 --rotate normal --output VIRTUAL1 --off
# thinkpad t430s
#xrandr --output LVDS1 --primary --mode 1600x900 --pos 1920x180 --rotate normal --output DP1 --off --output DP2 --off --output DP3 --off --output HDMI1 --off --output HDMI2 --off --output HDMI3 --off --output VGA1 --mode 1920x1080 --pos 0x0 --rotate normal --output VIRTUAL1 --off
i3-msg workspace 2, move workspace to left
i3-msg workspace 4, move workspace to left
i3-msg workspace 6, move workspace to left

I also have another 19 inch Viewsonic monitor, but it started to show some lines and unpleasant colors, so it has moved back to the shelf.

I have an Orange Pi Zero Plus 2 running Armbian, which serves as my Emby media server. I don’t own any webcam or quality headset at the moment; I have boAt and Mi headphones. My laptop’s inbuilt webcam is horrible, so for my video conferencing needs I use the Jitsi app on my mobile device.

Planet Debian: Reproducible Builds (diffoscope): diffoscope 152 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 152. This version includes the following changes:

[ Chris Lamb ]

* Bug fixes:

  - Don't require zipnote(1) to determine differences in a .zip file as we
    can use libarchive directly.

* Reporting improvements:

  - Don't emit "javap not found in path" if it is available in the path but
    it did not result in any actual difference.
  - Fix "... not available in path" messages when looking for Java
    decompilers; we were using the Python class name (eg. "<class
    'diffoscope.comparators.java.Javap'>") over the actual command we looked
    for (eg. "javap").

* Code improvements:

  - Replace some simple usages of str.format with f-strings.
  - Tidy inline imports in diffoscope.logging.
  - In the RData comparator, always explicitly return a None value in the
    failure cases as we return a non-None value in the "success" one.

[ Jean-Romain Garnier ]
* Improve output of side-by-side diffs, detecting added lines better.
  (MR: reproducible-builds/diffoscope!64)
* Allow passing file with list of arguments to ArgumentParser (eg.
  "diffoscope @args.txt"). (MR: reproducible-builds/diffoscope!62)

You can find out more by visiting the project homepage.


Cryptogram: Friday Squid Blogging: Squid Found on Provincetown Sandbar

Headline: "Dozens of squid found on Provincetown sandbar." Slow news day.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: Dirk Eddelbuettel: RcppArmadillo 0.9.900.2.0


Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 757 other packages on CRAN.

Conrad just released a new minor upstream version 9.900.2 of Armadillo which we packaged and tested as usual first as a ‘release candidate’ build and then as the release. As usual, logs from reverse-depends runs are in the rcpp-logs repo.

All changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.900.2.0 (2020-07-17)

  • Upgraded to Armadillo release 9.900.2 (Nocturnal Misbehaviour)

    • In sort(), fixes for inconsistencies between checks applied to matrix and vector expressions

    • In sort(), remove unnecessary copying when applied in-place to vectors

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Sven Hoexter: Debian and exFAT

As some might have noticed we now have Linux 5.7 in Debian/unstable and subsequently the in kernel exFAT implementation created by Samsung available. Thus we now have two exFAT implementations, the exfat fuse driver and the Linux based one. Since some comments and mails I received showed minor confusions, especially around the available tooling, it might help to clarify a bit what is required when.

Using the Samsung Linux Kernel Implementation

Probably the most common use case is that you just want to use the in kernel implementation. Easy, install a Linux 5.7 kernel package for your architecture and either remove the exfat-fuse package or make sure you've version 1.3.0-2 or later installed. Then you can just run mount /dev/sdX /mnt and everything should be fine.

Your result will look something like this:

$ mount|grep sdb
/dev/sdb on /mnt type exfat (rw,relatime,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro)

In the past this basic mount invocation utilized the mount.exfat helper, which was just a symlink to the helper which is shipped as /sbin/mount.exfat-fuse. The link was dropped from the package in 1.3.0-2. If you're running a not so standard setup, and would like to keep an older version of exfat-fuse installed you must invoke mount -i to prevent mount from loading any helper to mount the filesystem. See also man 8 mount.
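
With the same example device as above, that invocation is simply:

$ sudo mount -i /dev/sdb /mnt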

For those who care, mstone@ and myself had a brief discussion about this issue in #963752, which quickly brought me to the conclusion that it's in the best interest of a majority of users to just drop the symlink from the package.

Sticking to the Fuse Driver

If you would like to stick to the fuse driver you can of course just do it. I plan to continue to maintain all packages for the moment. Just keep the exfat-fuse package installed and use the mount.exfat-fuse helper directly. E.g.

$ sudo mount.exfat-fuse /dev/sdb /mnt
FUSE exfat 1.3.0
$ mount|grep sdb
/dev/sdb on /mnt type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)

In case this is something you would like to make permanent, I would recommend that you create yourself a mount.exfat symlink pointing at the mount.exfat-fuse helper.
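
A minimal sketch of that, assuming you place the symlink next to the helper in /sbin:

$ sudo ln -s /sbin/mount.exfat-fuse /sbin/mount.exfat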

mkfs and fsck - exfat-utils vs exfatprogs

Beside of the filesystem access itself we now also have two implementations of tooling, which support filesystem creation, fscking and label adjustment. The older one is exfat-utils by the creator of the fuse driver, which is also part of Debian since the fuse driver was packaged in 2012. New in Debian is the exfatprogs package written by the Samsung engineers. And here the situation starts to get a bit more complicated.

Both packages ship a mkfs.exfat and fsck.exfat tool, so we can not co-install them. In the end both packages declare a conflict with each other at the moment. As outlined in this thread I do not plan to overcomplicate the situation by using the alternatives system. I do feel strongly that this would just create more confusion without a real benefit. Since the tools do not have matching cli options, that could also cause additional issues and confusion.

I plan to keep that as is, at least for the bullseye release. Afterwards it's possible, depending on how the usage evolves, to drop the mkfs.exfat and fsck.exfat from exfat-utils; they are in fact again only symlinks. A pain point might be tools interfacing with the differing implementations. Currently I see only three reverse dependencies, so that should be manageable to consolidate if required.

Last but not least it might be relevant to mention that the exfat-utils package also contains a dumpexfat tool which could be helpful if you're more into forensics, or looking into other lower level analysis of an exFAT filesystem. Thus there is a bit of an interest to have those tools co-installed in some - I would say - niche cases.

buster-backports

Well if you use buster with a backports kernel you're a bit on your own. In case you want to keep the fuse driver installed, but would still like to mount, e.g. for testing, with the kernel exFAT driver, you must use mount -i. I do not plan any uploads to buster-backports. If you need a mkfs.exfat on buster, I would recommend to just use the one from exfat-utils for now. It has been good enough for the past years, should not get sour before the bullseye release, which ships exfatprogs for you.

Kudos

My sincere kudos go to:

  • Andrew Nayenko who wrote the exFAT fuse implementation which was very helpful to many people for the past years. He's a great upstream to work with.
  • Namjae Jeon and Hyunchul Lee who maintain the Linux exFAT driver and exfatprogs. They are also very responsive upstreams and easy to work with.
  • Last but not least our ftp-master who reviewed the exfatprogs package way faster than what I had anticipated looking at the current NEW backlog.

Cryptogram: Twitter Hackers May Have Bribed an Insider

Motherboard is reporting that this week's Twitter hack involved a bribed insider. Twitter has denied it.

I have been taking press calls all day about this. And while I know everyone wants to speculate about the details of the hack, we just don't know -- and probably won't for a couple of weeks.

Dave Hall: If You’re not Using YAML for CloudFormation Templates, You’re Doing it Wrong

In my last blog post, I promised a rant about using YAML for CloudFormation templates. Here it is. If you persevere to the end I’ll also show you how to convert your existing JSON based templates to YAML.

Many of the points I raise below don’t just apply to CloudFormation. They are general comments about why you should use YAML over JSON for configuration when you have a choice.

One criticism of YAML is its reliance on indentation. A lot of the code I write these days is Python, so indentation being significant is normal. Use a decent editor or IDE and this isn’t a problem. Whether you’re using JSON or YAML, you will want to validate and lint your files anyway. How else will you find that trailing comma in your JSON object?
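If you don’t already have linting wired up, a couple of commands will get you most of the way there. yamllint and cfn-lint are simply the tools I would reach for; nothing else in this post depends on them, so use whatever you prefer:

pip install yamllint cfn-lint       # or your distro's packages
yamllint template.yaml              # generic YAML syntax and style checks
cfn-lint template.yaml              # CloudFormation-aware validation
python3 -m json.tool template.json  # will choke on that trailing comma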

Now we’ve got that out of the way, let me try to convince you to use YAML.

As developers we are regularly told that we need to document our code. CloudFormation is Infrastructure as Code. If it is code, then we need to document it. That starts with the Description property at the top of the file. If you use JSON for your templates, that’s it, you have no other opportunity to document your templates. On the other hand, if you use YAML you can add inline comments. Anywhere you need a comment, drop in a hash # and your comment. Your teammates will thank you.

JSON templates don’t support multiline strings. These days many developers have 4K or ultra wide monitors, but we don’t want a string that spans the full width of our 34” screen. Text becomes harder to read once you exceed that “90ish” character limit. With JSON your multiline string becomes "[90ish-characters]\n[another-90ish-characters]\n[and-so-on]". If you opt for YAML, you can use the greater than symbol (>) and then start your multiline string like so:

Description: >
  This is the first line of my Description
  and it continues on my second line
  and I'll finish it on my third line.

As you can see, it is much easier to work with multiline strings in YAML than in JSON.

“Folded blocks” like the one above, created using the greater than symbol (>), replace new lines with spaces. This allows you to format your text in a more readable way, while still allowing a machine to use it as intended. If you want to preserve the new lines, use the pipe (|) to create a “literal block”. This is great for inline Lambda functions, where the code remains readable and maintainable.

  APIFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          import json
          import random


          def lambda_handler(event, context):
              return {"statusCode": 200, "body": json.dumps({"value": random.random()})}
      FunctionName: "GetRandom"
      Handler: "index.lambda_handler"
      MemorySize: 128
      Role: !GetAtt LambdaServiceRole.Arn
      Runtime: "python3.7"
      Timeout: 5

Both JSON and YAML require you to escape multibyte characters. That’s less of an issue with CloudFormation templates as generally you’re only using the ASCII character set.

In a YAML file you generally don’t need to quote your strings, but in JSON double quotes are used everywhere: keys, string values and so on. If your string contains a quote you need to escape it. The same goes for tabs, new lines, backslashes and so on. JSON based CloudFormation templates can be hard to read because of all the escaping. It also makes it harder to handcraft your JSON when your code is a long escaped string on a single line.

Some configuration in CloudFormation can only be expressed as JSON. Step Functions and some of the AppSync objects in CloudFormation only allow inline JSON configuration. You can still use a YAML template, and it is easier to work with these objects if you do.

The JSON only configuration needs to be inlined in your template. If you’re using JSON you have to supply this as an escaped string, rather than nested objects. If you’re using YAML you can inline it as a literal block. Both YAML and JSON templates support functions such as Sub being applied to these strings, but it is so much more readable with YAML. See this Step Function example lifted from the AWS documentation:

MyStateMachine:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    DefinitionString:
      !Sub |
        {
          "Comment": "A simple AWS Step Functions state machine that automates a call center support session.",
          "StartAt": "Open Case",
          "States": {
            "Open Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:open_case",
              "Next": "Assign Case"
            }, 
            "Assign Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:assign_case",
              "Next": "Work on Case"
            },
            "Work on Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:work_on_case",
              "Next": "Is Case Resolved"
            },
            "Is Case Resolved": {
                "Type" : "Choice",
                "Choices": [ 
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 1,
                    "Next": "Close Case"
                  },
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 0,
                    "Next": "Escalate Case"
                  }
              ]
            },
             "Close Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:close_case",
              "End": true
            },
            "Escalate Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:escalate_case",
              "Next": "Fail"
            },
            "Fail": {
              "Type": "Fail",
              "Cause": "Engage Tier 2 Support."    }   
          }
        }

If you’re feeling lazy you can use inline JSON for IAM policies that you’ve copied from elsewhere. It’s quicker than converting them to YAML.

YAML templates are smaller and more compact than the same configuration stored in a JSON based template. Smaller yet more readable is winning all round in my book.

If you’re still not convinced that you should use YAML for your CloudFormation templates, go read Amazon’s blog post from 2017 advocating the use of YAML based templates.

Amazon makes it easy to convert your existing templates from JSON to YAML. cfn-flip is a Python based AWS Labs tool for converting CloudFormation templates between JSON and YAML. I will assume you’ve already installed cfn-flip. Once you’ve done that, converting your templates with some automated cleanups is just a command away:

cfn-flip --clean template.json template.yaml

git rm the old JSON file, git add the new one, then git commit and git push your changes. Now you’re all set for your new life using YAML based CloudFormation templates.
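For completeness, that sequence is something like this (file names taken from the conversion example above):

git rm template.json
git add template.yaml
git commit -m "Convert CloudFormation template from JSON to YAML"
git push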

If you want to learn more about YAML files in general, I recommend you check out Learn X in Y Minutes’ Guide to YAML. If you want to learn more about YAML based CloudFormation templates, check out Amazon’s Guide to CloudFormation Templates.

LongNow: Long Now partners with Avenues: The World School for year-long, online program on the future of invention

“The best way to predict the future is to invent it.” – Alan Kay

The Long Now Foundation has partnered with Avenues: The World School to offer a program on the past, present, and future of innovation. A fully online program for ages 17 and above, the Avenues Mastery Year is designed to equip aspiring inventors with the ability to: 

  • Conceive original ideas and translate those ideas into inventions through design and prototyping, 
  • Communicate the impact of the invention with an effective pitch deck and business plan, 
  • Ultimately file for and receive patent pending status with the United States Patent and Trademark Office. 

Applicants select a concentration in either Making and Design or Future Sustainability.

Participants will hack, reverse engineer, and re-invent a series of world-changing technologies such as the smartphone, bioplastics, and the photovoltaic cell, all while immersing themselves in curated readings about the origins and possible trajectories of great inventions. 

The Long Now Foundation will host monthly fireside chats for participants where special guests offer feedback, spark new ideas and insights, and share advice and wisdom. Confirmed guests include Kim Polese (Long Now Board Member), Alexander Rose (Long Now Executive Director and Board Member), Samo Burja (Long Now Research Fellow), Jason Crawford (Roots of Progress), and Nick Pinkston (Volition). Additional guests from the Long Now Board and community are being finalized over the coming weeks.

The goal of Avenues Mastery Year is to equip aspiring inventors with the technical skills and long-term perspective needed to envision and invent the future. Visit Avenues Mastery Year to learn more, or get in touch directly by writing to ama@avenues.org.

Worse Than Failure: Error'd: Not Applicable

"Why yes, I have always pictured myself as not applicable," Olivia T. wrote.

 

"Hey Amazon, now I'm no doctor, but you may need to reconsider your 'Choice' of Acetaminophen as a 'stool softener'," writes Peter.

 

Ivan K. wrote, "Initially, I balked at the price of my new broadband plan, but the speed is just so good that sometimes it's so fast that the reply packets arrive before I even send a request!"

 

"I wanted to check if a site was being slow and, well, I figured it was good time to go read a book," Tero P. writes.

 

Robin L. writes, "I just can't wait to try Edge!"

 

"Yeah, one car stays in the garage, the other is out there tailgating Starman," Keith wrote.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Planet Debian: Dima Kogan: Visualizing a broken internet connection

We've all seen our ISPs crap out periodically. Things feel slow, so we go ping some server and see lost packets or slow response times. When bitching to the ISP it's useful to have evidence so you can clearly explain exactly how they are fucking up. This happened to me again last week, so I wrote a quick oneliner to do the visualization. Here it is:

( echo '# i dt'; ping 8.8.8.8 | perl -ne 'BEGIN { $|=1} next if /PING/; s/.*seq=([0-9]+).*time=([0-9]+).*/\1 \2/ && print') | tee /tmp/ping.vnl | vnl-filter --stream -p i,dt,di='diff(i)' | vnl-filter --stream --has di | feedgnuplot  --domain --stream --with 'linespoints pt 7 palette' --tuplesizeall 3 --ymin 0 --set 'cbrange [1:]' --xlabel "ping index" --ylabel "Response time (ms)" --title "ping 8.8.8.8 response times. Consecutive indices shown as colors"

You run that, and you get a realtime-updating plot of ping times and missed packets. The one-liner also logs the data to a file, so the saved data can be re-visualized later. Here's that re-visualization, showing what happens on my currently-working internet connection:

< /tmp/ping.vnl vnl-filter -p i,dt,di='diff(i)' | vnl-filter --has di | feedgnuplot  --domain --with 'linespoints pt 7 palette' --tuplesizeall 3 --ymin 0 --set 'cbrange [1:]' --xlabel "ping index" --ylabel "Response time (ms)" --title "ping 8.8.8.8 response times. Consecutive indices shown as colors" --hardcopy /tmp/isp-good.svg

good.svg

On a broken network it looks like this (I edited the log to show the kind of broken-ness I was seeing earlier):

bad.svg

The x axis is the ping index. The y axis is the response time. Missed responses are shown as gaps in the points and also as colors, for legibility.

As usual, this uses the vnlog and feedgnuplot tools, so

sudo apt install feedgnuplot vnlog

if you want to try this.

Planet Debian: Louis-Philippe Véronneau: DebConf Videoteam Sprint Report -- DebConf20@Home

DebConf20 starts in about 5 weeks, and as always, the DebConf Videoteam is working hard to make sure it'll be a success. As such, we held a sprint from July 9th to 13th to work on our new infrastructure.

A remote sprint certainly ain't as fun as an in-person one, but we nonetheless managed to enjoy ourselves. Many thanks to those who participated, namely:

  • Carl Karsten (CarlFK)
  • Ivo De Decker (ivodd)
  • Kyle Robbertze (paddatrapper)
  • Louis-Philippe Véronneau (pollo)
  • Nattie Mayer-Hutchings (nattie)
  • Nicolas Dandrimont (olasd)
  • Stefano Rivera (tumbleweed)
  • Wouter Verhelst (wouter)

We also wish to extend our thanks to Thomas Goirand and Infomaniak for providing us with virtual machines to experiment on and host the video infrastructure for DebConf20.

Advice for presenters

For DebConf20, we strongly encourage presenters to record their talks in advance and send us the resulting video. We understand this is more work, but we think it'll make for a more agreeable conference for everyone. Video conferencing is still pretty wonky and there is nothing worse than a talk ruined by a flaky internet connection or hardware failures.

As such, if you are giving a talk at DebConf this year, we are asking you to read and follow our guide on how to record your presentation.

Fear not: we are not getting rid of the Q&A period at the end of talks. Attendees will ask their questions — either on IRC or on a collaborative pad — and the Talkmeister will relay them to the speaker once the pre-recorded video has finished playing.

New infrastructure, who dis?

Organising a virtual DebConf implies migrating from our battle-tested on-premise workflow to a completely new remote one.

One of the major changes this means for us is the addition of Jitsi Meet to our infrastructure. We normally have 3 different video sources in a room: two cameras and a slides grabber. With the new online workflow, directors will be able to play pre-recorded videos as a source, will get a feed from a Jitsi room and will see the audience questions as a third source.

This might seem simple at first, but is in fact a very major change to our workflow and required a lot of work to implement.

               == On-premise ==                      ||                   == Online ==
                                                     ||
              Camera 1                               ||                 Jitsi
                 |                                   ||                   |
                 v                 ---> Frontend     ||                   v                 ---> Frontend
                                   |                 ||                                     |
    Slides -> Voctomix -> Backend -+--> Frontend     ||   Questions -> Voctomix -> Backend -+--> Frontend
                                   |                 ||                                     |
                 ^                 ---> Frontend     ||                   ^                 ---> Frontend
                 |                                   ||                   |
              Camera 2                               ||           Pre-recorded video

In our tests, playing back pre-recorded videos to voctomix worked well, but was sometimes unreliable due to inconsistent encoding settings. Presenters will thus upload their pre-recorded talks to SReview so we can make sure there aren't any obvious errors. Videos will then be re-encoded to ensure a consistent encoding and to normalise audio levels.

This process will also let us stitch the Q&As at the end of the pre-recorded videos more easily prior to publication.

Reducing the stream latency

One of the pitfalls of the streaming infrastructure we have been using since 2016 is high video latency. In a worst case scenario, remote attendees could get up to 45 seconds of latency, making participation in events like BoFs arduous.

In preparation for DebConf20, we added a new way to stream our talks: RTMP. Attendees will thus have the option of using either an HLS stream with higher latency or an RTMP stream with lower latency.

Here is a comparison of the two protocols that can help you decide:

HLS

  • Pro: Can be watched from a browser
  • Pro: Auto-selects a stream encoding
  • Pro: Single URL to remember
  • Con: Higher latency (up to 45s)

RTMP

  • Pro: Lower latency (~5s)
  • Con: Requires a dedicated video player (VLC, mpv)
  • Con: Specific URLs for each encoding setting

Live mixing from home with VoctoWeb

Since DebConf16, we have been using voctomix, a live video mixer developed by the CCC VOC. voctomix is conveniently divided in two: voctocore is the backend server while voctogui is a GTK+ UI frontend directors can use to live-mix.

Although voctogui can connect to a remote server, it was primarily designed to run either on the same machine as voctocore or on the same LAN. Trying to use voctogui from a machine at home to connect to a voctocore running in a datacenter proved unreliable, especially for high-latency and low bandwidth connections.

Inspired by the setup FOSDEM uses, we instead decided to go with a web frontend for voctocore. We initially used FOSDEM's code as a proof of concept, but quickly reimplemented it in Python, a language we are more familiar with as a team.

Compared to the FOSDEM PHP implementation, voctoweb implements A / B source selection (akin to voctogui) as well as audio control, two very useful features. In the following screen captures, you can see the old PHP UI on the left and the new shiny Python one on the right.

Left: the old PHP voctoweb. Right: the new Python3 voctoweb.

Voctoweb is still under development and is likely to change quite a bit until DebConf20. Still, the current version seems to work well enough to be used in production if you ever need to.

Python GeoIP redirector

We run multiple geographically-distributed streaming frontend servers to minimize the load on our streaming backend and to reduce overall latency. Although users can connect to the frontends directly, we typically point them to live.debconf.org and redirect connections to the nearest server.

Sadly, 6 months ago MaxMind decided to change the licence on their GeoLite2 database and left us scrambling. To fix this annoying issue, Stefano Rivera wrote a Python program that uses the new database and reworked our ansible frontend server role. Since the new database cannot be redistributed freely, you'll have to get a (free) license key from MaxMind if you want to use this role.

Ansible & CI improvements

Infrastructure as code is a living process and needs constant care to fix bugs, follow changes in the DSL and implement new features. All that to say, a large part of the sprint was spent making our ansible roles and continuous integration setup more reliable, less buggy and more featureful.

All in all, we merged 26 separate ansible-related merge requests during the sprint! As always, if you are good with ansible and wish to help, we accept merge requests on our ansible repository :)

Krebs on Security: Who’s Behind Wednesday’s Epic Twitter Hack?

Twitter was thrown into chaos on Wednesday after accounts for some of the world’s most recognizable public figures, executives and celebrities started tweeting out links to bitcoin scams. Twitter says the attack happened because someone tricked or coerced an employee into providing access to internal Twitter administrative tools. This post is an attempt to lay out some of the timeline of the attack, and point to clues about who may have been behind it.

The first public signs of the intrusion came around 3 PM EDT, when the Twitter account for the cryptocurrency exchange Binance tweeted a message saying it had partnered with “CryptoForHealth” to give back 5000 bitcoin to the community, with a link where people could donate or send money.

Minutes after that, similar tweets went out from the accounts of other cryptocurrency exchanges, and from the Twitter accounts for Democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, former President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

While it may sound ridiculous that anyone would be fooled into sending bitcoin in response to these tweets, an analysis of the BTC wallet promoted by many of the hacked Twitter profiles shows that over the past 24 hours the account has processed 383 transactions and received almost 13 bitcoin — or approximately USD $117,000.

Twitter issued a statement saying it detected “a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools. We know they used this access to take control of many highly-visible (including verified) accounts and Tweet on their behalf. We’re looking into what other malicious activity they may have conducted or information they may have accessed and will share more here as we have it.”

There are strong indications that this attack was perpetrated by individuals who’ve traditionally specialized in hijacking social media accounts via “SIM swapping,” an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

People within the SIM swapping community are obsessed with hijacking so-called “OG” social media accounts. Short for “original gangster,” OG accounts typically are those with short profile names (such as @B or @joe). Possession of these OG accounts confers a measure of status and perceived influence and wealth in SIM swapping circles, as such accounts can often fetch thousands of dollars when resold in the underground.

In the days leading up to Wednesday’s attack on Twitter, there were signs that some actors in the SIM swapping community were selling the ability to change an email address tied to any Twitter account. In a post on OGusers — a forum dedicated to account hijacking — a user named “Chaewon” advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any Twitter account.

“This is NOT a method, you will be given a full refund if for any reason you aren’t given the email/@, however if it is revered/suspended I will not be held accountable,” Chaewon wrote in their sales thread, which was titled “Pulling email for any Twitter/Taking Requests.”

Hours before any of the Twitter accounts for cryptocurrency platforms or public figures began blasting out bitcoin scams on Wednesday, the attackers appear to have focused their attention on hijacking a handful of OG accounts, including “@6”.

That Twitter account was formerly owned by Adrian Lamo — the now-deceased “homeless hacker” perhaps best known for breaking into the New York Times’s network and for reporting Chelsea Manning‘s theft of classified documents. @6 is now controlled by Lamo’s longtime friend, a security researcher and phone phreaker who asked to be identified in this story only by his Twitter nickname, “Lucky225.”

Lucky225 said that just before 2 p.m. EDT on Wednesday, he received a password reset confirmation code via Google Voice for the @6 Twitter account. Lucky said he’d previously disabled SMS notifications as a means of receiving multi-factor codes from Twitter, opting instead to have one-time codes generated by a mobile authentication app.

But because the attackers were able to change the email address tied to the @6 account and disable multi-factor authentication, the one-time authentication code was sent to both his Google Voice account and to the new email address added by the attackers.

“The way the attack worked was that within Twitter’s admin tools, apparently you can update the email address of any Twitter user, and it does this without sending any kind of notification to the user,” Lucky told KrebsOnSecurity. “So [the attackers] could avoid detection by updating the email address on the account first, and then turning off 2FA.”

Lucky said he hasn’t been able to review whether any tweets were sent from his account during the time it was hijacked because he still doesn’t have access to it (he has put together a breakdown of the entire episode at this Medium post).

But around the same time @6 was hijacked, another OG account – @B — was swiped. Someone then began tweeting out pictures of Twitter’s internal tools panel showing the @B account.

A screenshot of the hijacked OG Twitter account “@B,” shows the hijackers logged in to Twitter’s internal account tools interface.

Twitter responded by removing any tweets across its platform that included screenshots of its internal tools, and in some cases temporarily suspended the ability of those accounts to tweet further.

Another Twitter account — @shinji — also was tweeting out screenshots of Twitter’s internal tools. Minutes before Twitter terminated the @shinji account, it was seen publishing a tweet saying “follow @6,” referring to the account hijacked from Lucky225.

The account “@shinji” tweeting a screenshot of Twitter’s internal tools interface.

Cached copies of @Shinji’s tweets prior to Wednesday’s attack on Twitter are available here and here from the Internet Archive. Those caches show Shinji claims ownership of two OG accounts on Instagram — “j0e” and “dead.”

KrebsOnSecurity heard from a source who works in security at one of the largest U.S.-based mobile carriers, who said the “j0e” and “dead” Instagram accounts are tied to a notorious SIM swapper who goes by the nickname “PlugWalkJoe.” Investigators have been tracking PlugWalkJoe because he is thought to have been involved in multiple SIM swapping attacks over the years that preceded high-dollar bitcoin heists.

Archived copies of the @Shinji account on Twitter show one of Joe’s OG Instagram accounts, “Dead.”

Now look at the profile image in the other Archive.org index of the @shinji Twitter account (pictured below). It is the same image as the one included in the @Shinji screenshot above from Wednesday in which Joseph/@Shinji was tweeting out pictures of Twitter’s internal tools.

Image: Archive.org

This individual, the source said, was a key participant in a group of SIM swappers that adopted the nickname “ChucklingSquad,” and was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As Wired.com recounted, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

A tweet sent out from Twitter CEO Jack Dorsey’s account while it was hijacked shouted out to PlugWalkJoe and other Chuckling Squad members.

The mobile industry security source told KrebsOnSecurity that PlugWalkJoe in real life is a 21-year-old from Liverpool, U.K. named Joseph James O’Connor. The source said PlugWalkJoe is in Spain where he was attending a university until earlier this year. He added that PlugWalkJoe has been unable to return home on account of travel restrictions due to the COVID-19 pandemic.

The mobile industry source said PlugWalkJoe was the subject of an investigation in which a female investigator was hired to strike up a conversation with PlugWalkJoe and convince him to agree to a video chat. The source further explained that a video which they recorded of that chat showed a distinctive swimming pool in the background.

According to that same source, the pool pictured on PlugWalkJoe’s Instagram account (instagram.com/j0e) is the same one they saw in their video chat with him.

If PlugWalkJoe was in fact pivotal to this Twitter compromise, it’s perhaps fitting that he was identified in part via social engineering. Maybe we should all be grateful the perpetrators of this attack on Twitter did not set their sights on more ambitious aims, such as disrupting an election or the stock market, or attempting to start a war by issuing false, inflammatory tweets from world leaders.

Also, it seems clear that this Twitter hack could have let the attackers view the direct messages of anyone on Twitter, information that is difficult to put a price on but which nevertheless would be of great interest to a variety of parties, from nation states to corporate spies and blackmailers.

This is a fast-moving story. There were multiple people involved in the Twitter heist. Please stay tuned for further updates. KrebsOnSecurity would like to thank Unit 221B for their assistance in connecting some of the dots in this story.

Planet Debian: Enrico Zini: Build Qt5 cross-builder with raspbian sysroot: compiling with the sysroot

Whack-A-Mole machines from <https://commons.wikimedia.org/wiki/File:Whac-A-Mole_Cedar_Point.jpg>

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

Now that I have a sysroot, I try to use it to build Qt5 with QtWebEngine.

Nothing seems to work straightforwardly with Qt5's build system, and I hit an endless series of significant blockers to try to work around.


Problem in wayland code

QtWayland's source currently does not compile:

../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.cpp: In constructor ‘QtWaylandClient::QWaylandBrcmEglWindow::QWaylandBrcmEglWindow(QWindow*)’:
../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.cpp:131:67: error: no matching function for call to ‘QtWaylandClient::QWaylandWindow::QWaylandWindow(QWindow*&)’
     , m_eventQueue(wl_display_create_queue(mDisplay->wl_display()))
                                                                   ^
In file included from ../../../../include/QtWaylandClient/5.15.0/QtWaylandClient/private/qwaylandwindow_p.h:1,
                 from ../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.h:43,
                 from ../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.cpp:40:
../../../../include/QtWaylandClient/5.15.0/QtWaylandClient/private/../../../../../src/client/qwaylandwindow_p.h:97:5: note: candidate: ‘QtWaylandClient::QWaylandWindow::QWaylandWindow(QWindow*, QtWayland
Client::QWaylandDisplay*)’
     QWaylandWindow(QWindow *window, QWaylandDisplay *display);
     ^~~~~~~~~~~~~~
../../../../include/QtWaylandClient/5.15.0/QtWaylandClient/private/../../../../../src/client/qwaylandwindow_p.h:97:5: note:   candidate expects 2 arguments, 1 provided
make[5]: Leaving directory '/home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/qdoc'

I am not trying to debug here. I understand that Wayland support is not a requirement, and I'm adding -skip wayland to Qt5's configure options.

Next round.

nss not found

Qt5 embeds Chrome's sources. Chrome's sources require libnss3-dev to be available for both host and target architectures. Although I now have it installed both on the build system and in the sysroot, the pkg-config wrapper that Qt5 hooks into Chrome's sources fails to find it:

Command: /usr/bin/python2 /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py -s /home/build/sysroot/ -a arm -p /usr/bin/arm-linux-gnueabihf-pkg-config --system_libdir lib nss -v -lssl3
Returned 1.
stderr:

Package nss was not found in the pkg-config search path.
Perhaps you should add the directory containing `nss.pc'
to the PKG_CONFIG_PATH environment variable
No package 'nss' found
Traceback (most recent call last):
  File "/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py", line 248, in <module>
    sys.exit(main())
  File "/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py", line 143, in main
    prefix = GetPkgConfigPrefixToStrip(options, args)
  File "/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py", line 82, in GetPkgConfigPrefixToStrip
    "--variable=prefix"] + args, env=os.environ).decode('utf-8')
  File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['/usr/bin/arm-linux-gnueabihf-pkg-config', '--variable=prefix', 'nss']' returned non-zero exit status 1

See //build/config/linux/nss/BUILD.gn:15:3: whence it was called.
  pkg_config("system_nss_no_ssl_config") {
  ^---------------------------------------
See //crypto/BUILD.gn:218:25: which caused the file to be included.
    public_configs += [ "//build/config/linux/nss:system_nss_no_ssl_config" ]
                        ^--------------------------------------------------
Project ERROR: GN run error!

It's trying to look into $SYSROOT/usr/lib/pkgconfig, while it should be $SYSROOT/usr/lib/arm-linux-gnueabihf/pkgconfig.

I worked around this with this patch to qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py:

--- pkg-config.py.orig  2020-07-16 11:46:21.005373002 +0200
+++ pkg-config.py   2020-07-16 11:46:02.605296967 +0200
@@ -61,6 +61,7 @@

   libdir = sysroot + '/usr/' + options.system_libdir + '/pkgconfig'
   libdir += ':' + sysroot + '/usr/share/pkgconfig'
+  libdir += ':' + sysroot + '/usr/lib/arm-linux-gnueabihf/pkgconfig'
   os.environ['PKG_CONFIG_LIBDIR'] = libdir
   return libdir

Next round.

g++ 8.3.0 Internal Compiler Error

Qt5's sources embed Chrome's sources that embed the skia library sources.

One of the skia library sources, when cross-compiled to ARM with -O1 or -O2 with g++ 8.3.0, produces an Internal Compiler Error:

/usr/bin/arm-linux-gnueabihf-g++ -MMD -MF obj/skia/skcms/skcms.o.d -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -I../../3rdparty/chromium/third_party/skia/include/third_party/skcms -Igen -I../../3rdparty/chromium -w -std=c11 -mfp16-format=ieee -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized -Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O2 -fno-ident -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g0 -fvisibility=hidden -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fno-exceptions -fno-rtti --sysroot=../../../../../../sysroot/ -fvisibility-inlines-hidden -c ../../3rdparty/chromium/third_party/skia/third_party/skcms/skcms.cc -o obj/skia/skcms/skcms.o
during RTL pass: expand
In file included from ../../3rdparty/chromium/third_party/skia/third_party/skcms/skcms.cc:2053:
../../3rdparty/chromium/third_party/skia/third_party/skcms/src/Transform_inl.h: In function ‘void baseline::exec_ops(const Op*, const void**, const char*, char*, int)’:
../../3rdparty/chromium/third_party/skia/third_party/skcms/src/Transform_inl.h:766:13: internal compiler error: in convert_move, at expr.c:218
 static void exec_ops(const Op* ops, const void** args,
             ^~~~~~~~
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-8/README.Bugs> for instructions.

I reported the bug at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96206

Since this source compiles with -O0, I attempted to fix this by editing qtwebengine/src/3rdparty/chromium/build/config/compiler/BUILD.gn and replacing instances of -O1 and -O2 with -O0.
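That kind of blunt search and replace can be scripted if you prefer; a sketch with sed, assuming the optimization flags appear as quoted literals in the GN file:

sed -i 's/"-O1"/"-O0"/g; s/"-O2"/"-O0"/g' \
    qtwebengine/src/3rdparty/chromium/build/config/compiler/BUILD.gn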

Spoiler: wrong attempt. We'll see it in the next round.

Impossible constraint in asm

Qt5's sources embed Chrome's sources that embed the ffmpeg library sources. Even though ffmpeg's development libraries are present both in the host and in the target system, the build system insists on compiling and using the bundled version.

Unfortunately, using -O0 breaks the build of ffmpeg:

/usr/bin/arm-linux-gnueabihf-gcc -MMD -MF obj/third_party/ffmpeg/ffmpeg_internal/opus.o.d -DHAVE_AV_CONFIG_H -D_POSIX_C_SOURCE=200112 -D_XOPEN_SOURCE=600 -DPIC -DFFMPEG_CONFIGURATION=NULL -DCHROMIUM_NO_LOGGING -D_ISOC99_SOURCE -D_LARGEFILE_SOURCE -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DOPUS_FIXED_POINT -I../../3rdparty/chromium/third_party/ffmpeg/chromium/config/Chromium/linux/arm -I../../3rdparty/chromium/third_party/ffmpeg -I../../3rdparty/chromium/third_party/ffmpeg/compat/atomics/gcc -Igen -I../../3rdparty/chromium -I../../3rdparty/chromium/third_party/opus/src/include -fPIC -Wno-deprecated-declarations -fomit-frame-pointer -w -std=c99 -pthread -fno-math-errno -fno-signed-zeros -fno-tree-vectorize -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -g0 -fvisibility=hidden -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized -Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O0 -fno-ident -fdata-sections -ffunction-sections -std=gnu11 --sysroot=../../../../../../sysroot/ -c ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.c -o obj/third_party/ffmpeg/ffmpeg_internal/opus.o
In file included from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/intmath.h:30,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/common.h:106,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/avutil.h:296,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/audio_fifo.h:30,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.h:28,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus_celt.h:29,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.c:32:
../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.c: In function ‘ff_celt_quant_bands’:
../../3rdparty/chromium/third_party/ffmpeg/libavutil/arm/intmath.h:77:5: error: impossible constraint in ‘asm’
     __asm__ ("usat %0, %2, %1" : "=r"(x) : "r"(a), "i"(p));
     ^~~~~~~

The same source compiles when using -O2 instead of -O0.

I worked around this by undoing the previous change, and limiting -O0 to just the source that causes the Internal Compiler Error.

I edited qtwebengine/src/3rdparty/chromium/third_party/skia/third_party/skcms/skcms.cc to prepend:

#pragma GCC push_options
#pragma GCC optimize ("O0")

and append:

#pragma GCC pop_options

Next round.

Missing build-deps for i386 code

Qt5's sources embed Chrome's sources that embed the V8 library sources.

For some reason, torque, which is part of V8, wants to build some of its sources into 32 bit code with -m32, and I did not have i386 cross-compilation libraries installed:

/usr/bin/g++ -MMD -MF v8_snapshot/obj/v8/torque_base/csa-generator.o.d -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DV8_TYPED_ARRAY_MAX_SIZE_IN_HEAP=64 -DENABLE_MINOR_MC -DV8_INTL_SUPPORT -DV8_CONCURRENT_MARKING -DV8_ENABLE_LAZY_SOURCE_POSITIONS -DV8_EMBEDDED_BUILTINS -DV8_SHARED_RO_HEAP -DV8_WIN64_UNWINDING_INFO -DV8_ENABLE_REGEXP_INTERPRETER_THREADED_DISPATCH -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -DV8_TARGET_ARCH_ARM -DCAN_USE_ARMV7_INSTRUCTIONS -DCAN_USE_VFP3_INSTRUCTIONS -DUSE_EABI_HARDFLOAT=1 -DV8_HAVE_TARGET_OS -DV8_TARGET_OS_LINUX -DDISABLE_UNTRUSTED_CODE_MITIGATIONS -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -Iv8_snapshot/gen -I../../3rdparty/chromium -I../../3rdparty/chromium/v8 -Iv8_snapshot/gen/v8 -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -m32 -msse2 -mfpmath=sse -mmmx -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-unused-local-typedefs -Wno-maybe-uninitialized -Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -fno-omit-frame-pointer -g0 -fvisibility=hidden -Wno-strict-overflow -Wno-return-type -O3 -fno-ident -fdata-sections -ffunction-sections -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fvisibility-inlines-hidden -fexceptions -frtti -c ../../3rdparty/chromium/v8/src/torque/csa-generator.cc -o v8_snapshot/obj/v8/torque_base/csa-generator.o
In file included from ../../3rdparty/chromium/v8/src/torque/csa-generator.h:8,
                 from ../../3rdparty/chromium/v8/src/torque/csa-generator.cc:5:
/usr/include/c++/8/iostream:38:10: fatal error: bits/c++config.h: No such file or directory
 #include <bits/c++config.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.

New build dependencies needed:

apt install lib32stdc++-8-dev
apt install libc6-dev-i386
dpkg --add-architecture i386
apt install linux-libc-dev:i386

Next round.

OpenGL build issues

Next bump are OpenGL related compiler issues:

/usr/bin/arm-linux-gnueabihf-g++ -MMD -MF obj/QtWebEngineCore/gl_ozone_glx_qt.o.d -DCHROMIUM_VERSION=\"80.0.3987.163\" -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DQT_NO_LINKED_LIST -DQT_NO_KEYWORDS -DQT_USE_QSTRINGBUILDER -DQ_FORWARD_DECLARE_OBJC_CLASS=QT_FORWARD_DECLARE_CLASS -DQTWEBENGINECORE_VERSION_STR=\"5.15.0\" -DQTWEBENGINEPROCESS_NAME=\"QtWebEngineProcess\" -DBUILDING_CHROMIUM -DQTWEBENGINE_EMBEDDED_SWITCHES -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_QUICK_LIB -DQT_GUI_LIB -DQT_QMLMODELS_LIB -DQT_WEBCHANNEL_LIB -DQT_QML_LIB -DQT_NETWORK_LIB -DQT_POSITIONING_LIB -DQT_CORE_LIB -DQT_WEBENGINECOREHEADERS_LIB -DVK_NO_PROTOTYPES -DGL_GLEXT_PROTOTYPES -DUSE_GLX -DUSE_EGL -DGOOGLE_PROTOBUF_NO_RTTI -DGOOGLE_PROTOBUF_NO_STATIC_INITIALIZER -DHAVE_PTHREAD -DU_USING_ICU_NAMESPACE=0 -DU_ENABLE_DYLOAD=0 -DUSE_CHROMIUM_ICU=1 -DU_STATIC_IMPLEMENTATION -DICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE -DUCHAR_TYPE=uint16_t -DWEBRTC_NON_STATIC_TRACE_EVENT_HANDLERS=0 -DWEBRTC_CHROMIUM_BUILD -DWEBRTC_POSIX -DWEBRTC_LINUX -DABSL_ALLOCATOR_NOTHROW=1 -DWEBRTC_USE_BUILTIN_ISAC_FIX=1 -DWEBRTC_USE_BUILTIN_ISAC_FLOAT=0 -DHAVE_SCTP -DNO_MAIN_THREAD_WRAPPING -DSK_HAS_PNG_LIBRARY -DSK_HAS_WEBP_LIBRARY -DSK_USER_CONFIG_HEADER=\"../../skia/config/SkUserConfig.h\" -DSK_GL -DSK_HAS_JPEG_LIBRARY -DSK_USE_LIBGIFCODEC -DSK_VULKAN_HEADER=\"../../skia/config/SkVulkanConfig.h\" -DSK_VULKAN=1 -DSK_SUPPORT_GPU=1 -DSK_GPU_WORKAROUNDS_HEADER=\"gpu/config/gpu_driver_bug_workaround_autogen.h\" -DVK_NO_PROTOTYPES -DLEVELDB_PLATFORM_CHROMIUM=1 -DLEVELDB_PLATFORM_CHROMIUM=1 -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -I../../3rdparty/chromium/skia/config -I../../3rdparty/chromium/third_party -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/skia/include/core -Igen -I../../3rdparty/chromium -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/api -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include 
-I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include/QtWebChannel -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtNetwork -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include/QtPositioning -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0/QtWebEngineCore -I.moc -I/home/build/sysroot/opt/vc/include -I/home/build/sysroot/opt/vc/include/interface/vcos/pthreads -I/home/build/sysroot/opt/vc/include/interface/vmcs_host/linux -Igen/.moc -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/mkspecs/devices/linux-rasp-pi2-g++ -Igen -Igen -I../../3rdparty/chromium/third_party/libyuv/include -Igen -I../../3rdparty/chromium/third_party/jsoncpp/source/include -I../../3rdparty/chromium/third_party/jsoncpp/generated -Igen -Igen -I../../3rdparty/chromium/third_party/khronos -I../../3rdparty/chromium/gpu -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/perfetto/include -Igen/third_party/perfetto/build_config -Igen -Igen -Igen/third_party/dawn/src/include -I../../3rdparty/chromium/third_party/dawn/src/include -Igen -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/protobuf/src -Igen/protoc_out -I../../3rdparty/chromium/third_party/protobuf/src -I../../3rdparty/chromium/third_party/ced/src -I../../3rdparty/chromium/third_party/icu/source/common -I../../3rdparty/chromium/third_party/icu/source/i18n -I../../3rdparty/chromium/third_party/webrtc_overrides -I../../3rdparty/chromium/third_party/webrtc -Igen/third_party/webrtc -I../../3rdparty/chromium/third_party/abseil-cpp -I../../3rdparty/chromium/third_party/skia -I../../3rdparty/chromium/third_party/libgifcodec -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/skia/third_party/vulkanmemoryallocator -I../../3rdparty/chromium/third_party/vulkan/include -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -I../../3rdparty/chromium/third_party/crashpad/crashpad -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_mac -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/linux -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_win -I../../3rdparty/chromium/third_party/libwebm/source -I../../3rdparty/chromium/third_party/leveldatabase -I../../3rdparty/chromium/third_party/leveldatabase/src -I../../3rdparty/chromium/third_party/leveldatabase/src/include -I../../3rdparty/chromium/v8/include -Igen/v8/include -I../../3rdparty/chromium/third_party/mesa_headers -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized 
-Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O2 -fno-ident -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g0 -fvisibility=hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -fno-exceptions -Wall -Wextra -D_REENTRANT -I/home/build/sysroot/usr/include/nss -I/home/build/sysroot/usr/include/nspr -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fno-exceptions -fno-rtti --sysroot=../../../../../../sysroot/ -fvisibility-inlines-hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -std=gnu++1y -fno-exceptions -Wall -Wextra -D_REENTRANT -Wno-unused-parameter -Wno-unused-variable -Wno-deprecated-declarations -c /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/ozone/gl_ozone_glx_qt.cpp -o obj/QtWebEngineCore/gl_ozone_glx_qt.o
In file included from ../../3rdparty/chromium/ui/gl/gl_bindings.h:497,
                 from ../../3rdparty/chromium/ui/gl/gl_gl_api_implementation.h:12,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/ozone/gl_ozone_glx_qt.cpp:49:
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:227:5: error: ‘EGLSetBlobFuncANDROID’ has not been declared
     EGLSetBlobFuncANDROID set,
     ^~~~~~~~~~~~~~~~~~~~~
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:228:5: error: ‘EGLGetBlobFuncANDROID’ has not been declared
     EGLGetBlobFuncANDROID get);
     ^~~~~~~~~~~~~~~~~~~~~
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:571:46: error: ‘EGLSetBlobFuncANDROID’ has not been declared
                                              EGLSetBlobFuncANDROID set,
                                              ^~~~~~~~~~~~~~~~~~~~~
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:572:46: error: ‘EGLGetBlobFuncANDROID’ has not been declared
                                              EGLGetBlobFuncANDROID get) = 0;
                                              ^~~~~~~~~~~~~~~~~~~~~
cc1plus: warning: unrecognized command line option ‘-Wno-deprecated-copy’
/usr/bin/arm-linux-gnueabihf-g++ -MMD -MF obj/QtWebEngineCore/display_gl_output_surface.o.d -DCHROMIUM_VERSION=\"80.0.3987.163\" -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DQT_NO_LINKED_LIST -DQT_NO_KEYWORDS -DQT_USE_QSTRINGBUILDER -DQ_FORWARD_DECLARE_OBJC_CLASS=QT_FORWARD_DECLARE_CLASS -DQTWEBENGINECORE_VERSION_STR=\"5.15.0\" -DQTWEBENGINEPROCESS_NAME=\"QtWebEngineProcess\" -DBUILDING_CHROMIUM -DQTWEBENGINE_EMBEDDED_SWITCHES -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_QUICK_LIB -DQT_GUI_LIB -DQT_QMLMODELS_LIB -DQT_WEBCHANNEL_LIB -DQT_QML_LIB -DQT_NETWORK_LIB -DQT_POSITIONING_LIB -DQT_CORE_LIB -DQT_WEBENGINECOREHEADERS_LIB -DVK_NO_PROTOTYPES -DGL_GLEXT_PROTOTYPES -DUSE_GLX -DUSE_EGL -DGOOGLE_PROTOBUF_NO_RTTI -DGOOGLE_PROTOBUF_NO_STATIC_INITIALIZER -DHAVE_PTHREAD -DU_USING_ICU_NAMESPACE=0 -DU_ENABLE_DYLOAD=0 -DUSE_CHROMIUM_ICU=1 -DU_STATIC_IMPLEMENTATION -DICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE -DUCHAR_TYPE=uint16_t -DWEBRTC_NON_STATIC_TRACE_EVENT_HANDLERS=0 -DWEBRTC_CHROMIUM_BUILD -DWEBRTC_POSIX -DWEBRTC_LINUX -DABSL_ALLOCATOR_NOTHROW=1 -DWEBRTC_USE_BUILTIN_ISAC_FIX=1 -DWEBRTC_USE_BUILTIN_ISAC_FLOAT=0 -DHAVE_SCTP -DNO_MAIN_THREAD_WRAPPING -DSK_HAS_PNG_LIBRARY -DSK_HAS_WEBP_LIBRARY -DSK_USER_CONFIG_HEADER=\"../../skia/config/SkUserConfig.h\" -DSK_GL -DSK_HAS_JPEG_LIBRARY -DSK_USE_LIBGIFCODEC -DSK_VULKAN_HEADER=\"../../skia/config/SkVulkanConfig.h\" -DSK_VULKAN=1 -DSK_SUPPORT_GPU=1 -DSK_GPU_WORKAROUNDS_HEADER=\"gpu/config/gpu_driver_bug_workaround_autogen.h\" -DVK_NO_PROTOTYPES -DLEVELDB_PLATFORM_CHROMIUM=1 -DLEVELDB_PLATFORM_CHROMIUM=1 -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -I../../3rdparty/chromium/skia/config -I../../3rdparty/chromium/third_party -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/skia/include/core -Igen -I../../3rdparty/chromium -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/api -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include 
-I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include/QtWebChannel -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtNetwork -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include/QtPositioning -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0/QtWebEngineCore -I.moc -I/home/build/sysroot/opt/vc/include -I/home/build/sysroot/opt/vc/include/interface/vcos/pthreads -I/home/build/sysroot/opt/vc/include/interface/vmcs_host/linux -Igen/.moc -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/mkspecs/devices/linux-rasp-pi2-g++ -Igen -Igen -I../../3rdparty/chromium/third_party/libyuv/include -Igen -I../../3rdparty/chromium/third_party/jsoncpp/source/include -I../../3rdparty/chromium/third_party/jsoncpp/generated -Igen -Igen -I../../3rdparty/chromium/third_party/khronos -I../../3rdparty/chromium/gpu -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/perfetto/include -Igen/third_party/perfetto/build_config -Igen -Igen -Igen/third_party/dawn/src/include -I../../3rdparty/chromium/third_party/dawn/src/include -Igen -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/protobuf/src -Igen/protoc_out -I../../3rdparty/chromium/third_party/protobuf/src -I../../3rdparty/chromium/third_party/ced/src -I../../3rdparty/chromium/third_party/icu/source/common -I../../3rdparty/chromium/third_party/icu/source/i18n -I../../3rdparty/chromium/third_party/webrtc_overrides -I../../3rdparty/chromium/third_party/webrtc -Igen/third_party/webrtc -I../../3rdparty/chromium/third_party/abseil-cpp -I../../3rdparty/chromium/third_party/skia -I../../3rdparty/chromium/third_party/libgifcodec -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/skia/third_party/vulkanmemoryallocator -I../../3rdparty/chromium/third_party/vulkan/include -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -I../../3rdparty/chromium/third_party/crashpad/crashpad -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_mac -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/linux -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_win -I../../3rdparty/chromium/third_party/libwebm/source -I../../3rdparty/chromium/third_party/leveldatabase -I../../3rdparty/chromium/third_party/leveldatabase/src -I../../3rdparty/chromium/third_party/leveldatabase/src/include -I../../3rdparty/chromium/v8/include -Igen/v8/include -I../../3rdparty/chromium/third_party/mesa_headers -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized 
-Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O2 -fno-ident -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g0 -fvisibility=hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -fno-exceptions -Wall -Wextra -D_REENTRANT -I/home/build/sysroot/usr/include/nss -I/home/build/sysroot/usr/include/nspr -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fno-exceptions -fno-rtti --sysroot=../../../../../../sysroot/ -fvisibility-inlines-hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -std=gnu++1y -fno-exceptions -Wall -Wextra -D_REENTRANT -Wno-unused-parameter -Wno-unused-variable -Wno-deprecated-declarations -c /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp -o obj/QtWebEngineCore/display_gl_output_surface.o
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_interface.h:8,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/client_transfer_cache.h:15,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:28,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
/home/build/sysroot/opt/vc/include/GLES2/gl2.h:78: warning: "GL_FALSE" redefined
 #define GL_FALSE                          (GLboolean)0

In file included from ../../3rdparty/chromium/gpu/command_buffer/client/client_context_state.h:10,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:27,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
../../3rdparty/chromium/third_party/khronos/GLES3/gl3.h:85: note: this is the location of the previous definition
 #define GL_FALSE                          0

In file included from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_interface.h:8,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/client_transfer_cache.h:15,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:28,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
/home/build/sysroot/opt/vc/include/GLES2/gl2.h:79: warning: "GL_TRUE" redefined
 #define GL_TRUE                           (GLboolean)1

In file included from ../../3rdparty/chromium/gpu/command_buffer/client/client_context_state.h:10,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:27,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
../../3rdparty/chromium/third_party/khronos/GLES3/gl3.h:86: note: this is the location of the previous definition
 #define GL_TRUE                           1

In file included from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_interface.h:8,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/client_transfer_cache.h:15,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:28,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
/home/build/sysroot/opt/vc/include/GLES2/gl2.h:600:37: error: conflicting declaration of C function ‘void glShaderSource(GLuint, GLsizei, const GLchar**, const GLint*)’
 GL_APICALL void         GL_APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar** string, const GLint* length);
                                     ^~~~~~~~~~~~~~
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/client_context_state.h:10,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:27,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
../../3rdparty/chromium/third_party/khronos/GLES3/gl3.h:624:29: note: previous declaration ‘void glShaderSource(GLuint, GLsizei, const GLchar* const*, const GLint*)’
 GL_APICALL void GL_APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar *const*string, const GLint *length);
                             ^~~~~~~~~~~~~~
cc1plus: warning: unrecognized command line option ‘-Wno-deprecated-copy’

I'm out of the allocated hour budget, and I'll stop here for now.

Building Qt5 has provided some of the most nightmarish working hours of my entire professional life. If my daily job required dealing with this kind of insanity, I would seriously invest in a change of career.

Planet Debian: Enrico Zini: Build Qt5 cross-builder with raspbian sysroot: building the sysroot

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

As an attempt to get webview to compile, I'm reattempting to build a Qt5 cross-compiling environment using a raspbian sysroot, instead of having dependencies for both arm and amd64 installed in the build system.

Using dependencies installed in a straightforward way in the build system has failed because of issues like #963136, where some of the build dependencies are needed for both architectures, but the corresponding Debian -dev packages are not yet coinstallable.

This is something that causes many people much pain.


Start from a clean sysroot

Looking for a Raspbian image, I found out that it has been renamed to "Raspberry Pi OS". I realised that software names are like underwear: as soon as they become well used, they need to be changed.

I downloaded Raspberry Pi OS Lite from https://www.raspberrypi.org/downloads/raspberry-pi-os/ to start with something minimal. It came out as something like 1.5G uncompressed, which wasn't as minimal as I would have hoped, but that'll be what I'll have to work with.

Adding build dependencies

I have acquired significant experience manipulating Raspberry Pi OS images from working with Himblick.

This time I'm working with the disk image directly, instead of an SD card, since I will be needing it as a sysroot during the build, and I won't need to actually boot it on real hardware.

The trick is to work with kpartx to make the partitions in the image available as loopback block devices.

I have extracted a lot of relevant code from Himblick into a Python library I called Transilience.

The result is this provisioning script, which is able to take a Raspberry Pi OS image, enlarge it, and install Debian packages into it.

I find this script pretty cool, also in the way it embeds quite a bit of experience gathered in the field. It can also be integrated into a fully automated setup and provisioning system.

The next step will be to use the result as a sysroot to build Qt.

Worse Than Failure: CodeSOD: Because of the Implication

Even when you’re using TypeScript, you’re still bound by JavaScript’s type system. You’re also stuck with its object system, which means that each object is really just a dict, and there’s no guarantee that any object has any given key at runtime.

Madison sends us some TypeScript code that is, perhaps not strictly bad, in and of itself, though it certainly contains some badness. It is more of a symptom. It implies a WTF.

    private _filterEmptyValues(value: any): any {
        const filteredValue = {};
        Object.keys(value)
            .filter(key => {
                const v = value[key];

                if (v === null) {
                    return false;
                }
                if (v.von !== undefined || v.bis !== undefined) {
                    return (v.von !== null && v.von !== 'undefined' && v.von !== '') ||
                        (v.bis !== null && v.bis !== 'undefined' && v.bis !== '');
                }
                return (v !== 'undefined' && v !== '');

            }).forEach(key => {
            filteredValue[key] = value[key];
        });
        return filteredValue;
    }

At a guess, this code is meant to be used as part of prepping objects for being part of a request: clean out unused keys before sending or storing them. And as a core methodology, it’s not wrong, and it’s pretty similar to your standard StackOverflow solution to the problem. It’s just… forcing me to ask some questions.

Let’s trace through it. We start by doing an Object.keys to get all the fields on the object. We then filter to remove the “empty” ones.

First, if the value is null, that’s empty. That makes sense.

Then, if the value is an object which contains a von or bis property, we'll do some more checks. This is a weird definition of "empty", but fine. We'll check that at least one of them is non-null, not an empty string, and not… 'undefined'.

Uh oh.

We then do a similar check on the value itself, to ensure it’s not an empty string, and not 'undefined'.

What this is telling me is that somewhere in processing, sometimes, the actual string “undefined” can be stored, and it’s meant to be treated as JavaScript’s type undefined. That probably shouldn’t be happening, and implies a WTF somewhere else.

Similarly, the von and bis check has to raise a few eyebrows. If an object contains these fields, at least one of them must contain a value to pass this check. Why? I have no idea.

In the end, this code isn’t the WTF itself, it’s all the questions that it raises that tell me the shape of the WTF. It’s like looking at a black hole: I can’t see the object itself, I can only see the effect it has on the space around it.


Planet Debian: Russell Coker: Windows 10 on Debian under KVM

Here are some things that you need to do to get Windows 10 running on a Debian host under KVM.

UEFI Booting

UEFI is big and complex, but most of what it does isn’t needed at all. If all you want to do is boot from an image of a disk with a GPT partition table then you just install the package ovmf and add something like the following to your KVM start script:

UEFI"-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"

Note that some of the documentation on this doesn’t have the OVMF_VARS.fd file set to readonly. Allowing writes to that file means that the VM boot process (and maybe later) can change EFI variables that affect later boots and other VMs if they all share the same file. For a basic boot you don’t need to change variables so you want it read-only. Also having it read-only is necessary if you want to run KVM as non-root.

As an experiment I tried booting without the OVMF_VARS.fd file. It didn't boot, and even after configuring it to use the OVMF_VARS.fd file again, Windows gave a boot error about the “boot configuration data file” that required booting from recovery media. Apparently configuration mistakes with EFI can mess up the Windows installation, so be careful and back up the Windows installation regularly!

Linux can boot from EFI but you generally don’t want to unless the boot device is larger than 2TB. It’s relatively easy to convert a Linux installation on a GPT disk to a virtual image on a DOS partition table disk or on block devices without partition tables and that gives a faster boot. If the same person runs the host hardware and the VMs then the best choice for Linux is to have no partition tables just one filesystem per block device (which makes resizing much easier) and have the kernel passed as a parameter to kvm. So booting a VM from EFI is probably only useful for booting Windows VMs and for Linux boot loader development and testing.

As an aside, the Debian Wiki page about Secure Boot on a VM [4] was useful for this. It’s unfortunate that it and so much of the documentation about UEFI is about secure boot which isn’t so useful if you just want to boot a system without regard to the secure boot features.

Emulated IDE Disks

Debian kernels (and probably kernels from many other distributions) are compiled with the paravirtualised storage device drivers. Windows by default doesn’t support such devices so you need to emulate an IDE/SATA disk so you can boot Windows and install the paravirtualised storage driver. The following configuration snippet has a commented line for paravirtualised IO (which is fast) and an uncommented line for a virtual IDE/SATA disk that will allow an unmodified Windows 10 installation to boot.

#DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"
DRIVE="-drive id=disk,format=raw,file=/home/kvm/windows10,if=none -device ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0"

Spice Video

Spice is an alternative to VNC. Here is the main web site for Spice [1]. Spice has many features that could be really useful for some people, like audio, sharing USB devices from the client, and streaming video support. I don’t have a need for those features right now but it’s handy to have options. My main reason for choosing Spice over VNC is that the mouse cursor in ssvnc doesn’t follow the actual mouse and can be difficult or impossible to click on items near edges of the screen.

The following configuration will make the QEMU code listen with SSL on port 1234 on all IPv4 addresses. Note that this exposes the Spice password to anyone who can run ps on the KVM server; I’ve filed Debian bug #965061 requesting the option of a password file to address this. Also note that the “qxl” virtual video hardware is VGA compatible and can be expected to work with OS images that haven’t been modified for virtualisation, but it works better with special video drivers.

KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl

To connect to the Spice server I installed the spice-client-gtk package in Debian and ran the following command:

spicy -h kvm.example.com -s 1234 -w xxxxxxxx

Note that this exposes the Spice password to anyone who can run ps on the system used as a client for Spice; I’ve filed Debian bug #965060 requesting the option of a password file to address this.

This configuration with an unmodified Windows 10 image only supported 800*600 resolution VGA display.

Networking

To setup bridged networking as non-root you need to do something like the following as root:

chgrp kvm /usr/lib/qemu/qemu-bridge-helper
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
mkdir -p /etc/qemu
echo "allow all" > /etc/qemu/bridge.conf
chgrp kvm /etc/qemu/bridge.conf
chmod 640 /etc/qemu/bridge.conf

Windows 10 supports the emulated Intel E1000 network card. The following style of configuration sets up networking on a bridge named br0 with an emulated E1000 card. MAC addresses that have a 1 in the second least significant bit of the first octet are “locally administered” (like IPv4 addresses starting with “10.”); see the Wikipedia page about MAC addresses for details.

The following is an example of network configuration where $ID is an ID number for the virtual machine. So far I haven’t come close to 256 VMs on one network so I’ve only needed one octet.

NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"

Final KVM Settings

KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
SPICE="-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl"

UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"

DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"

NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"

kvm -m 4000 -smp 2 $SPICE $UEFI $DRIVE $NET

Windows Settings

The Spice Download page has a link for “spice-guest-tools” that has the QXL video driver among other things [2]. This seems to be needed for resolutions greater than 800*600.

The Virt-Manager Download page has a link for “virt-viewer” which is the Spice client for Windows systems [3], they have MSI files for both i386 and AMD64 Windows.

It’s probably a good idea to set the display and system to never go to sleep (I haven’t tested what happens if you don’t do that, but there’s no benefit in sleeping). Before uploading an image I disabled the pagefile and set the partition to the minimum size so I had less data to upload.

Problems

Here are some things I haven’t solved yet.

The aSpice Android client for the Spice protocol fails to connect with the QEMU code at the server giving the following message on stderr: “error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:../ssl/record/rec_layer_s3.c:1544:SSL alert number 48“.

Spice is supposed to support dynamic changes to screen resolution on the VM to match the window size at the client, but this doesn’t work for me, not even with the Red Hat QXL drivers installed.

The Windows Spice client doesn’t seem to support TLS; I guess running some sort of proxy for TLS would work but I haven’t tried that yet.


Planet Debian: Evgeni Golov: Scanning with a Brother MFC-L2720DW on Linux without any binary blobs

Back in 2015, I got a Brother MFC-L2720DW for the casual "I need to print those two pages" and "I need to scan these receipts" at home (and home-office ;)). It's a rather cheap (I paid less than 200€ in 2015) monochrome laser printer, scanner and fax with a (well, two, wired and wireless) network interface. In those five years I've never used the fax or WiFi functions, but have printed and scanned a few pages.

Brother offers Linux drivers, but those are binary blobs which I never really liked to run.

The printer part works just fine with a "Generic PCL 6/PCL XL" driver in CUPS or even "driverless" via AirPrint on Linux. You can also feed it plain PostScript, but I found it rather slow compared to PCL. On recent Androids it works using the built-in printer service, or Mopria Printer Service for older ones - I used to joke "why would you need a printer on your phone?!", but found it quite useful after a few tries.

However, for the scanner part I had to use Brother's brscan4 driver on Linux and their iPrint&Scan app on Android - Mopria Scan wouldn't support it.

Until, last Friday, I saw a NEW package being uploaded to Debian: sane-airscan. And yes, monitoring the Debian NEW queue via Twitter is totally legit!

sane-airscan is an implementation of Apple's AirScan (eSCL) and Microsoft's WSD/WS-Scan protocols for SANE. I had never heard of those before - only about AirPrint - but thankfully this does not mean nobody has reverse-engineered them and created something that works beautifully on Linux. As of today there are no packages in the official Fedora repositories and the Debian ones are still in NEW; however, the upstream documentation refers to an openSUSE OBS repository that works like a charm in the meantime (on Fedora 32).

The only drawback I've seen so far: the scanner only works in "Color" mode and there is no way to scan in "Grayscale", making scanning a tad slower. This has been reported upstream and might or might not be fixable, as it seems the device does not announce any mode besides "Color".

Interestingly, SANE has had its own eSCL backend since 1.0.29, but it's disabled in Fedora in favor of sane-airscan even though the latter isn't available in Fedora yet. However, it might not even need separate packaging, as SANE upstream is planning to integrate it into sane-backends directly.

Planet Debian: Utkarsh Gupta: GSoC Phase 2

Hello,

In early May, I got selected as a Google Summer of Code student for Debian to work on a project to write a linter (an extension to RuboCop).
This tool is mostly meant to help the Debian Ruby team. And that is the best part: I love working in/for/with the Ruby team!
(I’ve been an active part of the team for 19 months now :))

More details about the project can be found here, on the wiki.
And also, I have got the best mentors I could’ve possibly asked for: Antonio Terceiro and David Rodríguez 💖

So, the program began on 1st June and I’ve been working since then. I log my daily updates at gsocwithutkarsh2102.tk.
The blog for the first part of phase 1 can be found here and that of second part of phase 1 can be found here.

Whilst the daily updates are available at the above site^, I’ll break down the important parts here:

  • After the, what I’d like to call, successful Phase 1, whilst using this extension on GitLab’s omnibus-gitlab repository, I discovered a bug (as reported via issue #5), which basically threw the following error:

    An error occurred while Packaging/GemspecGit cop was inspecting /home/utkarsh/github/omnibus-gitlab/files/gitlab-cookbooks/gitlab/recipes/bootstrap_disable.rb.
    To see the complete backtrace run rubocop -d.

    …which was not good.

  • This bug was a false-negative and the fix turned out to be simple, which was raised via PR #6.
    Since the extension was now working fine against omnibus-gitlab (which is…quite huge!), I rolled out a v0.1.1 release!

  • Next, I implemented the 2nd cop, fixing require and require_relative calls which map from spec(s)/test(s) to the lib directory.
    This was raised via a WIP PR #4.

  • We had our meeting where we discussed that it would make more sense to split these cops into two parts, also thanks to David’s elaborate comment on the situation.
    With this, we decided to split the cops into two.

  • Then I worked on:

  • At this point, we hit yet another obstacle. Correctly determining the root directory of a project at runtime. This is…tricky. Not impossible (of course) but tricky.
    So my “homework” was to find such a thing that does that by default.

  • For the next 4 days, I tried to find something that could do this bit. But unfortunately, I couldn’t.
    So I wrote one myself. I wrote get_root, which solves this problem.
    The only thing one needs to take care of while using get_root is that it has to be a git repository. That’s a…trade-off.

  • In the next meeting, we discussed that this is a bit of an overkill. get_root should’ve been written as a helper function and not as another library, which David and Antonio pointed out correctly. But it was already too late :P
    I had already made a v 0.1.0 release.
    (So in case you’re a Rubyist and want to find the root directory of a git repository, consider using this :P)
    Besides, Antonio also pointed out that it should take another argument indicating the point from where the file is being inspected. Hm, unsure how to do that…

  • Meanwhile, as I exported get_root as a helper function, David solved the whole problem altogether via RuboCop’s PR #8314.
    This introduced a new (public) method #project_root which superseded get_root, and one can now get the root directory via RuboCop::ConfigLoader.project_root. Ain’t he amazing!? \o/

  • This also means I reverted those changes altogether and tweaked my WIP PR to incorporate these changes via commit @b06a8f86.
    However, the specs fail. But that doesn’t mean that the changes aren’t correct :P
    They are pretty much right and working fine. To make sure of that, I locally installed this library and used it on other projects to confirm that it indeed is working alright, as it should! \o/

  • And here I am on the 15th day :)

Well, the best part yet?
rubocop-packaging is being used by batalert, arbre, rspec-stubbed_env, rspec-pending_for, ISO8601, get_root (😛), gir_ffi, linter, and cucumber-rails.

Whilst it has been a lot of fun so far, my plate has started to almost…overflow. It seems that I’ve got a lot of things to work on (and already things that are due!).
From my major project, college stuff to my GSoC project, Debian (E)LTS, and a lot more.

Thanks to Antonio for helping me out with other things (which maps back to his sayings in Paris \o/).


Until next time.
:wq for today.

Cryptogram: NSA on Securing VPNs

The NSA's Central Security Service -- that's the part that's supposed to work on defense -- has released two documents (a full and an abridged version) on securing virtual private networks. Some of it is basic, but it contains good information.

Maintaining a secure VPN tunnel can be complex and requires regular maintenance. To maintain a secure VPN, network administrators should perform the following tasks on a regular basis:

  • Reduce the VPN gateway attack surface
  • Verify that cryptographic algorithms are Committee on National Security Systems Policy (CNSSP) 15-compliant
  • Avoid using default VPN settings
  • Remove unused or non-compliant cryptography suites
  • Apply vendor-provided updates (i.e. patches) for VPN gateways and clients

Planet Debian: Jonathan Dowland: Lockdown music

Last Christmas, to make room for a tree, I disassembled my hi-fi unit and (temporarily, I thought) lugged my hi-fi and records up to the study.

Power, Corruption & Lies

I had been thinking about expanding the amount of storage I had for hifi and vinyl, perhaps moving from 2x2 storage cubes to 2x3, although Ikea don't make a 2x3 version of their stalwart vinyl storage line, the Kallax. I had begun exploring other options, both at Ikea and other places. Meanwhile, I re-purposed my old Expedit unit as storage for my daughter's Sylvanian Families.

Under Lockdown, I've spent a lot more time in my study, and so I've set up the hifi there. It turns out I have a lot more opportunity to enjoy the records up here, during work, and I've begun to explore some things which I haven't listened to in a long time, or possibly ever. I thought I'd start keeping track of some of them.

Power, Corruption and Lies is not something rarely listened to. It's steadily become my favourite New Order album. When I came across this copy (Factory, 1983), it was in pristine condition, but it now bears witness to my (mostly careful) use. There's now a scratch somewhere towards the end of the first track Age of Consent which causes my turntable to loop. By some good fortune the looping point is perfectly aligned to a bar. I don't always notice it straight away. This record rarely makes it back down from the turntable to where it's supposed to live.

Worse Than Failure: CodeSOD: Dates by the Dozen

Before our regularly scheduled programming, Code & Supply, a developer community group we've collaborated with in the past, is running a salary survey, to gauge the state of the industry. More responses are always helpful, so I encourage you to take a few minutes and pitch in.

Cid was recently combing through an inherited Java codebase, and it predates Java 8. That’s a fancy way of saying “there were no good date builtins, just a mess of cruddy APIs”. That’s not to say that there weren’t date builtins prior to Java 8; they were just bad.

Bad, but better than this. Cid sent along a lot of code, and instead of going through it all, let’s get to some of the “highlights”. Much of this is stuff we’ve seen variations on before, but it has been combined in ways that really elevate the badness. There are dozens of these methods, of which we are only going to look at a sample.

Let’s start with the String getLocalDate() method, which attempts to construct a timestamp in the form yyyyMMdd. As you can already predict, it does a bunch of string munging to get there, with blocks like:

switch (calendar.get(Calendar.MONTH)){
      case Calendar.JANUARY:
        sb.append("01");
        break;
      case Calendar.FEBRUARY:
        sb.append("02");
        break;
      …
}

Plus, we get the added bonus of one of those delightful “how do I pad an integer out to two digits?” blocks:

if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
  sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
}
else {
  sb.append(calendar.get(Calendar.DAY_OF_MONTH));
}
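
For the record - and this is an aside, not part of the submitted code - the padding dance is a library one-liner even in old Java, since String.format zero-pads for you:

sb.append(String.format("%02d", calendar.get(Calendar.DAY_OF_MONTH)));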

Elsewhere, they expect a timestamp to be in the form yyyyMMddHHmmssZ, so they wrote a handy void checkTimestamp method. Wait, void you say? Shouldn’t it be boolean?

Well here’s the full signature:

public static void checkTimestamp(String timestamp, String name)
  throws IOException

Why return a boolean when you can throw an exception on bad input? Unless the bad input is a null, in which case:

if (timestamp == null) {
  return;
}

Nulls are valid timestamps, which is useful to know. We next get a lovely block of checking each character to ensure that they’re digits, and a second check to ensure that the last is the letter Z, which turns out to be double work, since the very next step is:

int year = Integer.parseInt(timestamp.substring(0,4));
int month = Integer.parseInt(timestamp.substring(4,6));
int day = Integer.parseInt(timestamp.substring(6,8));
int hour = Integer.parseInt(timestamp.substring(8,10));
int minute = Integer.parseInt(timestamp.substring(10,12));
int second = Integer.parseInt(timestamp.substring(12,14));

Followed by a validation check for day and month:

if (day < 1) {
  throw new IOException(msg);
}
if ((month < 1) || (month > 12)) {
  throw new IOException(msg);
}
if (month == 2) {
  if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
    if (day > 29) {
      throw new IOException(msg);
    }
  }
  else {
    if (day > 28) {
      throw new IOException(msg);
  }
  }
}
if (month == 1 || month == 3 || month == 5 || month == 7
|| month == 8 || month == 10 || month == 12) {
  if (day > 31) {
    throw new IOException(msg);
  }
}
if (month == 4 || month == 6 || month == 9 || month == 11) {
  if (day > 30) {
    throw new IOException(msg);
  }
"

The upshot is they at least got the logic right.

What’s fun about this is that the original developer never once considered “maybe I need an intermediate data structure besides a string to manipulate dates”. Nope, we’re just gonna munge that string all day. And that is our entire plan for all date operations, which brings us to the really exciting part, where this transcends from “just regular old bad date code” into full-on WTF territory.

Would you like to see how they handle adding units of time? Like days?

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

The only thing I can say is that here they realized that “hey, wait, maybe I can modularize” and figured out how to stuff their “how many days are in a month” logic into getDaysOfMonth, which you can see invoked above.

Beyond that, they manually handle carrying, and never once pause to think, “hey, maybe there’s a better way”.

And speaking of repeating code, guess what- there’s also a public static String additionOfSeconds(String timestamp, int intervall) method, too.

There are dozens of similar methods; Cid has only provided us a sample. Cid adds:

This particular developer didn’t trust in too-fine modularization and code reuse (DRY!). So for every one of these dozens of methods, he has implemented these date parsing/formatting algorithms again and again! And no, not just copy/paste; every time it is a real wheel-reinvention. The code blocks and the positions of single code lines look different for every method.

Once Cid got too frustrated by this code, they went and reimplemented it in modern Java date APIs, shrinking the codebase by hundreds of lines.
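
The rewrite itself isn’t shown here, but as a rough sketch - not Cid’s actual code - of where java.time gets you, assuming the same yyyyMMddHHmmss'Z' string format and keeping the original method names, the whole pile collapses to something like:

import java.io.IOException;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class Timestamps {
  // 'uuuu' (not 'yyyy') is required for STRICT resolution without an era field.
  private static final DateTimeFormatter FMT = DateTimeFormatter
      .ofPattern("uuuuMMddHHmmss'Z'")
      .withResolverStyle(ResolverStyle.STRICT);

  public static String getLocalDate() {
    // BASIC_ISO_DATE is the yyyyMMdd string the switch statement built by hand.
    return LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
  }

  public static void checkTimestamp(String timestamp, String name) throws IOException {
    if (timestamp == null) {
      return; // preserving the original's odd "null is fine" behaviour
    }
    try {
      LocalDateTime.parse(timestamp, FMT); // rejects month 13, Feb 30, and so on
    } catch (DateTimeParseException e) {
      throw new IOException("Wrong date or time. (" + name + "=\"" + timestamp + "\")", e);
    }
  }

  public static String additionOfDays(String timestamp, int intervall) {
    return LocalDateTime.parse(timestamp, FMT).plusDays(intervall).format(FMT);
  }

  public static String additionOfSeconds(String timestamp, int intervall) {
    return LocalDateTime.parse(timestamp, FMT).plusSeconds(intervall).format(FMT);
  }
}

Strict resolution rejects the impossible dates the original checked for by hand, and plusDays/plusSeconds handle all the carrying across month and year boundaries.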

The full blob of code Cid sent in follows, for your “enjoyment”:

public static String getLocalDate() {
  TimeZone tz = TimeZone.getDefault();
  GregorianCalendar calendar = new GregorianCalendar(tz);
  calendar.setTime(new Date());
  StringBuffer sb = new StringBuffer();
  sb.append(calendar.get(Calendar.YEAR));
  switch (calendar.get(Calendar.MONTH)){
    case Calendar.JANUARY:
      sb.append("01");
      break;
    case Calendar.FEBRUARY:
      sb.append("02");
      break;
    case Calendar.MARCH:
      sb.append("03");
      break;
    case Calendar.APRIL:
      sb.append("04");
      break;
    case Calendar.MAY:
      sb.append("05");
      break;
    case Calendar.JUNE:
      sb.append("06");
      break;
    case Calendar.JULY:
      sb.append("07");
      break;
    case Calendar.AUGUST:
      sb.append("08");
      break;
    case Calendar.SEPTEMBER:
      sb.append("09");
      break;
    case Calendar.OCTOBER:
      sb.append("10");
      break;
    case Calendar.NOVEMBER:
      sb.append("11");
      break;
    case Calendar.DECEMBER:
      sb.append("12");
      break;
  }
  if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
    sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
  }
  else {
    sb.append(calendar.get(Calendar.DAY_OF_MONTH));
  }
  return sb.toString();
}

public static void checkTimestamp(String timestamp, String name)
throws IOException {
  if (timestamp == null) {
    return;
  }
  String msg = new String(
      "Wrong date or time. (" + name + "=\"" + timestamp + "\")");
  int len = timestamp.length();
  if (len != 15) {
    throw new IOException(msg);
  }
  for (int i = 0; i < (len - 1); i++) {
    if (! Character.isDigit(timestamp.charAt(i))) {
      throw new IOException(msg);
    }
  }
  if (timestamp.charAt(len - 1) != 'Z') {
    throw new IOException(msg);
  }
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  if (day < 1) {
    throw new IOException(msg);
  }
  if ((month < 1) || (month > 12)) {
    throw new IOException(msg);
  }
  if (month == 2) {
    if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
      if (day > 29) {
        throw new IOException(msg);
      }
    }
    else {
      if (day > 28) {
        throw new IOException(msg);
    }
    }
  }
  if (month == 1 || month == 3 || month == 5 || month == 7
  || month == 8 || month == 10 || month == 12) {
    if (day > 31) {
      throw new IOException(msg);
    }
  }
  if (month == 4 || month == 6 || month == 9 || month == 11) {
    if (day > 30) {
      throw new IOException(msg);
    }
  }
  if ((hour < 0) || (hour > 24)) {
    throw new IOException(msg);
  }
  if ((minute < 0) || (minute > 59)) {
    throw new IOException(msg);
  }
  if ((second < 0) || (second > 59)) {
    throw new IOException(msg);
  }
}

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

public static String additionOfSeconds(String timestamp, int intervall) {
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  int new_second = (second + intervall) % 60;
  int minute_intervall = (second + intervall) / 60;
  int new_minute = (minute + minute_intervall) % 60;
  int hour_intervall = (minute + minute_intervall) / 60;
  int new_hour = (hour + hour_intervall) % 24;
  int day_intervall = (hour + hour_intervall) / 24;
  StringBuffer new_time = new StringBuffer();
  if (new_hour < 10) {
    new_time.append("0" + new_hour + "");
  }
  else {
    new_time.append("" + new_hour + "");
  }
  if (new_minute < 10) {
    new_time.append("0" + new_minute + "");
  }
  else {
    new_time.append("" + new_minute + "");
  }
  if (new_second < 10) {
    new_time.append("0" + new_second + "");
  }
  else {
    new_time.append("" + new_second + "");
  }
  if (day_intervall > 0) {
    return additionOfDays(timestamp.substring(0,8) + new_time.toString() + "Z", day_intervall);
  }
  else {
    return (timestamp.substring(0,8) + new_time.toString() + "Z");
  }
}

public static int getDaysOfMonth(int year, int month) {
  int lastDayOfMonth = 31;
  switch (month) {
    case 1: case 3: case 5: case 7: case 8: case 10: case 12:
      lastDayOfMonth = 31;
      break;
    case 2:
      if ((year % 4 == 0 && year % 100 != 0) || year %400 == 0) {
        lastDayOfMonth = 29;
      }
      else {
        lastDayOfMonth = 28;
      }
      break;
    case 4: case 6: case 9: case 11:
      lastDayOfMonth = 30;
      break;
  }
  return lastDayOfMonth;
}

Cryptogram: EFF's 30th Anniversary Livestream

It's the EFF's 30th birthday, and the organization is having a celebratory livestream today from 3:00 to 10:00 pm PDT.

There are a lot of interesting discussions and things. I am having a fireside chat at 4:10 pm PDT to talk about the Crypto Wars and more.

Stop by. And thank you for supporting EFF.

EDITED TO ADD: This event is over, but you can watch a recorded version on YouTube.

Planet Debian: Antoine Beaupré: Goatcounter analytics in ikiwiki

I have started using Goatcounter for analytics after reading this LWN article called "Lightweight alternatives to Google Analytics". Goatcounter has an interesting approach to privacy in that it:

tracks sessions using a hash of the browser's user agent and IP address to identify the client without storing any personal information. The salt used to generate these hashes is rotated every 4 hours with a sliding window.

There was no Debian package for the project, so I filed a request for package and instead made a fork of the project to add a Docker image.

This page documents how Goatcounter was set up from there...

Server configuration

  1. build the image from this fork

    docker build -t zgoat/goatcounter .
    
  2. create volume for db:

    docker volume create goatcounter
    
  3. start the server:

    exec docker run --restart=unless-stopped --volume="goatcounter:/home/user/db/" --publish 127.0.0.1:8081:8080 --detach zgoat/goatcounter serve -listen :8080 -tls none
    
  4. apache configuration:

    <VirtualHost *:80>
                ServerName analytics.anarc.at
                Redirect / https://analytics.anarc.at/
                DocumentRoot /var/www/html/
        </VirtualHost>
    
    <VirtualHost *:443>
            ServerName analytics.anarc.at
            Use common-letsencrypt-ssl analytics.anarc.at
            DocumentRoot /var/www/html/
            ProxyPass /.well-known/ !
            ProxyPass / http://localhost:8081/
            ProxyPassReverse / http://localhost:8081/
            ProxyPreserveHost on
    </VirtualHost>
    
  5. add analytics.anarc.at to DNS

  6. create a TLS cert with LE:

    certbot certonly --webroot  -d analytics.anarc.at --webroot-path /var/www/html/
    

    note that goatcounter has code to do this on its own, but we avoid it to follow our existing policies and simplify things

  7. create site:

    docker run -it --rm --volume="goatcounter:/home/user/db/" zgoat/goatcounter create -domain analytics.anarc.at -email anarcat+rapports@anarc.at
    
  8. add to ikiwiki template

  9. rebuild wiki:

    ikiwiki --setup ikiwiki.setup --rebuild --verbose
    

Remaining issues

  • Docker image should be FROM scratch, this is statically built golang stuff after all...
  • move to Docker Compose or podman instead of just starting the thing by hand
  • this is all super janky and should be put in config management somehow
  • remove "anarc.at" test site (the site is the analytics site, not the tracked site), seems like this is not possible yet
  • do log parsing instead of Javascript or 1x1 images?
  • compare with goaccess logs, probably at the end of july, to have two full weeks to compare

Fixed issues

  • cache headers are wrong (120ms!): deployed a workaround in apache, reported as a bug upstream
  • remove self-referer: done, just a matter of configuring the URL in the settings. Could this be automated too?
  • add pixel tracking for noscript users: done, but required a patch to ikiwiki (and I noticed another bug while doing it)
  • goatcounter monitor doesn't work with sqlite (fixed upstream!)
  • the :8080 port leaks in some places, namely in the "Site config" documentation; that was because I was using -port 8080, which was not necessary.


Planet Debian: Ian Jackson: MessagePack vs CBOR (RFC7049)

tl;dr: Use MessagePack, rather than CBOR.

Introduction

I recently wanted to choose a binary encoding. This was for a project using Rust serde, so I looked at the list of formats there. I ended up reading about CBOR and MessagePack.

Both of these are binary formats for a JSON-like data model. Both of them are "schemaless", meaning you can decode them without knowing the structure. (This also provides some forwards compatibility.) They are, in fact, quite similar (although they are totally incompatible). This is no accident: CBOR is, effectively, a fork of MessagePack.

Both formats continue to exist and both are being used in new programs. I needed to make a choice but lacked enough information. I thought I would try to examine the reasons and nature of the split, and to make some kind of judgement about the situation. So I did a lot of reading [11]. Here are my conclusions.

History and politics

Between about 2010 and 2013 there was only MessagePack. Unfortunately, MessagePack had some problems. The biggest of these was that it lacked a separate string type. Strings were to be encoded simply as byte blocks. This caused serious problems for many MessagePack library implementors: for example, when decoding a MessagePack file the Python library wouldn't know whether to produce a Python bytes object, or a string. Straightforward data structures wouldn't round trip through MessagePack. [1] [2]

It seems that in late 2012 this came to the attention of someone with an IETF background. According to them, after unsatisfactory conversations with MessagePack upstream, they decided they would have to fork. They submitted an Internet Draft for a partially-incompatible protocol [3] [4]. Little seemed to happen in the IETF until soon before the Orlando in-person IETF meeting in February 2013.[5]

These conversations sparked some discussion in the MessagePack issue tracker. There were long threads, including about process [1,2,4 ibid]. But there was also a useful technical discussion about proposed backward compatible improvements to the MessagePack spec.[6] The prominent IETF contributor provided some helpful input in these discussions in the MessagePack community - but also pushed quite hard for a "tagging" system, which suggestion was not accepted (see my technical analysis, below).

An improved MessagePack spec resulted, with string support, developed largely by the MessagePack community. It seems to have been available in usable form since mid-2013 and was officially published as canonical in August 2013.

Meanwhile a parallel process was pursued in the IETF, based on the IETF contributor's fork, with 11 Internet-Drafts from February[7] to September[8]. This seems to have continued even though the original technical reason for the fork - lack of string vs binary distinction - no longer applied. The IETF proponent expressed unhappiness about MessagePack's stewardship and process as much as they did about the technical details [4, ibid]. The IETF process culminated in the CBOR RFC[9].

The discussion on process questions between the IETF proponent and MessagePack upstream, in the MessagePack issue tracker [4, ibid], should make uncomfortable reading for IETF members. The IETF acceptance of CBOR, despite clear and fundamental objections from MessagePack upstream[13] and indeed other respected IETF members[14], does not reflect well on the IETF. The much-vaunted openness of the IETF process seems to have been rather one-sided. The IETF proponent here was an IETF Chair. Certainly the CBOR author was very well-spoken and constantly talked about politeness and cooperation and process; but what they actually did was very hostile. They accused the MessagePack community of an "us and them" attitude while simultaneously pursuing a forked specification!

The CBOR RFC does mention MessagePack in Appendix E.2. But not to acknowledge that CBOR was inspired by MessagePack. Rather, it does so to make a set of tendentious criticisms of MessagePack. Perhaps these criticisms were true when they were first written in an I-D but they were certainly false by the time the RFC was actually published, which occurred after the MessagePack improvement process was completely concluded, with a formal spec issued.

Since then both formats have existed in parallel. Occasionally people discuss which one is better, and sometimes it is alleged that "yes CBOR is the successor to MessagePack", which is not really fair.[9][10]

Technical differences

The two formats have a similar arrangement: initial byte which can encode small integers, or type and length, or type and specify a longer length encoding. But there are important differences. Overall, MessagePack is very significantly simpler.

Floating point

CBOR supports five floating point formats! Not only three sizes of IEEE754, but also decimal floating point, and bigfloats. This seems astonishing for a supposedly-simple format. (Some of these are supported via the semi-optional tag mechanism - see below.)

Indefinite strings and arrays

Like MessagePack, CBOR mostly precedes items with their length. But CBOR also supports "indefinite" strings, arrays, and so on, where the length is not specified at the beginning. The object (array, string, whatever) is terminated by a special "break" item. This seems to me to be a mistake. If you wanted the kind of application where MessagePack or CBOR would be useful, streaming sub-objects of unknown length is not that important. This possibility considerably complicates decoders.

CBOR tagging system

CBOR has a second layer of sort-of-type which can be attached to each data item. The set of possible tags is open-ended and extensible, but the CBOR spec itself gives tag values for: two kinds of date format; positive and negative bignums; decimal floats (see above); binary but expected to be encoded if converted to JSON (in base64url, base64, or base16); nestedly encoded CBOR; URIs; base64 data (two formats); regexps; MIME messages; and a special tag to make file(1) work.

In practice it is not clear how many of these are used, but a decoder must be prepared to at least discard them. The amount of additional spec complexity here is quite astonishing. IMO binary formats like this will (just like JSON) be used by a next layer which always has an idea of what the data means, including (where the data is a binary blob) what encoding it is in etc. So these tags are not useful.

These tags might look like a middle way between (i) extending the binary protocol with a whole new type such as an extension type (incompatible with old readers) and (ii) encoding your new kind of data in an existing type (leaving all readers who don't know the schema to print it as just integers or bytes or strings). But I think they are more trouble than they are worth.

The tags are uncomfortably similar to the ASN.1 tag system, which is widely regarded as one of ASN.1's unfortunate complexities.

MessagePack extension mechanism

MessagePack explicitly reserves some encoding space for users and for future extensions: there is an "extension type". The payload is an extension type byte plus some more data bytes; the data bytes are in a format to be defined by the extension type byte. Half of the possible extension byte values are reserved for future specification, and half are designated for application use. This is pleasingly straightforward.

(There is also one unused primary initial byte value, but that would be rejected by existing decoders and doesn't seem like a likely direction for future expansion.)

Minor other differences in integer encoding

The encodings of integers differ.

In MessagePack, signed and unsigned integers have different typecodes. In CBOR, signed and unsigned positive integers have the same typecodes; negative integers have a different set of typecodes. This means that a CBOR reader which knows it is expecting a signed value will have to do a top-bit-set check on the actual data value! And a CBOR writer must check the value to choose a typecode.

MessagePack reserves fewer shortcodes for small negative integers than for small positive integers.
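
To make that concrete, here is a small sketch - mine, not from either spec or from any library, and covering only the one- and two-byte encodings of small integers - of how the two formats lay out their typecodes:

public class IntEncodings {
  // Sketch of the small-integer cases only, to show where the typecodes differ.
  static byte[] msgpackInt(long v) {
    if (v >= 0 && v <= 0x7f) return new byte[] { (byte) v };               // positive fixint
    if (v >= -32 && v < 0)   return new byte[] { (byte) v };               // negative fixint
    if (v >= 0 && v <= 0xff) return new byte[] { (byte) 0xcc, (byte) v };  // uint8: unsigned typecode
    if (v >= -128 && v < 0)  return new byte[] { (byte) 0xd0, (byte) v };  // int8: separate signed typecode
    throw new IllegalArgumentException("wider encodings omitted from this sketch");
  }

  static byte[] cborInt(long v) {
    int major = v >= 0 ? 0 : 1;    // one major type covers every non-negative value
    long n = v >= 0 ? v : -1 - v;  // negative values are a separate major type, stored as -1 - v
    if (n <= 23)   return new byte[] { (byte) ((major << 5) | n) };
    if (n <= 0xff) return new byte[] { (byte) ((major << 5) | 24), (byte) n };
    throw new IllegalArgumentException("wider encodings omitted from this sketch");
  }

  // msgpackInt(100)  -> { 0x64 }          cborInt(100)  -> { 0x18, 0x64 }
  // msgpackInt(-5)   -> { 0xfb }          cborInt(-5)   -> { 0x24 }
  // msgpackInt(200)  -> { 0xcc, 0xc8 }    cborInt(200)  -> { 0x18, 0xc8 }
  // msgpackInt(-100) -> { 0xd0, 0x9c }    cborInt(-100) -> { 0x38, 0x63 }
}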

Conclusions and lessons

MessagePack seems to have been prompted into fixing the missing string type problem, but only by the threat of a fork. However, this fork went ahead even after MessagePack clearly accepted the need for a string type. MessagePack had a fixed protocol spec before the IETF did.

The continued pursuit of the IETF fork was ostensibly motivated by a disapproval of the development process and in particular a sense that the IETF process was superior. However, it seems to me that the IETF process was abused by CBOR's proponent, who just wanted things their own way. I have seen claims by IETF proponents that the open decisionmaking system inherently produces superior results. However, in this case the IETF process produced a bad specification. To the extent that other IETF contributors had influence over the ultimate CBOR RFC, I don't think they significantly improved it. CBOR has been described as MessagePack bikeshedded by the IETF. That would have been bad enough, but I think it's worse than that. To a large extent CBOR is one person's NIH-induced bad design rubber-stamped by the IETF. CBOR's problems are not simply matters of taste: it's significantly overcomplicated.

One lesson for the rest of us is that although being the upstream and nominally in charge of a project seems to give us a lot of power, it's wise to listen carefully to one's users and downstreams. Once people are annoyed enough to fork, the fork will have a life of its own.

Another lesson is that many of us should be much warier of the supposed moral authority of the IETF. Many IETF standards are awful (Oauth 2 [12]; IKE; DNSSEC; the list goes on). Sometimes (especially when network adoption effects are weak, as with MessagePack vs CBOR) better results can be obtained from a smaller group, or even an individual, who simply need the thing for their own uses.

Finally, governance systems of public institutions like the IETF need to be robust in defending the interests of outsiders (and hence of society at large) against eloquent insiders who know how to work the process machinery. Any institution which nominally serves the public good faces a constant risk of devolving into self-servingness. This risk gets worse the more powerful and respected the institution becomes.

References

  1. #13: First-class string type in serialization specification (MessagePack issue tracker, June 2010 - August 2013)
  2. #121: Msgpack can't differentiate between raw binary data and text strings (MessagePack issue tracker, November 2012 - February 2013)
  3. draft-bormann-apparea-bpack-00: The binarypack JSON-like representation format (IETF Internet-Draft, October 2012)
  4. #129: MessagePack should be developed in an open process (MessagePack issue tracker, February 2013 - March 2013)
  5. Re: JSON mailing list and BoF (IETF apps-discuss mailing list message from Carsten Bormann, 18 February 2013)
  6. #128: Discussions on the upcoming MessagePack spec that adds the string type to the protocol (MessagePack issue tracker, February 2013 - August 2013)
  7. draft-bormann-apparea-bpack-01: The binarypack JSON-like representation format (IETF Internet-Draft, February 2013)
  8. draft-bormann-cbor: Concise Binary Object Representation (CBOR) (IETF Internet-Drafts, May 2013 - September 2013)
  9. RFC 7049: Concise Binary Object Representation (CBOR) (October 2013)
  10. "MessagePack should be replaced with [CBOR] everywhere ..." (floatboth on Hacker News, 8th April 2017)
  11. Discussion with very useful set of history links (camgunz on Hacker News, 9th April 2017)
  12. OAuth 2.0 and the Road to Hell (Eran Hammer, blog posting from 2012, via Wayback Machine)
  13. Re: [apps-discuss] [Json] msgpack/binarypack (Re: JSON mailing list and BoF) (IETF list message from Sadyuki Furuhashi, 4th March 2013)
  14. "no apologies for complaining about this farce" (IETF list message from Phillip Hallam-Baker, 15th August 2013)
Edited 2020-07-14 18:55 to fix a minor formatting issue, and 2020-07-14 22:54 to fix two typos




Krebs on Security: ‘Wormable’ Flaw Leads July Microsoft Patches

Microsoft today released updates to plug a whopping 123 security holes in Windows and related software, including fixes for a critical, “wormable” flaw in Windows Server versions that Microsoft says is likely to be exploited soon. While this particular weakness mainly affects enterprises, July’s care package from Redmond has a little something for everyone. So if you’re a Windows (ab)user, it’s time once again to back up and patch up (preferably in that order).

Top of the heap this month in terms of outright scariness is CVE-2020-1350, which concerns a remotely exploitable bug in more or less all versions of Windows Server that attackers could use to install malicious software simply by sending a specially crafted DNS request.

Microsoft said it is not aware of reports that anyone is exploiting the weakness (yet), but the flaw has been assigned a CVSS score of 10, which translates to “easy to attack” and “likely to be exploited.”

“We consider this to be a wormable vulnerability, meaning that it has the potential to spread via malware between vulnerable computers without user interaction,” Microsoft wrote in its documentation of CVE-2020-1350. “DNS is a foundational networking component and commonly installed on Domain Controllers, so a compromise could lead to significant service interruptions and the compromise of high level domain accounts.”

CVE-2020-1350 is just the latest worry for enterprise system administrators in charge of patching dangerous bugs in widely-used software. Over the past couple of weeks, fixes for flaws with high severity ratings have been released for a broad array of software products typically used by businesses, including Citrix, F5, Juniper, Oracle and SAP. This at a time when many organizations are already short-staffed and dealing with employees working remotely thanks to the COVID-19 pandemic.

The Windows Server vulnerability isn’t the only nasty one addressed this month that malware or malcontents can use to break into systems without any help from users. A full 17 other flaws fixed in this release earned Microsoft’s most dire “critical” rating, including weaknesses in Office, Internet Exploder, SharePoint, Visual Studio, and Microsoft’s .NET Framework.

Some of the more eyebrow-raising critical bugs addressed this month include CVE-2020-1410, which according to Recorded Future concerns the Windows Address Book and could be exploited via a malicious vcard file. Then there’s CVE-2020-1421, a flaw in the way Windows handles potentially malicious .LNK files (think Stuxnet) that could be exploited via an infected removable drive or remote share. And we have the dynamic duo of CVE-2020-1435 and CVE-2020-1436, which involve problems with the way Windows handles images and fonts that could both be exploited to install malware just by getting a user to click a booby-trapped link or document.

Not to say flaws rated “important” as opposed to critical aren’t also a concern. Chief among those is CVE-2020-1463, a problem within Windows 10 and Server 2016 or later that was detailed publicly prior to this month’s Patch Tuesday.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for a particular Windows update to hose one’s system or prevent it from booting properly, and some updates even have been known to erase or corrupt files. Last month’s bundle of joy from Microsoft sent my Windows 10 system into a perpetual crash state. Thankfully, I was able to restore from a recent backup.

So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

Also, keep in mind that Windows 10 is set to apply patches on its own schedule, which means if you delay backing up you could be in for a wild ride. If you wish to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches whenever it sees fit, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips. Also, keep an eye on the AskWoody blog from Woody Leonhard, who keeps a reliable lookout for buggy Microsoft updates each month.

Planet DebianMarkus Koschany: My Free Software Activities in June 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in July) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be of interest to you.

Debian Games

Short news

  • The last month saw a new upstream release of Minetest (version 5.3), a multi-player sandbox game similar to Minecraft. A backport to buster-backports will follow shortly.
  • Asher Gordon helped release a new version of Berusky 2, a Sokoban-like logic game in 3D. The game received several improvements including bug fixes, code polishing and a new way to access the data files. Previously those files were all packed in a special container format, but now they can be accessed directly without having to rely on an unarchiver. I have uploaded this version as 0.12-1 to Debian unstable.
  • I tested an upstream patch for empire to address the build failure with GCC 10. This one is a better solution than the currently implemented workaround and I expect it to be included in the next upstream release.
  • I fixed two FTBFS in simutrans-pak64 and simutrans-pak128.britain, two addon packages for the simulation game simutrans.

Debian Java

  • New upstream versions this month: hsqldb, libpdfbox2-java, jackson-jr, jackson-dataformat-xml and jackson-databind. The latter upload addressed several security vulnerabilities, which have become rather minor because upstream now enables safe default typing. Nevertheless I have prepared a buster-security update as well, which is already available in buster-proposed-updates.

Misc

  • I packaged new versions of wabt, privacybadger and binaryen and applied another upstream patch for xarchiver to address the incomplete fix for Debian bug #959914, to better handle encrypted multi-volume 7zip archives.
  • By popular request I uploaded imlib2 version 1.6 to buster-backports because the image library supports the webp format now.

Debian LTS

This was my 52nd month as a paid contributor and I have been paid to work 60 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2278-1. Issued a security update for squid3 fixing 19 CVE.
  • DLA-2279-1. Issued a security update for tomcat8 fixing 2 CVE.
  • Prepared and uploaded a stretch-pu update for jackson-databind fixing 20 CVE. (#964727)
  • Synced the proftpd-dfsg version from Jessie with Stretch to address a memory leak which leads to a denial of service, and corrected the version number to make seamless updates work.
  • Prepared the security update for imagemagick triaging and/or fixing 76 CVE.
  • Worked on updating the database about embedded code copies to determine how packages are affected by security vulnerabilities in embedded code copies. This included a) compiling a list of important packages which are regularly affected by CVEs, b) investigating if embedded code copies are present, c) determining the possible impact of a security vulnerability in those embedded code copies, and d) writing a script that automates printing those findings on demand.

Thanks for reading and see you next time.

CryptogramEnigma Machine for Sale

A four-rotor Enigma machine -- with rotors -- is up for auction.

CryptogramHalf a Million IoT Passwords Leaked

It is amazing that this sort of thing can still happen:

...the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.

Telnet? Default passwords? In 2020?

We have a long way to go to secure the IoT.

EDITED TO ADD (7/14): Apologies, but I previously blogged this story in January.

Worse Than FailureRepresentative Line: An Exceptional Leader

IniTech’s IniTest division makes a number of hardware products, like a protocol analyzer which you can plug into a network and use to monitor data in transport. As you can imagine, it involves a fair bit of software, and it involves a fair bit of hardware. Since it’s a testing and debugging tool, reliability, accuracy, and stability are the watchwords of the day.

Which is why the software development process was overseen by Russel. Russel was the “Alpha Geek”, blessed by the C-level to make sure that the software was up to snuff. This led to some conflict- Russel had a bad habit of shoulder-surfing his fellow developers and telling them what to type- but otherwise worked very well. Foibles aside, Russel was technically competent, knew the problem domain well, and had a clean, precise, and readable coding style which all the other developers tried to imitate.

It was that last bit which got Ashleigh’s attention. Because, scattered throughout the entire C# codebase, there are exception handlers which look like this:

try
{
	// some code, doesn't matter what
	// ...
}
catch (Exception ex)
{
   ex = ex;
}

This isn’t the sort of thing which one developer did. Nearly everyone on the team had a commit like that, and when Ashleigh asked about it, she was told “It’s just a best practice. We’re following Russel’s lead. It’s for debugging.”

Ashleigh asked Russel about it, but he just grumbled and had no interest in talking about it beyond, “Just… do it if it makes sense to you, or ignore it. It’s not necessary.”

If it wasn’t necessary, why was it so common in the codebase? Why was everyone “following Russel’s lead”?

Ashleigh tracked down the original commit which started this pattern. It was made by Russel, but the exception handler had one tiny, important difference:

catch (Exception ex)
{
   ex = ex; //putting this here to set a breakpoint
}

Yes, this was just a bit of debugging code. It was never meant to be committed. Russel pushed it into the main history by accident, and the other developers saw it, and thought to themselves, “If Russel does it, it must be the right thing to do,” and started copying him.

By the time Russel noticed what was going on, it was too late. The standard had been set while he wasn’t looking, and whether it was ego or cowardice, Russel just could never get the team to follow his lead away from the pointless pattern.


Planet DebianRussell Coker: Debian PPC64EL Emulation

In my post on Debian S390X Emulation [1] I mentioned having problems booting a Debian PPC64EL kernel under QEMU. Giovanni commented that they had PPC64EL working and gave a link to their site with Debian QEMU images for various architectures [2]. I tried their image which worked then tried mine again which also worked – it seemed that a recent update in Debian/Unstable fixed the bug that made QEMU not work with the PPC64EL kernel.

Here are the instructions on how to do it.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/ppc64
mkfs.ext4 /vmstore/ppc64
mount -o loop /vmstore/ppc64 /mnt/tmp

Then visit the Debian Netinst page [3] to download the PPC64EL net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The qemu-system-ppc package has the program for emulating a PPC64LE system; the qemu-user-static package has the program for emulating PPC64LE for a single program (i.e. a statically linked program or a chroot environment), which you need to run debootstrap. The following commands should be most of what you need.

apt install qemu-system-ppc qemu-user-static

update-binfmts --display

# qemu ppc64 needs exec stack to solve "Could not allocate dynamic translator buffer"
# so enable that on SE Linux systems
setsebool -P allow_execstack 1

debootstrap --foreign --arch=ppc64el --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

cat << END > /mnt/tmp/etc/apt/sources.list
deb http://mirror.internode.on.net/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

echo ppc64 > /mnt/tmp/etc/hostname

# /usr/bin/awk: error while loading shared libraries: cannot restore segment prot after reloc: Permission denied
# only needed for chroot
setsebool allow_execmod 1

chroot /mnt/tmp apt update
# why aren't they in the default install?
chroot /mnt/tmp apt install perl dialog
chroot /mnt/tmp apt dist-upgrade
chroot /mnt/tmp apt install bash-completion locales man-db openssh-server build-essential systemd-sysv ifupdown vim ca-certificates gnupg
# install kernel last because systemd install rebuilds initrd
chroot /mnt/tmp apt install linux-image-ppc64el
chroot /mnt/tmp dpkg-reconfigure locales
chroot /mnt/tmp passwd

cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

mkdir /mnt/tmp/root/.ssh
chmod 700 /mnt/tmp/root/.ssh
cp ~/.ssh/id_rsa.pub /mnt/tmp/root/.ssh/authorized_keys
chmod 600 /mnt/tmp/root/.ssh/authorized_keys

rm /mnt/tmp/vmlinux* /mnt/tmp/initrd*
mkdir /boot/ppc64
cp /mnt/tmp/boot/[vi]* /boot/ppc64

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Here is an example script for starting kvm. It can be run by any user that can read /etc/qemu/bridge.conf.

#!/bin/bash
set -e

KERN="kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le"

# single network device, can have multiple
NET="-device e1000,netdev=net0,mac=02:02:00:00:01:04 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"

# random number generator for fast start of sshd etc
RNG="-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"

# I have lockdown because it does no harm now and is good for future kernels
# I enable SE Linux everywhere
KERNCMD="net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality"

kvm -drive format=raw,file=/vmstore/ppc64,if=virtio $RNG -nographic -m 1024 -smp 2 $KERN -curses -append "$KERNCMD" $NET


Krebs on SecurityBreached Data Indexer ‘Data Viper’ Hacked

Data Viper, a security startup that provides access to some 15 billion usernames, passwords and other information exposed in more than 8,000 website breaches, has itself been hacked and its user database posted online. The hackers also claim they are selling on the dark web roughly 2 billion records Data Viper collated from numerous breaches and data leaks, including data from several companies that likely either do not know they have been hacked or have not yet publicly disclosed an intrusion.

The apparent breach at St. Louis, Mo. based Data Viper offers a cautionary and twisted tale of what can happen when security researchers seeking to gather intelligence about illegal activity online get too close to their prey or lose sight of their purported mission. The incident also highlights the often murky area between what’s legal and ethical in combating cybercrime.

Data Viper is the brainchild of Vinny Troia, a security researcher who runs a cyber threat intelligence company called Night Lion Security. Since its inception in 2018, Data Viper has billed itself as a “threat intelligence platform designed to provide organizations, investigators and law enforcement with access to the largest collection of private hacker channels, pastes, forums and breached databases on the market.”

Many private companies sell access to such information to vetted clients — mainly law enforcement officials and anti-fraud experts working in security roles at major companies that can foot the bill for these often pricey services.

Data Viper has sought to differentiate itself by advertising “access to private and undisclosed breach data.” As KrebsOnSecurity noted in a 2018 story, Troia has acknowledged posing as a buyer or seller on various dark web forums as a way to acquire old and newly-hacked databases from other forum members.

But this approach may have backfired over the weekend, when someone posted to the deep web a link to an “e-zine” (electronic magazine) describing the Data Viper hack and linking to the Data Viper user base. The anonymous poster alleged he’d been inside Data Viper for months and had exfiltrated hundreds of gigabytes of breached data from the service without notice.

The intruder also linked to several dozen new sales threads on the dark web site Empire Market, where they advertise the sale of hundreds of millions of account details from dozens of leaked or hacked website databases that Data Viper allegedly acquired via trading with others on cybercrime forums.

An online post by the attackers who broke into Data Viper.

Some of the databases for sale tie back to known, publicly reported breaches. But others correspond to companies that do not appear to have disclosed a security incident. As such, KrebsOnSecurity is not naming most of those companies and is currently attempting to ascertain the validity of the claims.

KrebsOnSecurity did speak with Victor Ho, the CEO of Fivestars.com, a company that helps smaller firms run customer loyalty programs. The hackers claimed they are selling 44 million records taken from Fivestars last year. Ho said he was unaware of any data security incident and that no such event had been reported to his company, but that Fivestars is now investigating the claims. Ho allowed that the number of records mentioned in the dark web sales thread roughly matches the number of users his company had last year.

But on Aug. 3, 2019, Data Viper’s Twitter account casually noted, “FiveStars — 44m breached records added – incl Name, Email, DOB.” The post, buried among a flurry of similar statements about huge caches of breached personal information added to Data Viper, received hardly any attention and garnered just one retweet.

GNOSTIC PLAYERS, SHINY HUNTERS

Reached via Twitter, Troia acknowledged that his site had been hacked, but said the attackers only got access to the development server for Data Viper, and not the more critical production systems that power the service and which house his index of compromised credentials.

Troia said the people responsible for compromising his site are the same people who hacked the databases they are now selling on the dark web and claiming to have obtained exclusively from his service.

What’s more, Troia believes the attack was a preemptive strike in response to a keynote he’s giving in Boston this week: On June 29, Troia tweeted that he plans to use the speech to publicly expose the identities of the hackers, who he suspects are behind a large number of website break-ins over the years.

Hacked or leaked credentials are prized by cybercriminals engaged in “credential stuffing,” a rampant form of cybercrime that succeeds when people use the same passwords across multiple websites. Armed with a list of email addresses and passwords from a breached site, attackers will then automate login attempts using those same credentials at hundreds of other sites.

Password re-use becomes orders of magnitude more dangerous when website developers engage in this unsafe practice. Indeed, a January 2020 post on the Data Viper blog suggests credential stuffing is exactly how the group he plans to discuss in his upcoming talk perpetrated their website compromises.

In that post, Troia wrote that the hacker group, known variously as “Gnostic Players” and “Shiny Hunters,” plundered countless website databases using roughly the same method: Targeting developers using credential stuffing attacks to log into their GitHub accounts.

“While there, they would pillage the code repositories, looking for AWS keys and similar credentials that were checked into code repositories,” Troia wrote.

Troia said the intrusion into his service wasn’t the result of credential re-use, but instead happened because his developer accidentally left his credentials exposed in documents explaining how customers can use Data Viper’s application programming interface.

“I will say the irony of how they got in is absolutely amazing,” Troia said. “But all of this stuff they claim to be selling is [databases] they were already selling. All of this is from Gnostic players. None of it came from me. It’s all for show to try and discredit my report and my talk.”

Troia said he didn’t know how many of the databases Gnostic Players claimed to have obtained from his site were legitimate hacks or even public yet.

“As for public reporting on the databases, a lot of that will be in my report Wednesday,” he said. “All of my ‘reporting’ goes to the FBI.”

SMOKE AND MIRRORS

The e-zine produced by the Data Viper hackers claimed that Troia used many nicknames on various cybercrime forums, including the moniker “Exabyte” on OGUsers, a forum that’s been closely associated with account takeovers.

In a conversation with KrebsOnSecurity, Troia acknowledged that this Exabyte attribution was correct, noting that he was happy about the exposure because it further solidified his suspicions about who was responsible for hacking his site.

This is interesting because some of the hacked databases the intruders claimed to have acquired after compromising Data Viper correspond to discoveries credited to Troia in which companies inadvertently exposed tens of millions of user details by leaving them publicly accessible online at cloud services like Amazon’s EC2.

For example, in March 2019, Troia said he’d co-discovered a publicly accessible database containing 150 gigabytes of plaintext marketing data — including 763 million unique email addresses. The data had been exposed online by Verifications.io, an email validation firm.

On Oct 12, 2019, a new user named Exabyte registered on RaidForums — a site dedicated to sharing hacked databases and tools to perpetrate credential stuffing attacks. That Exabyte account was registered less than two weeks after Troia created his Exabyte identity on OGUsers. The Exabyte on RaidForums posted on Dec. 26, 2019 that he was providing the community with something of a belated Christmas present: 200 million accounts leaked from Verifications.io.

“Verifications.io is finally here!” Exabyte enthused. “This release contains 69 of 70 of the original verifications.io databases, totaling 200+ million accounts.”

Exabyte’s offer of the Verifications.io database on RaidForums.

In May 2018, Troia was featured in Wired.com and many other publications after discovering that sales intelligence firm Apollo left 125 million email addresses and nine billion data points publicly exposed in a cloud service. As I reported in 2018, prior to that disclosure Troia had sought my help in identifying the source of the exposed data, which he’d initially and incorrectly concluded was exposed by LinkedIn.com. Rather, Apollo had scraped and collated the data from many different sites, including LinkedIn.

Then in August 2018, someone using the nickname “Soundcard” posted a sales thread to the now-defunct Kickass dark web forum offering the personal information of 212 million LinkedIn users in exchange for two bitcoin (then the equivalent of ~$12,000 USD). Incredibly, Troia had previously told me that he was the person behind that Soundcard identity on the Kickass forum.

Soundcard, a.k.a. Troia, offering to sell what he claimed was all of LinkedIn’s user data, on the Dark Web forum Kickass.

Asked about the Exabyte posts on RaidForums, Troia said he wasn’t the only one who had access to the Verifications.io data, and that the full scope of what’s been going on would become clearer soon.

“More than one person can have the same name ‘Exabyte,’” Troia said. “So much from both sides you are seeing is smoke and mirrors.”

Smoke and mirrors, indeed. It’s entirely possible this incident is an elaborate and cynical PR stunt by Troia to somehow spring a trap on the bad guys. Troia recently published a book on threat hunting, and on page 360 (PDF) he describes how he previously staged a hack against his own site and then bragged about the fake intrusion on cybercrime forums in a bid to gather information about specific cybercriminals who took the bait — the same people, by the way, he claims are behind the attack on his site.

MURKY WATERS

While the trading of hacked databases may not technically be illegal in the United States, it’s fair to say the U.S. Department of Justice (DOJ) takes a dim view of those who operate services marketed to cybercriminals.

In January 2020, U.S. authorities seized the domain of WeLeakInfo.com, an online service that for three years sold access to data hacked from other websites. Two men were arrested in connection with that seizure. In February 2017, the Justice Department took down LeakedSource, a service that operated similarly to WeLeakInfo.

The DOJ recently released guidance (PDF) to help threat intelligence companies avoid the risk of prosecution when gathering and purchasing data from illicit sources online. The guidelines suggest that some types of intelligence gathering — particularly exchanging ill-gotten information with others on crime forums as a way to gain access to other data or to increase one’s status on the forum — could be especially problematic.

“If a practitioner becomes an active member of a forum and exchanges information and communicates directly with other forum members, the practitioner can quickly become enmeshed in illegal conduct, if not careful,” reads the Feb. 2020 DOJ document.

The document continues:

“It may be easier for an undercover practitioner to extract information from sources on the forum who have learned to trust the practitioner’s persona, but developing trust and establishing bona fides as a fellow criminal may involve offering useful information, services, or tools that can be used to commit crimes.”

“Engaging in such activities may well result in violating federal criminal law. Whether a crime has occurred usually hinges on an individual’s actions and intent. A practitioner must avoid doing anything that furthers the criminal objectives of others on the forums. Even though the practitioner has no intention of committing a crime, assisting others engaged in criminal conduct can constitute the federal offense of aiding and abetting.”

“An individual may be found liable for aiding and abetting a federal offense if he or she takes an affirmative act — even an act that is lawful on its own — that is in furtherance of the crime and conducted with the intent of facilitating the crime’s commission.”

Planet DebianAntoine Beaupré: Not recommending Purism

This is just a quick note to mention that I have updated my hardware documentation on the Librem 13v4 laptop. It has unfortunately turned into a rather lengthy (and ranty) piece about Purism. Let's just say that waiting weeks for your replacement laptop (yes, it died again) does wonders for creativity. To quote the full review:

TL;DR: I recommend people avoid the Purism brand and products. I find they have questionable politics, operate in a "libre-washing" fashion, and produce unreliable hardware. Will not buy again.

People who have read the article might want to jump directly to the new sections:

I have also added the minor section of the missing mic jack.

I realize that some folks (particularly at Debian) might still work at Purism, and that this article might be demoralizing for their work. If that is the case, I am sorry this article triggered you in any way and I hope this can act as a disclaimer. But I feel it is my duty to document the issues I am going through, as a user, and to call bullshit when I see it (let's face it, the anti-interdiction stuff and the Librem 5 crowd-funding campaign were total bullshit).

I also understand that the pandemic makes life hard for everyone, and probably makes a bad situation at Purism worse. But those problems existed before the pandemic happened. They were issues I had identified in 2019 and that I simply never got around to document.

I wish that people wishing to support the free software movement would spend their energy towards organisations that actually do honest work in that direction, like System76 and Pine64. And if you're going to go crazy with an experimental free hardware design, why not go retro with the MNT Reform.

In the meantime, if you're looking for a phone, I recommend you give the Fairphone a fair chance. It really is a "fair" (as in, not the best, but okay) phone that you can moderately liberate, and it actually frigging works. See also my hardware review of the FP2.

Update: this kind of blew up, for my standards: 10k visitors in ~24h while I usually get about 1k visitors after a week on any regular blog post. There were more discussions on the subject here:

Trigger warning: some of those threads include personal insults and explicitly venture into the free speech discussion, with predictable (sad) consequences...

Cory DoctorowFull Employment

This week’s podcast is a reading of Full Employment, my latest Locus column. It’s a counter to the argument about automation-driven unemployment – namely, that we will have hundreds of years of full employment facing the climate emergency and remediating the damage it wreaks. From relocating all our coastal cities to replacing aviation routes with high-speed rails to the caring and public health work for hundreds of millions of survivors of plagues, floods and fires, we are in no danger of running out of work. The real question is: how will we mobilize people to do the work needed to save our species and the only known planet in the entire universe that can sustain it?

MP3

Planet DebianBits from Debian: Debian Long Term Support (LTS) users and contributors survey

On July 18th Stretch LTS starts, offering two more years of security support to the Debian Stretch release. Stretch LTS will be the fourth iteration of LTS, following Squeeze LTS which started in 2014, Wheezy LTS in 2016 and Jessie LTS in 2018.

For the first time, we have prepared a small survey about our users and contributors: who they are and why they are using LTS.

Filling out the survey should take less than 10 minutes. We would really appreciate it if you could participate in the survey online!

In two weeks (July 27th 2020) we will close the survey, so please don't hesitate and participate now! After that, there will be a followup email with the results.

More information about Debian LTS is available at https://wiki.debian.org/LTS, including generic contact information.

Click here to fill out the survey now!

CryptogramA Peek into the Fake Review Marketplace

A personal account of someone who was paid to buy products on Amazon and leave fake reviews.

Fake reviews are one of those problems that everyone knows about, and no one knows what to do about -- so we all try to pretend they don't exist.

Worse Than FailureA Revolutionary Vocabulary

Changing the course of a large company is much like steering the Titanic: it's probably too late, it's going to end in tears, and for some reason there's going to be a spirited debate about the buoyancy and stability of the doors.

Shena works at Initech, which is already a gigantic, creaking organization on the verge of toppling over. Management recognizes the problems, and knows something must be done. They are not, however, particularly clear about what that something should actually be, so they handed the Project Management Office a budget, told them to bring in some consultants, and do something.

The PMO dutifully reviewed the list of trendy buzzwords in management magazines, evaluated their budget, and brought in a team of consultants to "Establish a culture of continuous process improvement" that would "implement Agile processes" and "break down silos" to ensure "high functioning teams that can successfully self-organize to meet institutional objectives on time and on budget" using "the best-in-class tools" to support the transition.

Any sort of organizational change is potentially scary, to at least some of the staff. No matter how toxic or dysfunctional an organization is, there's always someone who likes the status quo. There was a fair bit of resistance, but the consultants and PMO were empowered to deal with them, laying off the fortunate, or securing promotions to vaguely-defined make-work jobs for the deeply unlucky.

There were a handful of true believers, the sort of people who had landed in their boring corporate gig years before, and had spent their time gently suggesting that things could possibly be better, slightly. They saw the changes as an opportunity, at least until they met the reality of trying to actually commit to changes in an organization the size of Initech.

The real hazard, however, was the members of the Project Management Office who didn't actually care about Initech, their peers, or process change: they cared about securing their own little fiefdom of power. People like Debbie, who, before the consultants came, had created a series of "Project Checkpoint Documents". Each project was required to fill out the 8 core documents before any other work began, and Debbie was the one who reviewed them- which meant projects didn't progress without her say-so. Or Larry, who was a developer before moving into project management, and thus was in charge of the code review processes for the entire company, despite not having written anything in a language newer than COBOL85.

Seeing that the organizational changes would threaten their power, people like Debbie or Larry did the only thing they could do: they enthusiastically embraced the changes and labeled themselves the guardians of the revolution. They didn't need to actually do anything good, they didn't need to actually facilitate the changes, they just needed to show enthusiasm and look busy, and generate the appearance that they were absolutely critical to the success of the transition.

Debbie, specifically, got herself very involved in driving the adoption of Jira as their ticket tracking tool, instead of the hodge-podge of Microsoft Project, spreadsheets, emails, and home-grown ticketing systems. Since this involved changing the vocabulary they used to talk about projects, it meant Debbie could spend much of her time policing the language used to describe projects. She ran trainings to explain what an "Epic" or a "Story" were, about how to "rightsize stories so you can decompose them into actionable tasks". But everything was in flux, which meant the exact way Initech developers were meant to use Jira kept changing, almost on a daily basis.

Which is why Shena eventually received this email from the Project Management Office.

Teams,

As part of our process improvement efforts, we'll be making some changes to how we track work in JIRA. Epics are now to only be created by leadership. They will represent mission-level initiatives that we should all strive for. For all development work tracking, the following shall be the process going forward to account for the new organizational communication directive:

  • Treat Features as Epics
  • Treat Stories as Features
  • Treat Tasks as Stories
  • Treat Sub-tasks as Tasks
  • If you need Sub-tasks, create a spreadsheet to track them within your team.

Additionally, the following is now reflected in the status workflows and should be adhered to:

  • Features may not be deleted once created. Instead, use the Cancel functionality.
  • Cancelled tasks will be marked as Done
  • Done tasks should now be marked as Complete

As she read this glorious and transcendent piece of Newspeak, Shena couldn't help but wonder about her laid off co-workers, and wonder if perhaps she shouldn't join them.



Planet DebianAntoine Beaupré: On contact tracing apps

I have strong doubts about the efficiency of any tracing app of the sort, and even less in the context where it is unlikely that a majority of the population will use it.

There's also the problem that this app would need to work on Apple phones, or be incompatible with them, and cause significant "fracture" between those who have access to technology, and those who haven't. See this text for more details.

Such an app would be a security and privacy liability at no benefit to public health. There are better options; see for instance this research on hardware tokens. But I doubt any contact tracing app or hardware will actually work anyways.

I am a computer engineer with more than 20 years of experience in the domain, and I have been following this question closely.

Please don't do this.

I wrote the above in a response to the Québec government's survey about a possible tracing app.

Update: a previous version of this article was titled plainly "on contact tracing". In case that was not obvious, I definitely do not object to contact tracing per se. I believe it's a fundamental, critical, and important part of fighting the epidemic and I think we should do it. I do not believe any engineer has found a proper way of doing it with "apps" so far, but I do not deny the utility and importance of "contact tracing" itself. Apologies for the confusion.


For a reason I can't quite explain, the survey was sent to me in English, so I wrote my response in the language of Shakespeare rather than that of Molière... I will be happy to provide a French translation to those who need it...

Planet DebianEnrico Zini: Police brutality links

I was a police officer for nearly ten years and I was a bastard. We all were.
As nationwide protests over the deaths of George Floyd and Breonna Taylor are met with police brutality, John Oliver discusses how the histories of policing ...
The death of Stefano Cucchi occurred in Rome on 22 October 2009 while the young man was being held in pre-trial detention. The causes of death and the responsibilities are the subject of judicial proceedings that have involved, on one side, the doctors of the Pertini hospital,[1][2][3][4] and on the other continue to involve, in various capacities, several members of the Arma dei Carabinieri[5][6]. The case attracted public attention following the publication of the autopsy photos, later picked up by Italian news agencies, newspapers and television news[7]. The story has also inspired documentaries and feature films.[8][9][10]
The death of Giuseppe Uva occurred on 14 June 2008 after he had been stopped, drunk, during the night between 13 and 14 June by two carabinieri who took him to the barracks, from which he was then transferred, for compulsory medical treatment, to the hospital in Varese, where he died the following morning of cardiac arrest. According to the prosecution, the death was caused by the physical restraint suffered during the arrest and by the subsequent violence and torture he endured in the barracks. The trial against the two carabinieri who carried out the arrest and against six other police officers acquitted the defendants of the charges of unintentional homicide and kidnapping[1][2][3][4]. The documentary Viva la sposa by Ascanio Celestini is dedicated to the case[1][5].
The Aldrovandi case is the judicial affair arising from the killing of Federico Aldrovandi, a student from Ferrara, which occurred on 25 September 2005 following a police check.[1][2][3] On 6 July 2009 the proceedings sentenced four police officers to 3 years and 6 months in prison for "culpable excess in the legitimate use of weapons";[1][4] on 21 June 2012 the Court of Cassation upheld the conviction.[1] The investigation to establish the cause of death was followed by others into alleged cover-ups and the complaints between the parties involved.[1] The case received great media attention and inspired a documentary, È stato morto un ragazzo.[1][5]
Federico Aldrovandi (17 July 1987 in Ferrara – 25 September 2005 in Ferrara) was an Italian student, who was killed by four policemen.[1]
24 June 2020

Planet DebianEvgeni Golov: Using Ansible Molecule to test roles in monorepos

Ansible Molecule is a toolkit for testing Ansible roles. It allows for easy execution and verification of your roles and also manages the environment (container, VM, etc) in which those are executed.

In the Foreman project we have a collection of Ansible roles to set up Foreman instances called forklift. The roles vary from configuring Libvirt and Vagrant for our CI to deploying full-fledged Foreman and Katello setups with Proxies and everything. The repository also contains a dynamic Vagrantfile that can generate Foreman and Katello installations on all supported Debian, Ubuntu and CentOS platforms using the previously mentioned roles. This feature is super helpful when you need to debug something specific to an OS/version combination.

Up until recently, none of those roles had any tests. We would run ansible-lint on them, but that was it.

As I am planning to do some heavier work on some of the roles to enhance our upgrade testing, I decided to add some tests first. Using Molecule, of course.

Adding Molecule to an existing role is easy: molecule init scenario -r my-role-name will add all the necessary files/examples for you. It's left as an exercise to the reader how to actually test the role properly as this is not what this post is about.

Executing the tests with Molecule is also easy: molecule test. And there are also examples of how to integrate the test execution with the common CI systems.

But what happens if you have more than one role in the repository? Molecule has support for monorepos; however, that support is rather limited: it will detect the role path correctly, so roles can depend on other roles from the same repository, but it won't find and execute tests for roles if you run it from the repository root. There is an undocumented way to set MOLECULE_GLOB so that Molecule would detect test scenarios in different paths, but I couldn't get it to work nicely for executing tests of multiple roles, and upstream currently does not plan to implement this. Well, bash to the rescue!

for roledir in roles/*/molecule; do
    pushd $(dirname $roledir)
    molecule test
    popd
done

Add that to your CI and be happy! The CI will execute all available tests and you can still execute those for the role you're hacking on by just calling molecule test as you're used to.
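For illustration, this is roughly what a CI job wrapping that loop could look like as a GitHub Actions workflow. This is a sketch only, not taken from the forklift repository; the Docker driver, the Python version and the roles/<rolename>/ layout are assumptions:

name: molecule
on: [push, pull_request]
jobs:
  molecule:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      # Molecule with the Docker driver, plus the linters the roles use
      - name: Install test dependencies
        run: pip install 'molecule[docker]' ansible ansible-lint yamllint
      # Same loop as above: run every role's Molecule scenarios from the repository root
      - name: Run Molecule for every role
        run: |
          for roledir in roles/*/molecule; do
            ( cd "$(dirname "$roledir")" && molecule test )
          done

Any other CI system works the same way: install Molecule and its linters, then run the loop from the repository root.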

However, we can do even better.

When you initialize a role with Molecule or add Molecule to an existing role, there are quite a lot of files added in the molecule directory, plus a yamllint configuration in the role root. If you have many roles, you will notice that especially the molecule.yml and .yamllint files look very similar for each role.

It would be much nicer if we could keep those in a shared place.

Molecule supports a "base config": a configuration file that gets merged with the molecule.yml of your project. By default, that's ~/.config/molecule/config.yml, but Molecule will actually look for a .config/molecule/config.yml in two places: the root of the VCS repository and your HOME. And guess what? The one in the repository wins (that's not yet well documented). So by adding a .config/molecule/config.yml to the repository, we can place all shared configuration there and don't have to duplicate it in every role.

And that .yamllint file? We can also move that to the repository root and add the following to Molecule's (now shared) configuration:

lint: yamllint --config-file ${MOLECULE_PROJECT_DIRECTORY}/../../.yamllint --format parsable .

This will define the lint action as calling yamllint with the configuration stored in the repository root instead of the project directory, assuming you store your roles as roles/<rolename>/ in the repository.
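As an illustration, a shared .config/molecule/config.yml along those lines could look like the sketch below. The Docker driver, the centos:8 image and the platform name are assumptions rather than the actual forklift configuration:

---
# Base config merged into every role's molecule.yml
driver:
  name: docker
platforms:
  - name: instance
    image: centos:8
provisioner:
  name: ansible
verifier:
  name: ansible
# Lint with the yamllint configuration from the repository root,
# assuming roles are stored as roles/<rolename>/
lint: |
  set -e
  yamllint --config-file ${MOLECULE_PROJECT_DIRECTORY}/../../.yamllint --format parsable .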

And that's it. We now have a central place for our Molecule and yamllint configurations and only need to place role-specific data into the role directory.

Planet DebianDirk Eddelbuettel: drat 0.1.7: New functionality


A new version of drat arrived on CRAN yesterday. Once again, this release is mostly the work of Felix Ernst who extended some work from the previous release, and added support for repository updates (outside of package insertion) and more.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

As your mother told you: Friends don’t let friends install random git commit snapshots. Rolled-up releases it is. drat is easy to use, documented by five vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.7 (2020-07-10)

  • Functions insertPackages, archivePackages and prunePackages are now vectorised (Patrick Schratz and Felix Ernst in #93, #100).

  • The new functionality is supported by unit tests (Felix Ernst in #93, and #102 fixing #101).

  • Added new function updateRepo (Felix Ernst in #95, #97).

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianSimon Quigley: Adventures in Writing

The Linux community is a fascinating and powerful space.

When I joined the Ubuntu project approximately five years ago, I (vaguely at the time) understood that there was a profound sense of community and passion everywhere that is difficult to find in other spaces. My involvement has increased, and so has my understanding. I had thought of starting a blog as a means of conveying the information that I stumbled across, but my writing skills were very crude and regrettable, being in my early teenage years.

I have finally decided to take the leap. In this blog, I would like to occasionally provide updates on my work, either through focused deep dives on a particular topic, or broad updates on low hanging fruit that has been eliminated. While the articles may be somewhat spontaneous, I decided that an initial post was in order to explain my goals. Feel free to subscribe for more detailed posts in the future, as there are many more to come.

CryptogramFriday Squid Blogging: China Closing Its Squid Spawning Grounds

China is prohibiting squid fishing in two areas -- both in international waters -- for two seasons, to give squid time to recover and reproduce.

This is the first time China has voluntarily imposed a closed season on the high seas. Some experts regard it as an important step forward in China's management of distant-water fishing (DWF), and crucial for protecting the squid fishing industry. But others say the impact will be limited and that stronger oversight of fishing vessels is needed, or even a new fisheries management body specifically for squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


Planet DebianEnrico Zini: Wait until a command opened a file

In my last post I wrote:

The sleep 0.3s is needed because xdg-open exits right after starting the program, and when invoked by mutt it means that mutt could delete the attachment before evince has a chance to open it. I had to use the same workaround for sensible-browser, since the same happens when a browser opens a document in an existing tab. I feel like writing some wrapper about all this that forks the viewer, then waits for an IN_OPEN event on its argument via inotify before exiting.

I wrote it: https://github.com/spanezz/waitused/

$ ./waitused --help
usage: waitused [-h] path ...

Run a command exiting only after it quits and a given file has been opened and
closed

positional arguments:
  path        file to monitor
  command     command to run

optional arguments:
  -h, --help  show this help message and exit

This works around situations like mutt deleting the temporary attachment file after run-mailcap is run, while run-mailcap runs a program that backgrounds before opening its input file.

Example

waitused file.pdf xdg-open file.pdf
waitused file.pdf run-mailcap file.pdf

Example ~/.mailcap entry

application/pdf; waitused -- %s xdg-open %s; test=test -n "$DISPLAY"

Update: Teddy Hogeborn pointed out that the initial mailcap entry would fail on files starting with a dash. I added -- for waitused, but unfortunately there seems to be no way at the moment to have xdg-open open files starting with a dash (see: #964949).

Planet DebianIain R. Learmonth: Light OpenStreetMapping with GPS

Now that lockdown is lifting a bit in Scotland, I’ve been going a bit further for exercise. One location I’ve been to a few times is Tyrebagger Woods. In theory, I can walk here from my house via Brimmond Hill although I’m not yet fit enough to do that in one go.

Instead of following the main path, I took a detour along some route that looked like it wanted to be a path but it hadn’t been maintained for a while. When I decided I’d had enough of this, I looked for a way back to the main path but OpenStreetMap didn’t seem to have the footpaths mapped out here yet.

I’ve done some OpenStreetMap surveying before so I thought I’d take a look at improving this, and moving some of the tracks on the map closer to where they are in reality. In the past I’ve used OSMTracker which was great, but now I’m on iOS there doesn’t seem to be anything that matches up.

My new handheld radio, a Kenwood TH-D74, has the ability to record GPS logs so I thought I’d give this a go. It records the logs to the SD card with one file per session. It’s a very simple logger that records the NMEA strings as they are received. The only sentences I see in the file are GPGGA (Global Positioning System Fix Data) and GPRMC (Recommended Minimum Specific GPS/Transit Data).

I tried to import this directly with JOSM but it seemed to throw an error and crash. I’ve not investigated this, but I thought a way around could be to convert this to GPX format. This was easier than expected:

apt install gpsbabel
gpsbabel -i nmea -f "/sdcard/KENWOOD/TH-D74/GPS_LOG/25062020_165017.nme" \
                 -o gpx,gpxver=1.1 -F "/tmp/tyrebagger.gpx"

This imported into JOSM just fine and I was able to adjust some of the tracks to better fit where they actually are.

I’ll take the radio with me when I go in future and explore some of the other paths, to see if I can get the whole woods mapped out nicely. It is fun to just dive into the trees sometimes, along the paths that look a little forgotten and overgrown, but also it’s nice to be able to find your way out again when you get lost.

CryptogramBusiness Email Compromise (BEC) Criminal Ring

A criminal group called Cosmic Lynx seems to be based in Russia:

Dubbed Cosmic Lynx, the group has carried out more than 200 BEC campaigns since July 2019, according to researchers from the email security firm Agari, particularly targeting senior executives at large organizations and corporations in 46 countries. Cosmic Lynx specializes in topical, tailored scams related to mergers and acquisitions; the group typically requests hundreds of thousands or even millions of dollars as part of its hustles.

[...]

For example, rather than use free accounts, Cosmic Lynx will register strategic domain names for each BEC campaign to create more convincing email accounts. And the group knows how to shield these domains so they're harder to trace to the true owner. Cosmic Lynx also has a strong understanding of the email authentication protocol DMARC and does reconnaissance to assess its targets' specific system DMARC policies to most effectively circumvent them.

Cosmic Lynx also drafts unusually clean and credible-looking messages to deceive targets. The group will find a company that is about to complete an acquisition and contact one of its top executives posing as the CEO of the organization being bought. This phony CEO will then involve "external legal counsel" to facilitate the necessary payments. This is where Cosmic Lynx adds a second persona to give the process an air of legitimacy, typically impersonating a real lawyer from a well-regarded law firm in the United Kingdom. The fake lawyer will email the same executive that the "CEO" wrote to, often in a new email thread, and share logistics about completing the transaction. Unlike most BEC campaigns, in which the messages often have grammatical mistakes or awkward wording, Cosmic Lynx messages are almost always clean.

Sam VargheseRacism: Holding and Rainford-Brent do some plain speaking

Michael Anthony Holding, one of the feared West Indies pace bowlers from the 1970s and 1980s, bowled his best spell on 10 July, in front of the TV cameras.

Holding, in England to commentate on the Test series between England and the West Indies, took part in a roundtable on the Black Lives Matter protests which have been sweeping the world recently after an African-American man, George Floyd, was killed by a police officer in Minneapolis on May 25.

Holding speaks frankly. Very frankly. Along with former England cricketer Ebony Rainford-Brent, he spoke about the issues he had faced as a black man, the problems in cricket and how they could be resolved.

There was no bitterness in his voice, just audible pain and sadness. At one point, he came close to breaking down and later told one of the hosts that the memory of his mother being ostracised by her own family because she had married a very dark man had led to this.

Holding spoke of the need for education, to wipe out the centuries of conditioning that have resulted in black people knowing that white lives matter, while white people do not really care about black lives. He cited studies from American universities like Yale to make his points.

And much as white people will dismiss whatever he says, one could not escape the fact that here was a 66-year-old who had seen it all, and then some, calling for a sane solution to the ills of racism.

He provided examples of racism from each of England, South Africa and Australia. In England, he cited the case when he was trying to flag down a cab while going home with his wife-to-be – a woman of Portuguese ancestry who is white. The driver had his meter light on to indicate his cab was not occupied, but then, on seeing Holding, quickly switched it off and drove on. An Englishman of West Indian descent who recognised Holding called out to him, “Hey Mikey, you have to put her in front.” To which Holding, characteristically, replied, “I would rather walk.”

In Australia, he cited a case during a tour; the West Indies teams were always put on a single floor in any hotel they stayed in. Holding said he and three of his fast bowling colleagues were coming down in a lift when it stopped at a floor on the way down. “There was a man waiting there,” Holding said. “He looked at us and did not get into the lift. That’s fine, maybe he was intimidated by the presence of four big black men.

“But then, just before the lift doors closed, he shouted a racial epithet at us.”

And in South Africa, Holding cited a case when he and his Portuguese friend had gone to a hotel to stay. Someone came to him and was getting the details to book him in; meanwhile some other hotel staffer went to his companion and tried to book her in. “To their way of thinking, she could not possibly be with me, because she was white,” was Holding’s comment. “After all, I am black, am I not?”

Rainford-Brent, who took part in a formal video with Holding, also ventilated the problems that black women cricketers faced in England and spoke with tremendous feeling about the lack of people of colour at any level of the sport.

She was in tears occasionally as she spoke, as frankly as Holding, but again with no bitterness about the travails black people face when they join up to play cricket.

One only hopes that the talk does not end there and something is done about equality. Sky Sports, the broadcaster which ran this remarkable and unusual discussion, has pledged to put 30 million pounds into efforts to narrow the gap. Holding’s view was that if enough big companies got involved then the gap would close that much faster.

If he has hope after what he has endured, then there is no reason why the rest of us should not.

Worse Than FailureError'd: They Said the Math Checks Out!

"So...I guess...they want me to spend more?" Angela A. writes.

 

"The '[object Object]' feature must be extremely rare and expensive considering that none of the phones in the list have it!" Jonathan writes.

 

Joel T. wrote, "I was checking this Covid-19 dashboard to see if it was safe to visit my family and well, I find it really thoughtful of them to cover the Null states, where I grew up."

 

"Thankfully after my appointment, I discovered I am healthier than my doctor's survey system," writes Paul T.

 

Peter C. wrote, "I am so glad that I went to college in {Other_Region}."

 

"I tried out this Excel currency converter template and it scrapes MSN.com/Money for up to date exchange rates," Kevin J. writes, "but, I think someone updated the website without thinking about this template."

 


Planet DebianReproducible Builds (diffoscope): diffoscope 151 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 151. This version includes the following changes:

[ Chris Lamb]

* Improvements and bug fixes:

  - Pass the absolute path when extracting members from SquashFS images as we
    run the command with our working directory set to the temporary
    directory. (Closes: #964365, reproducible-builds/diffoscope#189)
  - Increase the minimum length of the output from strings(1) to 8 characters
    to avoid unnecessary diff noise. (Re. reproducible-builds/diffoscope#148)

* Logging improvements:

  - Fix the compare_files message when the file does not have a literal name.
  - Reduce potential log noise by truncating the has_some_content messages.

* Codebase changes:

  - Clarify use of a "null" diff in order to remember an exit code.
  - Don't alias a variable when we don't end up using it; use "_" instead.
  - Use a "NullChanges" file to represent missing data in the Debian package
    comparator.
  - Update some miscellaneous terms.

You can find out more by visiting the project homepage.

,

Dave HallLogging Step Functions to CloudWatch

Many AWS Services log to CloudWatch. Some do it out of the box, others need to be configured to log properly. When Amazon released Step Functions, they didn’t include support for logging to CloudWatch. In February 2020, Amazon announced Step Functions could now log to CloudWatch. Step Functions still supports CloudTrail logs, but CloudWatch logging is more useful for many teams.

Users need to configure Step Functions to log to CloudWatch. This is done on a per State Machine basis. Of course you could click around the console to enable it, but that doesn’t scale. If you use CloudFormation to manage your Step Functions, it is only a few extra lines of configuration to add the logging support.

In my example I will assume you are using YAML for your CloudFormation templates. I’ll save my “if you’re using JSON for CloudFormation you’re doing it wrong” rant for another day. This is a cut down example from one of my services:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: StepFunction with Logging Example.
Parameters:
Resources:
  StepFunctionExecRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: !Sub "states.${AWS::Region}.amazonaws.com"
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: StepFunctionExecRole
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - lambda:InvokeFunction
            - lambda:ListFunctions
            Resource: !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:my-lambdas-namespace-*"
          - Effect: Allow
            Action:
            - logs:CreateLogDelivery
            - logs:GetLogDelivery
            - logs:UpdateLogDelivery
            - logs:DeleteLogDelivery
            - logs:ListLogDeliveries
            - logs:PutResourcePolicy
            - logs:DescribeResourcePolicies
            - logs:DescribeLogGroups
            Resource: "*"
  MyStateMachineLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/stepfunction/my-step-function
      RetentionInDays: 14
  DashboardImportStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      StateMachineName: my-step-function
      StateMachineType: STANDARD
      LoggingConfiguration:
        Destinations:
          - CloudWatchLogsLogGroup:
             LogGroupArn: !GetAtt MyStateMachineLogGroup.Arn
        IncludeExecutionData: True
        Level: ALL
      DefinitionString:
        !Sub |
        {
          ... JSON Step Function definition goes here
        }
      RoleArn: !GetAtt StepFunctionExecRole.Arn

The key pieces in this example are the second statement in the IAM Role with all the logging permissions, the LogGroup defined by MyStateMachineLogGroup and the LoggingConfiguration section of the Step Function definition.

The IAM role permissions are copied from the example policy in the AWS documentation for using CloudWatch Logging with Step Functions. The CloudWatch IAM permissions model is pretty weak, so we need to grant these broad permissions.

The LogGroup definition creates the log group in CloudWatch. You can use whatever value you want for the LogGroupName. I followed the Amazon convention of prefixing everything with /aws/[service-name]/ and then appended the Step Function name. I recommend using the RetentionInDays configuration. It stops old logs sticking around forever. In my case I send all my logs to ELK, so I don’t need to retain them in CloudWatch long term.

Finally we use the LoggingConfiguration to tell AWS where we want to send our logs. You can only specify a single destination. IncludeExecutionData determines whether the inputs and outputs of each function call are logged. You should not enable this if you are passing sensitive information between your steps. The verbosity of logging is controlled by Level. Amazon has a page on Step Function log levels. For dev you probably want to use ALL to help with debugging, but in production you probably only need ERROR level logging.
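As an aside, the same logging configuration can also be applied to an existing state machine with the AWS CLI. The following is only a rough sketch, not part of the template above: the region, account ID, state machine name and log group ARN are placeholders, and the role attached to the state machine still needs the logging permissions shown earlier.

# create the log group, then point the state machine at it
aws logs create-log-group --log-group-name /aws/stepfunction/my-step-function

aws stepfunctions update-state-machine \
  --state-machine-arn "arn:aws:states:us-east-1:123456789012:stateMachine:my-step-function" \
  --logging-configuration '{
    "level": "ALL",
    "includeExecutionData": true,
    "destinations": [
      {"cloudWatchLogsLogGroup": {"logGroupArn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/stepfunction/my-step-function:*"}}
    ]
  }'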

I removed the Parameters and Output from the template. Use them as you need to.

CryptogramTraffic Analysis of Home Security Cameras

Interesting research on home security cameras with cloud storage. Basically, attackers can learn very basic information about what's going on in front of the camera, and infer when there is someone home.

News article.

Slashdot thread.

Planet DebianEnrico Zini: Mime type associations

The last step of my laptop migration was to fix mime type associations, which seem to decide which application opens a file based on whatever application was installed last, the phases of the moon, and whichever option is the most annoying.

The state of my system after a fresh install is that, for application/pdf, xdg-open (used for example by pcmanfm) runs inkscape, and run-mailcap (used for example by neomutt) runs the calibre ebook viewer.

It looks like there are at least two systems to understand, debug and fix, instead of one.

xdg-open

This comes from package xdg-utils, and works using .desktop files:

# This runs inkscape
$ xdg-open file.pdf

There is a tool called xdg-mime that queries what .desktop file is associated with a given mime type:

$ xdg-mime query default application/pdf
inkscape.desktop

You can use xdg-mime default to change an association, and it works nicely:

$ xdg-mime default org.gnome.Evince.desktop application/pdf
$ xdg-mime query default application/pdf
org.gnome.Evince.desktop

However, if you accidentally mistype the name of the .desktop file, it won't complain and it will silently reset the association to the arbitrary default:

$ xdg-mime default org.gnome.Evince.desktop application/pdf
$ xdg-mime query default application/pdf
org.gnome.Evince.desktop
$ xdg-mime default evince.desktop application/pdf
$ echo $?
0
$ xdg-mime query default application/pdf
inkscape.desktop

You can use a GUI like xfce4-mime-settings from the xfce4-settings package to perform the same kind of changes avoiding typing mistakes.

The associations seem to be saved in ~/.config/mimeapps.list
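For reference, a minimal version of that file looks roughly like this (a sketch; the groups and entries present will depend on what has been configured):

$ cat ~/.config/mimeapps.list
[Default Applications]
application/pdf=org.gnome.Evince.desktop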

run-mailcap

This comes from the package mime-support.

You can test things by running it using --norun:

$ run-mailcap --norun file.pdf
ebook-viewer file.pdf

run-mailcap uses ~/.mailcap and /etc/mailcap to map mime types to commands. This is what's in the system default:

$ grep application/pdf /etc/mailcap
application/pdf; ebook-viewer %s; test=test -n "$DISPLAY"
application/pdf; calibre %s; test=test -n "$DISPLAY"
application/pdf; gimp-2.10 %s; test=test -n "$DISPLAY"
application/pdf; evince %s; test=test -n "$DISPLAY"

To fix this, I copy-pasted the evince line into ~/.mailcap, and indeed it gets used:

$ run-mailcap --norun file.pdf
evince file.pdf

There is a /etc/mailcap.order file providing a limited way to order entries in /etc/mailcap, but it can only be manipulated system-wide, and cannot be used for user preferences.

Sadly, this means that if a package changes its mailcap invocation because of, say, a security issue in the old one, the local override will never pick up the fix.

I am really not comfortable about that. As a workaround, I put this in my ~/.mailcap:

application/pdf; xdg-open %s && sleep 0.3s; test=test -n "$DISPLAY"

The sleep 0.3s is needed because xdg-open exits right after starting the program, and when invoked by mutt it means that mutt could delete the attachment before evince has a chance to open it. I had to use the same workaround for sensible-browser, since the same happens when a browser opens a document in an existing tab. I feel like writing some wrapper around all this that forks the viewer, then waits for an IN_OPEN event on its argument via inotify before exiting.
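A rough sketch of such a wrapper, assuming inotify-tools is installed (untested, and it only waits for the first open event on the file):

#!/bin/sh
# open-and-wait: open a file with xdg-open, but only exit once something
# has actually opened it, so the caller does not delete it too early.
file="$1"
inotifywait --quiet --event open "$file" &
watcher=$!
xdg-open "$file"
wait "$watcher"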

I wonder if there is any reason run-mailcap could not be implemented as a wrapper to xdg-open.

I reported #964723 elaborating on these thoughts.

Planet DebianEnrico Zini: Laptop migration

This laptop used to be extra-flat

My laptop battery started to explode in slow motion. HP requires 10 business days to repair my laptop under warranty, and I cannot afford that length of downtime.

Alternatively, HP quoted me 375€ + VAT for on-site repairs, which I thought was very funny.

For 376.55€ + VAT, which is pretty much exactly the same amount, I bought instead a refurbished ThinkPad X240 with a dual-core i5, 8G of RAM, 250G SSD, and a 1920x1080 IPS display, to use as a spare while my laptop is being repaired. I'd like to thank HP for giving me the opportunity to own a ThinkPad.

Since I'm migrating my whole system to the spare and then (hopefully) back, I'm documenting what I need to be fully productive on new hardware.

Install Debian

A basic Debian netinst with no tasks selected is good enough to get going.

Note that if wifi worked in Debian Installer, it doesn't mean that it will work in the minimal system it installed. See here for instructions on quickly bringing up wifi on a newly installed minimal system.
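The general idea, assuming the wpasupplicant and isc-dhcp-client packages made it onto the installed system and the interface is called wlan0, is something along these lines (a sketch, not necessarily what the linked instructions do):

# as root; SSID and passphrase are placeholders
wpa_passphrase "MySSID" "my passphrase" > /etc/wpa_supplicant.conf
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
dhclient wlan0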

Copy /home

A simple tar of /home is all I needed to copy my data over.

A neat way to do it was connecting the two laptops with an ethernet cable, and using netcat:

# On the source
tar -C / -zcf - home | nc -l -p 12345 -N
# On the target
nc 10.0.0.1 12345 | tar -C / -zxf -

Since the data travel unencrypted in this way, don't do it over wifi.
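If an untrusted or wireless link is all you have, the same copy can be done over ssh instead (a sketch, assuming an ssh server is reachable on the target):

# On the source
tar -C / -zcf - home | ssh target-laptop 'tar -C / -zxf -'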

Install packages

I maintain a few simple local metapackages that depend on the packages I usually use.

I could just install those and let apt bring in their dependencies.

For the build dependencies of the programs I develop, I use mk-build-deps from the devscripts package to create metapackages that make sure they are installed.
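For reference, an invocation along these lines, run from the source tree, builds and installs such a metapackage and removes the generated .deb afterwards (a sketch; see mk-build-deps(1) for the details):

$ mk-build-deps --install --root-cmd sudo --remove debian/control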

Here's an extract from the debian/control of my personal metapackages:

Source: enrico
Section: admin
Priority: optional
Maintainer: Enrico Zini <enrico@debian.org>
Build-Depends: debhelper (>= 11)
Standards-Version: 3.7.2.1

Package: enrico
Section: admin
Architecture: all
Depends:
  mc, mmv, moreutils, powertop, syncmaildir, notmuch,
  ncdu, vcsh, ddate, jq, git-annex, eatmydata,
  vdirsyncer, khal, etckeeper, moc, pwgen
Description: Enrico's working environment

Package: enrico-devel
Section: devel
Architecture: all
Depends:
  git, python3-git, git-svn, gitk, ansible, fabric,
  valgrind, kcachegrind, zeal, meld, d-feet, flake8, mypy, ipython3,
  strace, ltrace
Description: Enrico's development environment

Package: enrico-gui
Section: x11
Architecture: all
Depends:
  xclip, gnome-terminal, qalculate-gtk, liferea, gajim,
  mumble, sm, syncthing, virt-manager
Recommends: k3b
Description: Enrico's GUI environment

Package: enrico-sanity
Section: admin
Architecture: all
Conflicts: libapache2-mod-php, libapache2-mod-php5, php5, php5-cgi, php5-fpm, libapache2-mod-php7.0, php7.0, libphp7.0-embed, libphp-embed, libphp5-embed
Description: Enrico's sanity
 Metapackage with a list of packages that I do not want anywhere near my
 system.

System-wide customizations

I tend to avoid changing system-wide configuration as much as possible, so copying over /home and installing packages takes care of 99% of my needs.

There are a few system-wide tweaks I cannot do without:

  • setup postfix to send mail using my mail server
  • copy Network Manager system connections files in /etc/NetworkManager/system-connections/
  • update-alternatives --config editor

For postfix, I have a little ansible playbook that takes care of it.

Network Manager system connections need to be copied manually: a plain copy and a systemctl restart network-manager are enough. Note that Network Manager will ignore the files unless their owner and permissions are what it expects.
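In practice that means something like the following on the new machine, once the files have been copied over (a sketch; paths as on a standard Debian system):

# as root, after copying the connection files
chown root:root /etc/NetworkManager/system-connections/*
chmod 600 /etc/NetworkManager/system-connections/*
systemctl restart network-manager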

Fine tuning

Comparing the output of dpkg --get-selections between the old and the new system might highlight packages manually installed in a hurry and not added to the metapackages.
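One way to do that comparison (a sketch; the file names are arbitrary):

# on each machine
dpkg --get-selections > selections-$(hostname)
# then, with both files in one place
diff selections-old-laptop selections-new-laptop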

Finally, what remains is fixing the sad state of mime type associations, which seem to decide which application opens a file based on whatever application was installed last, the phases of the moon, and whichever option is the most annoying.

Currently on my system, PDFs are opened in inkscape by xdg-open and in calibre by run-mailcap. Let's see how long it takes to figure this one out.

Worse Than FailureCodeSOD: Is It the Same?

A common source of bad code is when you have a developer who understands one thing very well, but is forced, either through organizational changes or the tides of history, to adapt to a new tool which they don’t understand. But a possibly more severe problem is modern developers not fully understanding why certain choices may have been made. Today’s code isn’t a WTF, it’s actually very smart.

Eric P was digging through some antique Fortran code, just exploring some retrocomputing history, and found a block which needed to check if two values were the same.

The normal way to do that in Fortran would be to use the .EQ. operator, e.g.:

LSAME = ( (LOUTP(IOUTP)).EQ.(LPHAS1(IOUTP)) )

Now, in this specific case, I happen to know that LOUTP(IOUTP) and LPHAS1(IOUTP) happen to be boolean expressions. I know this, in part, because of how the original developer actually wrote an equality comparison:

      LSAME = ((     LOUTP(IOUTP)).AND.(     LPHAS1(IOUTP)).OR.
               (.NOT.LOUTP(IOUTP)).AND.(.NOT.LPHAS1(IOUTP)) )

Now, Eric sent us two messages. In their first message:

This type of comparison appears in at least 5 different places and the result is then used in other unnecessarily complicated comparisons and assignments.

But that doesn’t tell the whole story. We need to understand the actual underlying purpose of this code. And the purpose of this block of code is to translate symbolic formula expressions to execute on Programmable Array Logic (PAL) devices.

PALs were an early form of programmable ROM, and to describe the logic you wanted them to perform, you had to give them instructions essentially in terms of gates. You’d throw a binary representation of the gate arrangements at the chip, and it would now perform computations for you.

So Eric, upon further review, followed up with a fresh message:

The program it is from was co-written by the manager of the project to create the PAL (Programmable Array Logic) device. So, of course, this is exactly, down to the hardware logic gate, how you would implement an equality comparison in a hardware PAL!
It’s all NOTs, ANDs, and ORs!

Programming is about building a model. Most of the time, we want our model to be clear to humans, and we focus on finding ways to describe that model in clear, unsurprising ways. But what’s “clear” and “unsurprising” can vary depending on what specifically we’re trying to model. Here, we’re modeling low-level hardware, really low-level, and what looks weird at first is actually pretty darn smart.

Eric also included a link to the code he was reading through, for the PAL24 Assembler.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Rondam RamblingsGame over for Hong Kong

The Washington Post reports: Early Wednesday, under a heavy police presence and before any public announcement about the matter, officials inaugurated the Office for Safeguarding National Security of the Central People’s Government in the Hong Kong Special Administrative Region at a ceremony that took place behind water-filled barricades. They played the Chinese national anthem and raised the

Worse Than FailureCodeSOD: A Private Matter

Tim Cooper was digging through the code for a trip-planning application. This particular application can plan a trip across multiple modes of transportation, from public transit to private modes, like rentable scooters or bike-shares.

This need to discuss private modes of transportation can lead to some… interesting code.

// for private: better = same
TIntSet myPrivates = getPrivateTransportSignatures(true);
TIntSet othersPrivates = other.getPrivateTransportSignatures(true);
if (myPrivates.size() != othersPrivates.size()
        || ! myPrivates.containsAll(othersPrivates)
        || ! othersPrivates.containsAll(myPrivates)) {
    return false;
}

This block of code seems to worry a lot about the details of othersPrivates, which frankly is a bad look. Mind your own business, code. Mind your own business.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

,

Planet DebianNoah Meyerhans: Setting environment variables for gnome-session

Am I missing something obvious? When did this get so hard?

In the old days, you configured your desktop session on a Linux system by editing the .xsession file in your home directory. The display manager (login screen) would invoke the system-wide xsession script, which would either defer to your personal .xsession script or set up a standard desktop environment. You could put whatever you want in the .xsession script, and it would be executed. If you wanted a specific window manager, you’d run it from .xsession. Start emacs or a browser or an xterm or two? .xsession. It was pretty easy, and super flexible.

For the past 25 years or so, I’ve used X with an environment started via .xsession. Early on it was fvwm with some programs, then I replaced fvwm with Window Maker (before that was even its name!), then switched to KDE. More recently (OK, like 10 years ago) I gradually replaced KDE with awesome and various custom widgets. Pretty much everything was based on a .xsession script, and that was fine. One particularly nice thing about it was that I could keep .xsession and any related helper programs in a git repository and manage changes over time.

More recently I decided to give Wayland and GNOME an honest look. This has mostly been fine, but everything I’ve been doing in .xsession is suddenly useless. OK, fine, progress is good. I’ll just use whatever new mechanisms exist. How hard can it be?

OK, so here we go. I am running GNOME. This isn't so bad. Alt+F2 brings up the "Run Command" dialog. It's a different keystroke than what I'm used to, but I can adapt. (Obviously I can reconfigure the key binding, and maybe someday I will, but that's not the point here.) I have some executables in ~/bin. Oops, the run command dialog can't find them. No problem, I just need to update the PATH variable that it sees. Hmmm… So how does one do that, anyway? GNOME has a help system, but searching that doesn't reveal anything. But that's fine, maybe it's inherited from the parent process. But there's no xsession script equivalent, since this isn't X anymore at all. The familiar stuff in /etc/X11/Xsession is no longer used. What's the equivalent in Wayland? Turns out, there isn't a shell script at all anymore, at least not in how Wayland and GNOME interact in Debian's configuration, which seems fairly similar to how anybody else would set this up. The GNOME session runs from a systemd-managed user session.

Digging into some web search results suggests that systemd provides a mechanism for setting some environment variables for services started by the user instance of systemd. OK, so let's create some files in ~/.config/environment.d and we should be good. Except no, this isn't working. I can set some variables, but something is overriding PATH. I can create this file:

$ cat ~/.config/environment.d/01_path.conf
USER_INITIAL_PATH=${PATH}
PATH=${HOME}/bin:${HOME}/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER_CUSTOM_PATH=${PATH}

After logging in, the “Run a command” dialog still doesn’t see my PATH. So I use Alt+F2 and sh -c "env > /tmp/env" to capture the environment, and this is what I see:

USER_INITIAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PATH=/usr/local/bin:/usr/bin:/bin:/usr/games
USER_CUSTOM_PATH=/home/noahm/bin:/home/noahm/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

So, my environment.d file is there, and it’s getting looked at, but something else is clobbering my PATH later in the startup process. But what? Where? Why? The systemd docs don’t indicate that there’s anything special about PATH, and nothing in /lib/systemd/user-environment-generators/ seems to treat it specially. The string “PATH” doesn’t appear in /lib/systemd/user/ either. Looking for the specific value that’s getting assigned to PATH in /etc shows the only occurrence of it being in /etc/zsh/zshenv, so maybe that’s where it’s coming from? But that should only get set there if it’s otherwise unset or otherwise very minimally set. So I still have no idea where it’s coming from.
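As an aside, the systemd user manager can also be asked directly what it thinks the environment is, which should reflect whatever the environment generators produced (a quick check, nothing more):

$ systemctl --user show-environment | grep PATH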

OK, so ignoring where my custom value is getting overridden, maybe what’s configured in /lib/systemd/user will point me in the right direction. systemd --user status suggests that the interesting part of my session is coming from gnome-shell-wayland.service. Can we use a standard systemd drop-in as documented in systemd.unit(5)? It turns out that we can. This file sets things up the way I want:

$ cat .config/systemd/user/gnome-shell-wayland.service.d/path.conf
[Service]
Environment=PATH=%h/bin:%h/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Is that right? It really doesn’t feel ideal to me. Systemd’s Environment directive can’t reference existing environment variables, and I can’t use conditionals to do things like add a directory to the PATH only if it exists, so it’s still a functional regression from what we had before. But at least it’s a text file, edited by hand, trackable in git, so that’s not too bad.
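For completeness: after creating or editing such a drop-in, the user manager needs to reload its configuration, and the change only takes effect the next time the service starts (in practice, at the next login):

$ systemctl --user daemon-reload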

There are some people out there who hate systemd, and will cite this as an illustration of why. However, I’m not one of those people, and I very much like systemd as an init system. I’d be happy to throw away sysvinit scripts forever, but I’m not quite so happy with the state of .xsession’s replacements. Despite the similarities, I don’t think .xsession is entirely the same as SysV-style init scripts. The services running on a system are vastly more important than my personal .xsession, and systemd is far better at managing them than the pile of shell scripts used to set things up under sysvinit. Further, systemd the init system maintains compatibility with init scripts, so if you really want to keep using them, you can. As far as I can tell, though, systemd the user session manager does not seem to maintain compatibility with .xsession scripts, and that’s unfortunate.

I still haven’t figured out what was overriding the ~/.config/environment.d/ setting. Any ideas?

Planet DebianDirk Eddelbuettel: RcppSimdJson 0.1.0: Now on Windows, With Parsers and Faster Still!

A smashing new RcppSimdJson release 0.1.0 is out, containing several small updates to upstream simdjson (now at 0.4.6), in part triggered by very exciting work by Brendan, who added actual parsers from file and string and, together with Daniel upstream, worked really hard to make Windows builds as well as complete upstream tests on our beloved (ahem) MinGW platform possible. So this version will, once the builders have caught up, give everybody on Windows a binary with a JSON parser running circles around the (arguably more feature-rich and possibly easier-to-use) alternatives. Dave just tweeted a benchmark snippet by Brendan; the full set is at the bottom of our issue ticket for this release.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators, which in its upstream release 0.4.0 improved once more (also see the following point releases). Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

As mentioned, this release expands the reach of the package to Windows, and adds new user-facing functions. A big thanks for most of this is owed to Brendan, so buy him a drink if you run across him. The full NEWS entry follows.

Changes in version 0.1.0 (2020-07-07)

  • Upgraded to simdjson 0.4.1 which adds upstream Windows support (Dirk in #27 closing #26 and #14, plus extensive work by Brendan helping upstream with mingw tests).

  • Upgraded to simdjson 0.4.6 with further upstream improvements (Dirk in #30).

  • Change Travis CI to build matrix over g++ 7, 8, 9, and 10 (Dirk in #31; and also Brendan in #32).

  • New JSON functions fparse and fload (Brendan in #32, closing #18 and #10).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: AsioHeaders 1.16.1-1 on CRAN

An updated version of the AsioHeaders package arrived on CRAN today (after a few days of “rest” in the incoming directory of CRAN). Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release brings a new upstream version. Its changes required a corresponding change in one of (only) three reverse depends, which delayed the CRAN admission by a few days.

Changes in version 1.16.1-1 (2020-06-28)

  • Upgraded to Asio 1.16.1 (Dirk in #5).

  • Updated README.md with standard set of badges

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramIoT Security Principles

The BSA -- also known as the Software Alliance, formerly the Business Software Alliance (which explains the acronym) -- is an industry lobbying group. They just published "Policy Principles for Building a Secure and Trustworthy Internet of Things."

They call for:

  • Distinguishing between consumer and industrial IoT.
  • Offering incentives for integrating security.
  • Harmonizing national and international policies.
  • Establishing regularly updated baseline security requirements.

As with pretty much everything else, you can assume that if an industry lobbying group is in favor of it, then it doesn't go far enough.

And if you need more security and privacy principles for the IoT, here's a list of over twenty.

Planet DebianGunnar Wolf: Raspberry Pi 4, now running your favorite distribution!

Great news, great news! New images available! Grab them while they are hot!

With lots of help (say, all of the heavy lifting) from the Debian Raspberry Pi Maintainer Team, we have finally managed to provide support for auto-building and serving bootable minimal Debian images for the Raspberry Pi 4 family of single-board, cheap, small, hacker-friendly computers!

The Raspberry Pi 4 was released close to a year ago, and is a very major bump in the Raspberry lineup; it took us this long because we needed to wait until all of the relevant bits entered Debian (mostly the kernel bits). The images are shipping a kernel from our Unstable branch (currently, 5.7.0-2), and are less tested and more likely to break than our regular, clean-Stable images. Nevertheless, we do expect them to be useful for many hackers –and even end-users– throughout the world.

The images we are generating are very minimal: they carry basically a minimal Debian install. Once downloaded, of course, you can install whatever your heart desires (because… Face it, if your heart desires it, it must be free and of high quality. It must already be in Debian!)

Oh — And very important: if you get the 8GB model (currently the top-of-the-line RPi4), it will still not have USB support, due to a change in its memory layout (that means, no local keyboard/mouse ☹). We are working on getting it ironed out!

Worse Than FailureCodeSOD: Your Personal Truth

There are still some environments where C may not have easy access to a stdbool header file. That's easy to fix, of course. The basic pattern is to typedef an integer type as a boolean type, and then define some symbols for true and false. It's a pretty standard pattern, three lines of code, and unless you insist that FILE_NOT_FOUND is a boolean value, it's pretty hard to mess up.

Julien H was compiling some third-party C code, specifically in Visual Studio 2010, and as it turns out, VS2010 doesn't support C99, and thus doesn't have a stdbool. But, as stated, it's an easy pattern to implement, so the third party library went and implemented it:

#ifndef _STDBOOL_H_VS2010
#define _STDBOOL_H_VS2010
typedef int bool;
static bool true = 1;
static bool false = 0;
#endif

We've asked many times, what is truth? In this case, we admit a very post-modern reality: what is "true" is not constant and unchanging, it cannot merely be enumerated, it must be variable. Truth can change, because here we've defined true and false as variables. And more than that, each person must identify their own truth, and by making these variables static, what we guarantee is that every .c file in our application can have its own value for truth. The static keyword, applied to a global variable, guarantees that each .c file gets its own scope.

I can only assume this header was developed by Jacques Derrida.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

,

Cory DoctorowFull Employment

My latest Locus column is “Full Employment,” in which I forswear “Fully Automated Luxury Communism” as totally incompatible with the climate emergency, which will consume 100%+ of all human labor for centuries to come.

https://locusmag.com/2020/07/cory-doctorow-full-employment/

This fact is true irrespective of any breakthroughs in AI OR geoengineering. Technological unemployment is vastly oversold and overstated (for example, that whole thing about truck drivers is bullshit).

https://journals.sagepub.com/doi/10.1177/0019793919858079

But even if we do manage to automate away all of jobs, the climate emergency demands unimaginably labor intensive tasks for hundreds of years – jobs like relocating every coastal city inland, or caring for hundreds of millions of refugees.

Add to those: averting the extinctions of thousands of species, managing wave upon wave of zoonotic and insect-borne plagues, dealing with wildfires and tornados, etc.

And geoengineering won’t solve this: we’ve sunk a lot of heat into the oceans. It’s gonna warm them up. That’s gonna change the climate. It’s not gonna be good. Heading this off doesn’t just involve repealing thermodynamics – it also requires a time-machine.

But none of this stuff is insurmountable – it’s just hard. We CAN do this stuff. If you were wringing your hands about unemployed truckers, good news! They’ve all got jobs moving thousands of cities inland!

It’s just (just!) a matter of reorienting our economy around preserving our planet and our species.

And yeah, that’s hard, too – but if “the economy” can’t be oriented to preserving our species, we need a different economy.

Period.

Rondam RamblingsMark your calendars: I am debating Kent Hovind on July 9

I've recently taken up a new hobby of debating young-earth creationists on YouTube.  (It's a dirty job, but somebody's gotta do it.)  I've done two of them so far [1][2], both on a creationist channel called Standing For Truth.  My third debate will be against Kent Hovind, one of the more prominent and, uh, outspoken members of the YEC community.  In case you haven't heard of him, here's a sample

Planet DebianDirk Eddelbuettel: Rcpp 1.0.5: Several Updates

rcpp logo

Right on the heels of the news of 2000 CRAN packages using Rcpp (and also hitting 12.5% of CRAN packages, or one in eight), we are happy to announce release 1.0.5 of Rcpp. Since the ten-year anniversary and the 1.0.0 release in November 2018, we have been sticking to a four-month release cycle. The last release has, however, left us with a particularly bad taste due to some rather peculiar interactions with a very small (but ever so vocal) portion of the user base. So going forward, we will change two things. First off, we reiterate that we have already made rolling releases. Each minor snapshot of the main git branch gets a point release. Between release 1.0.4 and this 1.0.5 release, there were in fact twelve of those. Each and every one of these was made available via the drat repo, and we will continue to do so going forward. Releases to CRAN, however, are real work. If they then end up with as much nonsense as the last release 1.0.4, we think it is appropriate to slow things down some more, so we intend to now switch to a six-month cycle. As mentioned, interim releases are always just one install.packages() call with a properly set repos argument away.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2002 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 203 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at around one million downloads per month.

This release features again a number of different pull requests by different contributors covering the full range of API improvements, attributes enhancements, changes to Sugar and helper functions, extended documentation as well as continuous integration deployment. See the list below for details.

Changes in Rcpp patch release version 1.0.5 (2020-07-01)

  • Changes in Rcpp API:

    • The exception handler code in #1043 was updated to ensure proper include behavior (Kevin in #1047 fixing #1046).

    • A missing Rcpp_list6 definition was added to support R 3.3.* builds (Davis Vaughan in #1049 fixing #1048).

    • Missing Rcpp_list{2,3,4,5} definitions were added to the Rcpp namespace (Dirk in #1054 fixing #1053).

    • A further update corrected the header include and provided a missing else branch (Mattias Ellert in #1055).

    • Two more assignments are protected with Rcpp::Shield (Dirk in #1059).

    • One call to abs is now properly namespaced with std:: (Uwe Korn in #1069).

    • String object memory preservation was corrected/simplified (Kevin in #1082).

  • Changes in Rcpp Attributes:

    • Empty strings are not passed to R CMD SHLIB which was seen with R 4.0.0 on Windows (Kevin in #1062 fixing #1061).

    • The short_file_name() helper function is safer with respect to temporaries (Kevin in #1067 fixing #1066, and #1071 fixing #1070).

  • Changes in Rcpp Sugar:

    • Two sample() objects are now standard vectors and not R_alloc created (Dirk in #1075 fixing #1074).
  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() adjusts for a (documented) change in R 4.0.0 (Dirk in #1088 fixing #1087).
  • Changes in Rcpp Documentation:

    • The pdf file of the earlier introduction is again typeset with bibliographic information (Dirk).

    • A new vignette describing how to package C++ libraries has been added (Dirk in #1078 fixing #1077).

  • Changes in Rcpp Deployment:

    • Travis CI unit tests now run a matrix over the versions of R also tested at CRAN (rel/dev/oldrel/oldoldrel), and coverage runs in parallel for a net speed-up (Dirk in #1056 and #1057).

    • The exceptions test is now partially skipped on Solaris as it already is on Windows (Dirk in #1065).

    • The default CI runner was upgraded to R 4.0.0 (Dirk).

    • The CI matrix spans R 3.5, 3.6, r-release and r-devel (Dirk).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2455 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramThiefQuest Ransomware for the Mac

There's a new ransomware for the Mac called ThiefQuest or EvilQuest. It's hard to get infected:

For your Mac to become infected, you would need to torrent a compromised installer and then dismiss a series of warnings from Apple in order to run it. It's a good reminder to get your software from trustworthy sources, like developers whose code is "signed" by Apple to prove its legitimacy, or from Apple's App Store itself. But if you're someone who already torrents programs and is used to ignoring Apple's flags, ThiefQuest illustrates the risks of that approach.

But it's nasty:

In addition to ransomware, ThiefQuest has a whole other set of spyware capabilities that allow it to exfiltrate files from an infected computer, search the system for passwords and cryptocurrency wallet data, and run a robust keylogger to grab passwords, credit card numbers, or other financial information as a user types it in. The spyware component also lurks persistently as a backdoor on infected devices, meaning it sticks around even after a computer reboots, and could be used as a launchpad for additional, or "second stage," attacks. Given that ransomware is so rare on Macs to begin with, this one-two punch is especially noteworthy.

Planet DebianJonathan Dowland: Review: Roku Express

I don't generally write consumer reviews, here or elsewhere; but I have been so impressed by this one I wanted to mention it.

For Holly's birthday this year, taking place under Lockdown, we decided to buy a year's subscription to "Disney+". Our current TV receiver (A Humax Freesat box) doesn't support it so I needed to find some other way to get it onto the TV.

After a short bit of research, I bought the "Roku Express" streaming media player. This is the most basic streamer that Roku make, bottom of their range. For a little bit more money you can get a model which supports 4K (although my TV obviously doesn't: it, and the basic Roku, top out at 1080p) and a bit more gets you a "stick" form-factor and a Bluetooth remote (rather than line-of-sight IR).

I paid £20 for the most basic model and it Just Works. The receiver is very small but sits comfortably next to my satellite receiver-box. I don't have any issues with line-of-sight for the IR remote (and I rely on a regular IR remote for the TV itself of course). It supports Disney+, but also all the other big name services, some of which we already use (Netflix, YouTube, BBC iPlayer) and some of which we didn't, since it was too awkward to access them (Google Play, Amazon Prime Video). It has now largely displaced the FreeSat box for accessing streaming content because it works so well and everything is in one place.

There's a phone App that remote-controls the box and works even better than the physical remote: it can offer a full phone-keyboard at times when you need to input text, and can mute the TV audio and put it out through headphones attached to the phone if you want.

My aging Plasma TV suffers from burn-in from static pictures. If left paused for a duration the Roku goes to a screensaver that keeps the whole frame moving. The FreeSat doesn't do this. My Blu Ray player does, but (I think) it retains some static elements.

Planet DebianReproducible Builds: Reproducible Builds in June 2020

Welcome to the June 2020 report from the Reproducible Builds project. In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

What are reproducible builds?

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security.

But whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.

News

The GitHub Security Lab published a long article on the discovery of a piece of malware designed to backdoor open source projects that used the build process and its resulting artifacts to spread itself. In the course of their analysis and investigation, the GitHub team uncovered 26 open source projects that were backdoored by this malware and were actively serving malicious code. (Full article)

Carl Dong from Chaincode Labs uploaded a presentation on Bitcoin Build System Security and reproducible builds to YouTube.

The app intended to trace infection chains of Covid-19 in Switzerland published information on how to perform a reproducible build.

The Reproducible Builds project has received funding in the past from the Open Technology Fund (OTF) to reach specific technical goals, as well as to enable the project to meet in-person at our summits. The OTF has actually also assisted countless other organisations that promote transparent, civil society as well as those that provide tools to circumvent censorship and repressive surveillance. However, the OTF has now been threatened with closure. (More info)

It was noticed that Reproducible Builds was mentioned in the book End-user Computer Security by Mark Fernandes (published by WikiBooks) in the section titled Detection of malware in software.

Lastly, reproducible builds and other ideas around software supply chain were mentioned in a recent episode of the Ubuntu Podcast in a wider discussion about the Snap and application stores (at approx 16:00).


Distribution work

In the ArchLinux distribution, a goal to remove .doctrees from installed files was created via Arch’s ‘TODO list’ mechanism. These .doctree files are caches generated by the Sphinx documentation generator when developing documentation so that Sphinx does not have to reparse all input files across runs. They should not be packaged, especially as they lead to the package being unreproducible as their pickled format contains unreproducible data. Jelle van der Waa and Eli Schwartz submitted various upstream patches to fix projects that install these by default.

Dimitry Andric was able to determine why the reproducibility status of FreeBSD’s base.txz depended on the number of CPU cores, attributing it to an optimisation made to the Clang C compiler []. After further detailed discussion on the FreeBSD bug it was possible to get the binaries reproducible again [].

For the GNU Guix operating system, Vagrant Cascadian started a thread about collecting reproducibility metrics and Jan “janneke” Nieuwenhuizen posted that they had further reduced their “bootstrap seed” to 25% which is intended to reduce the amount of code to be audited to avoid potential compiler backdoors.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:

Debian

Holger Levsen filed three bugs (#961857, #961858 & #961859) against the reproducible-check tool that reports on the reproducible status of installed packages on a running Debian system. They were subsequently all fixed by Chris Lamb [][][].

Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of 100s of packages that use the CMake build system which led to a number of tests and next steps. []

Chris Lamb contributed to a conversation regarding the nondeterministic execution order of Debian maintainer scripts that results in the arbitrary allocation of UNIX group IDs, referencing the Tails operating system’s approach to this []. Vagrant Cascadian also added to a discussion regarding verification formats for reproducible builds.

47 reviews of Debian packages were added, 37 were updated and 69 were removed this month adding to our knowledge about identified issues. Chris Lamb identified and classified a new uids_gids_in_tarballs_generated_by_cmake_kde_package_app_templates issue [] and updated the paths_vary_due_to_usrmerge as deterministic issue, and Vagrant Cascadian updated the cmake_rpath_contains_build_path and gcc_captures_build_path issues. [][][].

Lastly, Debian Developer Bill Allombert started a mailing list thread regarding setting the -fdebug-prefix-map command-line argument via an environment variable and Holger Levsen also filed three bugs against the debrebuild Debian package rebuilder tool (#961861, #961862 & #961864).

Development

On our website this month, Arnout Engelen added a link to our Mastodon account [] and moved the SOURCE_DATE_EPOCH git log example to another section []. Chris Lamb also limited the number of news posts to avoid showing items from (for example) 2017 [].

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Mattia Rizzolo bumped the debhelper compatibility level to 13 [] and adjusted a related dependency to avoid potential circular dependency [].

Upstream work

The Reproducible Builds project attempts to fix unreproducible packages and we try to send all of our patches upstream. This month, we wrote a large number of such patches, including:

Bernhard M. Wiedemann also filed reports for frr (build fails on single-processor machines), ghc-yesod-static/git-annex (a filesystem ordering issue) and ooRexx (ASLR-related issue).

diffoscope

diffoscope is our in-depth ‘diff-on-steroids’ utility which helps us diagnose reproducibility issues in packages. It does not define reproducibility; instead, it provides helpful and human-readable guidance for packages that are not reproducible, rather than relying on essentially-useless binary diffs.

This month, Chris Lamb uploaded versions 147, 148 and 149 to Debian and made the following changes:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (ie. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. []
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). []
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an ‘info’ level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. []
    • Don’t mask an existing test. []
  • Codebase improvements:

    • Replace obscure references to WF with “Wagner-Fischer” for clarity. []
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of ‘missing’ files. []
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. []
    • Drop a large number of unused imports. [][][][][]
    • Make many code sections more Pythonic. [][][][]
    • Prevent some variable aliasing issues. [][][]
    • Use some tactical f-strings to tidy up code [][] and remove explicit u"unicode" strings [].
    • Refactor a large number of routines for clarity. [][][][]

trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb also corrected the location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called which had caused an accidental resource exhaustion. (#12)

In addition Jean-Romain Garnier made the following changes:

  • Fix the --new-file option when comparing directories by merging DirectoryContainer.compare and Container.compare. (#180)
  • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
  • Make child pages open in new window in the --html-dir presenter format. []
  • Improve the diffs in the --html-dir format. [][]

Lastly, Daniel Fullmer fixed the Coreboot filesystem comparator [] and Mattia Rizzolo prevented warnings from the tlsh fuzzy-matching library during tests [] and tweaked the build system to remove an unwanted .build directory []. For the GNU Guix distribution Vagrant Cascadian updated the version of diffoscope to version 147 [] and later 148 [].

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. Amongst many other tasks, this tracks the status of our reproducibility efforts across many distributions as well as identifies any regressions that have been introduced. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Prevent bogus failure emails from rsync2buildinfos.debian.net every night. []
    • Merge a fix from David Bremner’s database of .buildinfo files to include a fix regarding comparing source vs. binary package versions. []
    • Only run the Debian package rebuilder job twice per day. []
    • Increase bullseye scheduling. []
  • System health status page:

    • Add a note displaying whether a node needs to be rebooted for a kernel upgrade. []
    • Fix sorting order of failed jobs. []
    • Expand footer to link to the related Jenkins job. []
    • Add archlinux_html_pages, openwrt_rebuilder_today and openwrt_rebuilder_future to ‘known broken’ jobs. []
    • Add HTML <meta> header to refresh the page every 5 minutes. []
    • Count the number of ignored jobs [], ignore permanently ‘known broken’ jobs [] and jobs on ‘known offline’ nodes [].
    • Only consider the ‘known offline’ status from Git. []
    • Various output improvements. [][]
  • Tools:

    • Switch URLs for the Grml Live Linux and PureOS package sets. [][]
    • Don’t try to build a disorderfs Debian source package. [][][]
    • Stop building diffoscope as we are moving this to Salsa. [][]
    • Merge several “is diffoscope up-to-date on every platform?” test jobs into one [] and fail less noisily if the version in Debian cannot be determined [].

In addition: Marcus Hoffmann was added as a maintainer of the F-Droid reproducible checking components [], Jelle van der Waa updated the “is diffoscope up-to-date in every platform” check for Arch Linux and diffoscope [], Mattia Rizzolo backed up a copy of a “remove script” run on the Codethink-hosted ‘jump server‘ [] and Vagrant Cascadian temporarily disabled the fixfilepath on bullseye, to get better data about the ftbfs_due_to_f-file-prefix-map categorised issue.

Lastly, the usual build node maintenance was performed by Holger Levsen [][], Mattia Rizzolo [] and Vagrant Cascadian [][][][][].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:


This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

CryptogramiPhone Apps Stealing Clipboard Data

iOS apps are repeatedly reading clipboard data, which can include all sorts of sensitive information.

While Haj Bakry and Mysk published their research in March, the invasive apps made headlines again this week with the developer beta release of iOS 14. A novel feature Apple added provides a banner warning every time an app reads clipboard contents. As large numbers of people began testing the beta release, they quickly came to appreciate just how many apps engage in the practice and just how often they do it.

This YouTube video, which has racked up more than 87,000 views since it was posted on Tuesday, shows a small sample of the apps triggering the new warning.

EDITED TO ADD (7/6): LinkedIn and Reddit are doing this.

Worse Than FailureCodeSOD: Classic WTF: Dimensioning the Dimension

It was a holiday weekend in the US, so we're taking a little break. Yes, I know that most people took Friday off, but as this article demonstrates, dates remain hard. Original -- Remy

It's not too uncommon to see a Java programmer write a method to get the name of a month based on the month number. Sure, month name formatting is built in via SimpleDateFormat, but the documentation can often be hard to read. And since there's really no other place to find the answer, it's excusable that a programmer will just write a quick method to do this.

I have to say though, Robert Cooper's colleague came up with a very interesting way of doing this: adding an[other] index to an array ...

public class DateHelper
{
  private static final String[][] months = 
    { 
      { "0", "January" }, 
      { "1", "February" }, 
      { "2", "March" }, 
      { "3", "April" }, 
      { "4", "May" }, 
      { "5", "June" }, 
      { "6", "July" }, 
      { "7", "August" }, 
      { "8", "September" }, 
      { "9", "October" }, 
      { "10", "November" }, 
      { "11", "December" }
    };

  public static String getMonthDescription(int month)
  {
    for (int i = 0; i < months.length; i++)
    {
      if (Integer.parseInt(months[i][0]) == month)
      {
          return months[i][1];
      }
    }
    return null;
  }
}

If you enjoyed Friday's post (A Pop-up Potpourri), make sure to check out the replies. There were some great error messages posted.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 09)

Here’s part nine of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Planet DebianEnrico Zini: COVID-19 and Capitalism

  • If the Reopen America protests seem a little off to you, that's because they are. In this video we're going to talk about astroturfing and how insidious it i...
  • Techdirt has just written about the extraordinary legal action taken against a company producing Covid-19 tests. Sadly, it's not the only example of some individuals putting profits before people. Here's a story from Italy, which is...
  • Berlin is trying to stop Washington from persuading a German company seeking a coronavirus vaccine to move its research to the United States.
  • Amazon cracked down on coronavirus price gouging. Now, while the rest of the world searches, some sellers are holding stockpiles of sanitizer and masks.
  • And 3D-printed valve for breathing machine sparks legal threat
  • Ischgl, an Austrian ski resort, has achieved tragic international fame: hundreds of tourists are believed to have contracted the coronavirus there and taken it home with them. The Tyrolean state government is now facing serious criticism. EURACTIV Germany reports.
  • We are seeing how the monopolistic repair and lobbying practices of medical device companies are making our response to the coronavirus pandemic harder.
  • Las Vegas, Nevada has come under criticism after reportedly setting up a temporary homeless shelter in a parking lot complete with social distancing barriers.

Planet DebianThorsten Alteholz: My Debian Activities in June 2020

FTP master

This month I accepted 377 packages and rejected 30. The overall number of packages that got accepted was 411.

Debian LTS

This was my seventy-second month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30 hours. During that time I did LTS uploads of:

  • [DLA 2255-1] libtasn1-6 security update for one CVE
  • [DLA 2256-1] libtirpc security update for one CVE
  • [DLA 2257-1] pngquant security update for one CVE
  • [DLA 2258-1] zziplib security update for eight CVEs
  • [DLA 2259-1] picocom security update for one CVE
  • [DLA 2260-1] mcabber security update for one CVE
  • [DLA 2261-1] php5 security update for one CVE

I started to work on curl as well but did not upload a fixed version, so this has to go to ELTS now.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was my twenty-fourth ELTS month.

Unfortunately, in this last month of Wheezy ELTS I did not find any package with a CVE that I could fix, so during my small allocated time I didn’t upload anything.

But at least I did some days of frontdesk duties and updated my working environment for the new ELTS Jessie.

Other stuff

I uploaded a new upstream version of …

Planet DebianRussell Coker: Debian S390X Emulation

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X – the latest 64-bit version of the IBM mainframe architecture. Here’s how to do it; I tested on a host running Debian/Unstable, but Buster should work in the same way.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp

Then visit the Debian Netinst page [1] to download the S390X net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The qemu-system-misc package has the program for emulating a full S390X system (among many others), while the qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment); you need the latter to run debootstrap. The following commands should be most of what you need.

# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap

# List the support for different binary formats
update-binfmts --display

# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1

# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for minimal install do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade

# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd

# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x

# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Some of the above should be treated more as pseudo-code in shell-script form than as an exact way of doing things. While you can copy and paste all of the above into a command line and have a reasonable chance of it working, I think it would be better to look at each command and decide whether it’s right for you and whether you need to alter it slightly for your system.

To run qemu as non-root you need a helper program with extra capabilities to set up bridged networking. I’ve included that in the explanation because I think it’s important to have all security options enabled.

The “-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0” part is to give entropy to the VM from the host; otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the “ccw” is the difference).

I’m not sure if “noresume” on the kernel command line is required, but it doesn’t do any harm. The “net.ifnames=0” stops systemd from renaming Ethernet devices. For the virtual networking the “ccw” again is a difference from other architectures.

Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses-based text display; you can then log in as root and should be able to run “dhclient eth0” and other similar commands to set up networking and allow ssh logins.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume root=/dev/vda ro" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Here is a slightly more complete QEMU command. It has 2 block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the “lockdown=confidentiality” kernel security option even though it’s not supported in 4.19 kernels; it doesn’t do any harm, and when I upgrade systems to newer kernels I won’t have to remember to add it.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -drive format=raw,file=/vmswap/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Try It Out

I’ve got an S390X system online for a while: “ssh root@s390x.coker.com.au” with password “SELINUX” to try it out.

PPC64

I’ve tried running a PPC64 virtual machine. I did the same things to set it up and then tried launching it, with the following result:

qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"

Above is the minimal qemu command that I’m using. Below is the result; it stops after the “4.” from “4.19.0-9”. Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimum needed to demonstrate the problem.

  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
Linux ppc64le
#1 SMP Debian 4.

The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package.

Any suggestions on what I should try next would be appreciated.

,

Krebs on SecurityE-Verify’s “SSN Lock” is Nothing of the Sort

One of the most-read advice columns on this site is a 2018 piece called “Plant Your Flag, Mark Your Territory,” which tried to impress upon readers the importance of creating accounts at websites like those at the Social Security Administration, the IRS and others before crooks do it for you. A key concept here is that these services only allow one account per Social Security number — which for better or worse is the de facto national identifier in the United States. But KrebsOnSecurity recently discovered that this is not the case with all federal government sites built to help you manage your identity online.

A reader who was recently the victim of unemployment insurance fraud said he was told he should create an account at the Department of Homeland Security‘s myE-Verify website, and place a lock on his Social Security number (SSN) to minimize the chances that ID thieves might abuse his identity for employment fraud in the future.

DHS’s myE-Verify homepage.

According to the website, roughly 600,000 employers at over 1.9 million hiring sites use E-Verify to confirm the employment eligibility of new employees. E-Verify’s consumer-facing portal myE-Verify lets users track and manage employment inquiries made through the E-Verify system. It also features a “Self Lock” designed to prevent the misuse of one’s SSN in E-Verify.

Enabling this lock is supposed to mean that for the next year thereafter, if an unauthorized individual attempts to fraudulently use a SSN for employment authorization, he or she cannot use the SSN in E-Verify, even if the SSN is that of an employment authorized individual. But in practice, this service may actually do little to deter ID thieves from impersonating you to a potential employer.

At the request of the reader who reached out (and in the interest of following my own advice to plant one’s flag), KrebsOnSecurity decided to sign up for a myE-Verify account. After verifying my email address, I was asked to pick a strong password and select a form of multi-factor authentication (MFA). The most secure MFA option offered (a one-time code generated by an app like Google Authenticator or Authy) was already pre-selected, so I chose that.

The site requested my name, address, SSN, date of birth and phone number. I was then asked to select five questions and answers that might be asked if I were to try to reset my password, such as “In what city/town did you meet your spouse,” and “What is the name of the company of your first paid job.” I chose long, gibberish answers that had nothing to do with the questions (yes, these password questions are next to useless for security and frequently are the cause of account takeovers, but we’ll get to that in a minute).

Password reset questions selected, the site proceeded to ask four, multiple-guess “knowledge-based authentication” questions to verify my identity. The U.S. Federal Trade Commission‘s primer page on preventing job-related ID theft says people who have placed a security freeze on their credit files with the major credit bureaus will need to lift or thaw the freeze before being able to answer these questions successfully at myE-Verify. However, I did not find that to be the case, even though my credit file has been frozen with the major bureaus for years.

After successfully answering the KBA questions (the answer to each was “none of the above,” by the way), the site declared I’d successfully created my account! I could then see that I had the option to place a “Self Lock” on my SSN within the E-Verify system.

Doing so required me to pick three more challenge questions and answers. The site didn’t explain why it was asking me to do this, but I assumed it would prompt me for the answers in the event that I later chose to unlock my SSN within E-Verify.

After selecting and answering those questions and clicking the “Lock my SSN” button, the site generated an error message saying something went wrong and it couldn’t proceed.

Alas, logging out and logging back in again showed that the site did in fact proceed and that my SSN was locked. Joy.

But I still had to know one thing: Could someone else come along pretending to be me and create another account using my SSN, date of birth and address but under a different email address? Using a different browser and Internet address, I proceeded to find out.

Imagine my surprise when I was able to create a separate account as me with just a different email address (once again, the correct answers to all of the KBA questions was “none of the above”). Upon logging in, I noticed my SSN was indeed locked within E-Verify. So I chose to unlock it.

Did the system ask any of the challenge questions it had me create previously? Nope. It just reported that my SSN was now unlocked. Logging out and logging back in to the original account I created (again under a different IP and browser) confirmed that my SSN was unlocked.

ANALYSIS

Obviously, if the E-Verify system allows multiple accounts to be created using the same name, address, phone number, SSN and date of birth, this is less than ideal and somewhat defeats the purpose of creating one for the purposes of protecting one’s identity from misuse.

Lest you think your SSN and DOB are somehow private information, you should know this static data about U.S. residents has been exposed many times over in countless data breaches, and in any case these digits are available for sale on most Americans via Dark Web sites for roughly the bitcoin equivalent of a fancy caffeinated drink at Starbucks.

Being unable to proceed through knowledge-based authentication questions without first unfreezing one’s credit file with one or all of the big three credit bureaus (Equifax, Experian and TransUnion) can actually be a plus for those of us who are paranoid about identity theft. I couldn’t find any mention on the E-Verify site of which company or service it uses to ask these questions, but the fact that the site doesn’t seem to care whether one has a freeze in place is troubling.

And when the correct answer to all of the KBA questions that do get asked is invariably “none of the above,” that somewhat lessens the value of asking them in the first place. Maybe that was just the luck of the draw in my case, but also troubling nonetheless. Either way, these KBA questions are notoriously weak security because the answers to them often are pulled from records that are public anyway, and can sometimes be deduced by studying the information available on a target’s social media profiles.

Speaking of silly questions, relying on “secret questions” or “challenge questions” as an alternative method of resetting one’s password is severely outdated and insecure. A 2015 study by Google titled “Secrets, Lies and Account Recovery” (PDF) found that secret questions generally offer a security level that is far lower than just user-chosen passwords. Also, the idea that an account protected by multi-factor authentication could be undermined by successfully guessing the answer(s) to one or more secret questions (answered truthfully and perhaps located by thieves through mining one’s social media accounts) is bothersome.

Finally, the advice given to the reader whose inquiry originally prompted me to sign up at myE-Verify doesn’t seem to have anything to do with preventing ID thieves from fraudulently claiming unemployment insurance benefits in one’s name at the state level. KrebsOnSecurity followed up with four different readers who left comments on this site about being victims of unemployment fraud recently, and none of them saw any inquiries about this in their myE-Verify accounts after creating them. Not that they should have seen signs of this activity in the E-Verify system; I just wanted to emphasize that one seems to have little to do with the other.

Planet DebianDirk Eddelbuettel: Rcpp now used by 2000 CRAN packages–and one in eight!

2000 Rcpp packages

As of yesterday, Rcpp stands at exactly 2000 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages in early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018, and then 1750 packages last August. The chart extends to the very beginning via manually compiled data from CRANberries, checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent in the summer of 2016, nine percent in mid-December 2016, cracked ten percent in the summer of 2017 and eleven percent in 2018. We have now passed 12.5 percent—so one in every eight CRAN packages depends on Rcpp. Stunning. There is more detail in the chart: how CRAN seems to be pushing back more and removing packages more aggressively (which my CRANberries tracks, though not in as much detail as it could), and how the growth of Rcpp seems to be slowing somewhat, both outright and even more so as a proportion of CRAN – as one would expect of a growth curve.

To mark the occasion, I sent out two tweets yesterday: first a shorter one with “just the numbers”, followed by a second one also containing the few calculation steps. The screenshot from the second one is below.

2000 Rcpp packages

2000 user packages is pretty mind-boggling. We can look at the progression of CRAN itself, compiled by Henrik in a series of posts and emails to the main development mailing list. Not that long ago CRAN itself had only 1000 packages, then 5000, then 10000, and here we are at just over 16000 with Rcpp at 12.5% and still growing (though maybe more slowly). Amazeballs.

The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramEncroChat Hacked by Police

French police hacked EncroChat secure phones, which are widely used by criminals:

Encrochat's phones are essentially modified Android devices, with some models using the "BQ Aquaris X2," an Android handset released in 2018 by a Spanish electronics company, according to the leaked documents. Encrochat took the base unit, installed its own encrypted messaging programs which route messages through the firm's own servers, and even physically removed the GPS, camera, and microphone functionality from the phone. Encrochat's phones also had a feature that would quickly wipe the device if the user entered a PIN, and ran two operating systems side-by-side. If a user wanted the device to appear innocuous, they booted into normal Android. If they wanted to return to their sensitive chats, they switched over to the Encrochat system. The company sold the phones on a subscription based model, costing thousands of dollars a year per device.

This allowed them and others to investigate and arrest many:

Unbeknownst to Mark, or the tens of thousands of other alleged Encrochat users, their messages weren't really secure. French authorities had penetrated the Encrochat network, leveraged that access to install a technical tool in what appears to be a mass hacking operation, and had been quietly reading the users' communications for months. Investigators then shared those messages with agencies around Europe.

Only now is the astonishing scale of the operation coming into focus: It represents one of the largest law enforcement infiltrations of a communications network predominantly used by criminals ever, with Encrochat users spreading beyond Europe to the Middle East and elsewhere. French, Dutch, and other European agencies monitored and investigated "more than a hundred million encrypted messages" sent between Encrochat users in real time, leading to arrests in the UK, Norway, Sweden, France, and the Netherlands, a team of international law enforcement agencies announced Thursday.

EncroChat learned about the hack, but didn't know who was behind it.

Going into full-on emergency mode, Encrochat sent a message to its users informing them of the ongoing attack. The company also informed its SIM provider, Dutch telecommunications firm KPN, which then blocked connections to the malicious servers, the associate claimed. Encrochat cut its own SIM service; it had an update scheduled to push to the phones, but it couldn't guarantee whether that update itself wouldn't be carrying malware too. That, and maybe KPN was working with the authorities, Encrochat's statement suggested (KPN declined to comment). Shortly after Encrochat restored SIM service, KPN removed the firewall, allowing the hackers' servers to communicate with the phones once again. Encrochat was trapped.

Encrochat decided to shut itself down entirely.

Lots of details about the hack in the article. Well worth reading in full.

The UK National Crime Agency called it Operation Venetic: "46 arrests, and £54m criminal cash, 77 firearms and over two tonnes of drugs seized so far."

Many more news articles. EncroChat website. Slashdot thread. Hacker News threads.

,

CryptogramFriday Squid Blogging: Strawberry Squid

Pretty.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sam VargheseDavid Warner must pay for his sins. As everyone else does

What does one make of the argument that David Warner, who was behind the ball tampering scandal in South Africa in 2018, was guilty of a lesser mistake than Ben Stokes, who indulged in public fights? And the argument that since Stokes has been made England captain for the series against the West Indies, Warner, who committed what is called a lesser sin, should also be in line for the role of Australian skipper?

The suggestion has been made by Peter Lalor, a senior cricket writer at The Australian, that Warner has paid a bigger price for past mistakes than Stokes. Does that argument really hold water?

Stokes was involved in a fracas outside a nightclub in Bristol a few years back and escaped tragedy and legal issues. He got into a brawl and was lucky to get off without a prison term.

But that had no connection to the game of cricket. And when we talk of someone bringing the game into disrepute, such incidents are not in the frame.

Had Stokes indulged in such immature behaviour on the field of play or insulted spectators who were at a game, then we would have to criticise the England board for handing him the mantle of leadership.

Warner brought the game into disrepute. He hatched a plot to use sandpaper in order to get the ball to swing, then shamefully recruited the youngest player in the squad, rookie Cameron Bancroft, to carry out his plan, and now expects to be forgiven and given a chance to lead the national team.

Really? Lalor argues that the ball tampering did not hurt anyone and the umpires did not even have to change the ball. Such is the level of morality we have come to, where arguments that have little ballast are advanced because nationalistic sentiments come into the picture.

It is troubling that as senior a writer as Lalor would seek to advance such an argument, when someone has clearly violated the spirit of the game. Doubtless there will be cynics who poke fun at any suggestion that cricket is still a gentleman’s game, but without those myths that surround this pursuit, would it still have its appeal?

The short answer to that is a resounding no.

Lalor argues that Stokes’ fate would have been different had he been an Australian. I doubt that very much, because given the licence extended to Australian sports stars to behave badly, his indulgences would have been overlooked. The word used to excuse him would have been “larrikinism”.

But Warner cheated. And the Australian public, no matter what their shortcomings, do not like cheats.

Unfortunately, at a pivotal moment during the cricket team’s South African tour, this senior member could only think of cheating to win. That is sad, unfortunate, and even tragic. It speaks of a big moral chasm somewhere.

But once one has done the crime, one must do the time. Arguing as Lalor does, that both Steve Smith, the captain at the time, and Bancroft got away with no leadership bans, does not carry any weight.

The man who planned the crime was nailed with the heaviest punishment. And it is doubtful whether anyone who has a sense of justice would argue against that.

Worse Than FailureError'd: Take a Risk on NaN

"Sure, I know how long the free Standard Shipping will take, but maybe, just maybe, if I choose Economy, my package will have already arrived! Or never," Philip G. writes.

 

"To be honest, I would love to hear how a course on guitar will help me become certified on AWS!" Kevin wrote.

 

Gergő writes, "Hooray! I'm going to be so productive for the next 0 days!"

 

"I guess that inbox count is what I get for using Yahoo mail?" writes Becky R.

 

Marc W. wrote, "Try all you want, PDF Creator, but you'll never sweet talk me with your 'great' offer!"

 

Mark W. wrote, "My neighborhood has a personality split, but at least they're both Pleasant."

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

CryptogramThe Security Value of Inefficiency

For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that's a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that's all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.

But inefficiency is essential security, as the COVID-19 pandemic is teaching us. All of the overcapacity that has been squeezed out of our healthcare system; we now wish we had it. All of the redundancy in our food production that has been consolidated away; we want that, too. We need our old, local supply chains -- not the single global ones that are so fragile in this crisis. And we want our local restaurants and businesses to survive, not just the national chains.

We have lost much inefficiency to the market in the past few decades. Investors have become very good at noticing any fat in every system and swooping down to monetize those redundant assets. The winner-take-all mentality that has permeated so many industries squeezes any inefficiencies out of the system.

This drive for efficiency leads to brittle systems that function properly when everything is normal but break under stress. And when they break, everyone suffers. The less fortunate suffer and die. The more fortunate are merely hurt, and perhaps lose their freedoms or their future. But even the extremely fortunate suffer -- maybe not in the short term, but in the long term from the constriction of the rest of society.

Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They're caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.

I don't simply mean that we need to make our food production, or healthcare system, or supply chains sloppy and wasteful. We need a certain kind of inefficiency, and it depends on the system in question. Sometimes we need redundancy. Sometimes we need diversity. Sometimes we need overcapacity.

The market isn't going to supply any of these things, least of all in a strategic capacity that will result in resilience. What's necessary to make any of this work is regulation.

First, we need to enforce antitrust laws. Our meat supply chain is brittle because there are limited numbers of massive meatpacking plants -- now disease factories -- rather than lots of smaller slaughterhouses. Our retail supply chain is brittle because a few national companies and websites dominate. We need multiple companies offering alternatives to a single product or service. We need more competition, more niche players. We need more local companies, more domestic corporate players, and diversity in our international suppliers. Competition provides all of that, while monopolies suck that out of the system.

The second thing we need is specific regulations that require certain inefficiencies. This isn't anything new. Every safety system we have is, to some extent, an inefficiency. This is true for fire escapes on buildings, lifeboats on cruise ships, and multiple ways to deploy the landing gear on aircraft. Not having any of those things would make the underlying systems more efficient, but also less safe. It's also true for the internet itself, originally designed with extensive redundancy as a Cold War security measure.

With those two things in place, the market can work its magic to provide for these strategic inefficiencies as cheaply and as effectively as possible. As long as there are competitors who are vying with each other, and there aren't competitors who can reduce the inefficiencies and undercut the competition, these inefficiencies just become part of the price of whatever we're buying.

The government is the entity that steps in and enforces a level playing field instead of a race to the bottom. Smart regulation addresses the long-term need for security, and ensures it's not continuously sacrificed to short-term considerations.

We have largely been content to ignore the long term and let Wall Street run our economy as efficiently as it can. That's no longer sustainable. We need inefficiency -- the right kind in the right way -- to ensure our security. No, it's not free. But it's worth the cost.

This essay previously appeared in Quartz.

Worse Than FailureABCD

As is fairly typical in our industry, Sebastian found himself working as a sub-contractor to a sub-contractor to a contractor to a big company. In this case, it was IniDrug, a pharmaceutical company.

Sebastian was building software that would be used at various steps in the process of manufacturing, which meant he needed to spend a fair bit of time in clean rooms, and on air-gapped networks, to prevent trade secrets from leaking out.

Like a lot of large companies, they had very formal document standards. Every document going out needed to have the company logo on it, somewhere. This meant all of the regular employees had the IniDrug logo in their email signatures, e.g.:

Bill Lumbergh
Senior Project Lead
  _____       _ _____                   
 |_   _|     (_|  __ \                  
   | |  _ __  _| |  | |_ __ _   _  __ _ 
   | | | '_ \| | |  | | '__| | | |/ _` |
  _| |_| | | | | |__| | |  | |_| | (_| |
 |_____|_| |_|_|_____/|_|   \__,_|\__, |
                                   __/ |
                                  |___/ 

At least, they did until Sebastian got an out of hours, emergency call. While they absolutely were not set up for remote work, Sebastian could get webmail access. And in the webmail client, he saw:

Bill Lumbergh
Senior Project Lead
ABCD

At first, Sebastian assumed Bill had screwed up his sigline. Or maybe the attachment broke? But as Sebastian hopped on an email chain, he noticed a lot of ABCDs. Then someone sent out a Word doc (because why wouldn’t you catalog your emergency response in a Word document?), and in the space where it usually had the IniDrug logo, it instead had “ABCD”.

The crisis resolved itself without any actual effort from Sebastian or his fellow contractors, but they had to reply to a few emails just to show that they were “pigs and not chickens”: they were committed to quality software. The next day, Sebastian mentioned the ABCD weirdness.

“I saw that too. I wonder what the deal was?” his co-worker Joanna said.

They pulled up the same document on his work computer; the logo displayed correctly. He clicked on it, and saw the insertion point blinking back at him. Then he glanced at the formatting toolbar and saw “IniDrug Logo” as the active font.

Puzzled, he selected the logo and changed the font. “ABCD” appeared.

IniDrug had a custom font made, hacked so that if you typed ABCD the resulting output would look like the IniDrug logo. That was great, if you were using a computer with the font installed, or if you remembered to make sure your word processor was embedding all your weird custom fonts.

Which also meant a bunch of outside folks were interacting with IniDrug employees, wondering why on Earth they all had “ABCD” in their siglines. Sebastian and Joanna got a big laugh about it, and shared the joke with their fellow contractors. Helping the new contractors discover this became a rite of passage. When contractors left for other contracts, they’d tell their peers, “It was great working at ABCD, but it’s time that I moved on.”

There were a lot of contractors, all chuckling about this, and one day in a shared break room, a bunch of T-Shirts appeared: plain white shirts with “ABCD” written on them in Arial.

That, as it turned out, was the bridge too far, and it got the attention of someone who was a regular IniDrug employee.

To the Contracting Team:
In the interests of maintaining a professional environment, we will be updating the company dress code. Shirts decorated with the text “ABCD” are prohibited, and should not be worn to work. If you do so, you will be asked to change or conceal the offending content.

Bill Lumbergh
Senior Project Lead
ABCD

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Krebs on SecurityRansomware Gangs Don’t Need PR Help

We’ve seen an ugly trend recently of tech news stories and cybersecurity firms trumpeting claims of ransomware attacks on companies large and small, apparently based on little more than the say-so of the ransomware gangs themselves. Such coverage is potentially quite harmful and plays deftly into the hands of organized crime.

Often the rationale behind couching these events as newsworthy is that the attacks involve publicly traded companies or recognizable brands, and that investors and the public have a right to know. But absent any additional information from the victim company or their partners who may be affected by the attack, these kinds of stories and blog posts look a great deal like ambulance chasing and sensationalism.

Currently, more than a dozen ransomware crime gangs have erected their own blogs to publish sensitive data from victims. A few of these blogs routinely issue self-serving press releases, some of which gallingly refer to victims as “clients” and cast themselves in a beneficent light. Usually, the blog posts that appear on ransom sites are little more than a teaser — screenshots of claimed access to computers, or a handful of documents that expose proprietary or financial information.

The goal behind the publication of these teasers is clear, and the ransomware gangs make no bones about it: To publicly pressure the victim company into paying up. Those that refuse to be extorted are told to expect that huge amounts of sensitive company data will be published online or sold on the dark web (or both).

Emboldened by their successes, several ransomware gangs recently have started demanding two ransoms: One payment to secure a digital key that can unlock files, folders and directories encrypted by their malware, and a second to avoid having any stolen information published or shared with others.

KrebsOnSecurity has sought to highlight ransomware incidents at companies whose core business involves providing technical services to others — particularly managed service providers that have done an exceptionally poor job communicating about the attack with their customers.

Overall, I’ve tried to use each story to call attention to key failures that frequently give rise to ransomware infections, and to offer information about how other companies can avoid a similar fate.

But simply parroting what professional extortionists have posted on their blog about victims of cybercrime smacks of providing aid and comfort to an enemy that needs and deserves neither.

Maybe you disagree, dear readers? Feel free to sound off in the comments below.

,

Rondam RamblingsI Will Remember Ricky Ray Rector

I've always been very proud of the fact that I came out in support of gay marriage before it was cool.   I have been correspondingly chagrined at my failure to speak out sooner and more vociferously about the shameful and systemic mistreatment of people of color, and black people in particular, in the U.S.  For what it's worth, I hereby confess my sins, acknowledge my white privilege, and

CryptogramSecuring the International IoT Supply Chain

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that's selling the stuff to consumers. There's a lot of detail between here and there, though, and it's all in the paper.

We also wrote a Lawfare post:

...we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

Worse Than FailureCodeSOD: locurlicenseucesss

The past few weeks, I’ve been writing software for a recording device. This is good, because when I’m frustrated by the bugs I put in the code and I start cursing at it, it’s not venting, it’s testing.

There are all sorts of other little things we can do to vent. Imagine, if you will, you find yourself writing an if with an empty body, but an else clause that does work. You’d probably be upset at yourself. You might be stunned. You might be so tired it feels like a good idea at the time. You might be deep in the throes of “just. work. goddammit”. Regardless of the source of that strain, you need to let it out somewhere.

Emmanuelle found this is a legacy PHP codebase:

if(mynum($Q)){
    // Congratulations, you has locurlicenseucesss asdfghjk
} else {
    header("Location: feed.php");
}

I think being diagnosed with locurlicenseucesss should not be a cause for congratulations, but maybe I’m the one that’s confused.

Emmanuelle adds: “Honestly, I have no idea how this happened.”

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

CryptogramAndroid Apps Stealing Facebook Credentials

Google has removed 25 Android apps from its store because they steal Facebook credentials:

Before being taken down, the 25 apps were collectively downloaded more than 2.34 million times.

The malicious apps were developed by the same threat group and despite offering different features, under the hood, all the apps worked the same.

According to a report from French cyber-security firm Evina shared with ZDNet today, the apps posed as step counters, image editors, video editors, wallpaper apps, flashlight applications, file managers, and mobile games.

The apps offered a legitimate functionality, but they also contained malicious code. Evina researchers say the apps contained code that detected what app a user recently opened and had in the phone's foreground.


Krebs on SecurityCOVID-19 ‘Breach Bubble’ Waiting to Pop?

The COVID-19 pandemic has made it harder for banks to trace the source of payment card data stolen from smaller, hacked online merchants. On the plus side, months of quarantine have massively decreased demand for account information that thieves buy and use to create physical counterfeit credit cards. But fraud experts say recent developments suggest both trends are about to change — and likely for the worse.

The economic laws of supply and demand hold just as true in the business world as they do in the cybercrime space. Global lockdowns from COVID-19 have resulted in far fewer fraudsters willing or able to visit retail stores to use their counterfeit cards, and the decreased demand has severely depressed prices in the underground for purloined card data.

An ad for a site selling stolen payment card data, circa March 2020.

That’s according to Gemini Advisory, a New York-based cyber intelligence firm that closely tracks the inventories of dark web stores trafficking in stolen payment card data.

Stas Alforov, Gemini’s director of research and development, said that since the beginning of 2020 the company has seen a steep drop in demand for compromised “card present” data — digits stolen from hacked brick-and-mortar merchants with the help of malicious software surreptitiously installed on point-of-sale (POS) devices.

Alforov said the median price for card-present data has dropped precipitously over the past few months.

“Gemini Advisory has seen over 50 percent decrease in demand for compromised card present data since the mandated COVID-19 quarantines in the United States as well as the majority of the world,” he told KrebsOnSecurity.

Meanwhile, the supply of card-present data has remained relatively steady. Gemini’s latest find — a 10-month-long card breach at dozens of Chicken Express locations throughout Texas and other southern states that the fast-food chain first publicly acknowledged today after being contacted by this author — saw an estimated 165,000 cards stolen from eatery locations recently go on sale at one of the dark web’s largest cybercrime bazaars.

“Card present data supply hasn’t wavered much during the COVID-19 period,” Alforov said. “This is likely due to the fact that most of the sold data is still coming from breaches that occurred in 2019 and early 2020.”

A lack of demand for and steady supply of stolen card-present data in the underground has severely depressed prices since the beginning of the COVID-19 pandemic. Image: Gemini Advisory

Naturally, crooks who ply their trade in credit card thievery also have been working from home more throughout the COVID-19 pandemic. That means demand for stolen “card-not-present” data — customer payment information extracted from hacked online merchants and typically used to defraud other e-commerce vendors — remains high. And so have prices for card-not-present data: Gemini found prices for this commodity actually increased slightly over the past few months.

Andrew Barratt is an investigator with Coalfire, the cyber forensics firm hired by Chicken Express to remediate the breach and help the company improve security going forward. Barratt said there’s another curious COVID-19 dynamic going on with e-commerce fraud recently that is making it more difficult for banks and card issuers to trace patterns in stolen card-not-present data back to hacked web merchants — particularly smaller e-commerce shops.

“One of the concerns that has been expressed to me is that we’re getting [fewer] overlapping hotspots,” Barratt said. “For a lot of the smaller, more frequently compromised merchants there has been a large drop off in transactions. Whilst big e-commerce has generally done okay during the COVID-19 pandemic, a number of more modest sized or specialty online retailers have not had the same access to their supply chain and so have had to close or drastically reduce the lines they’re selling.”

Banks routinely take groups of customer cards that have experienced fraudulent activity and try to see if some or all of them were used at the same merchant during a similar timeframe, a basic anti-fraud process known as “common point of purchase” or CPP analysis. But ironically, this analysis can become more challenging when there are fewer overall transactions going through a compromised merchant’s site, Barratt said.

“With a smaller transactional footprint means less Common Point of Purchase alerts and less data to work on to trigger a forensic investigation or fraud alert,” Barratt said. “It does also mean less fraud right now – which is a positive. But one of the big concerns that has been raised to us as investigators — literally asking if we have capacity for what’s coming — has been that merchants are getting compromised by ‘lie in wait’ type intruders.”

Barratt says there’s a suspicion that hackers may have established beachheads [breachheads?] in a number of these smaller online merchants and are simply biding their time. If and when transaction volumes for these merchants do pick up, the concern is then hackers may be in a better position to mix the sale of cards stolen from many hacked merchants and further confound CPP analysis efforts.

“These intruders may have a beachhead in a number of small and/or middle market e-commerce entities and they’re just waiting for the transaction volumes to go back up again and they’ve suddenly got the capability to have skimmers capturing lots of card data in the event of a sudden uptick in consumer spending,” he said. “They’d also have a diverse portfolio of compromise so could possibly even evade common point of purchase detection for a while too. Couple all of that with major shopping cart platforms going out of support (like Magento 1 this month) and furloughed IT and security staff, and there’s a potentially large COVID-19 breach bubble waiting to pop.”

With a majority of payment cards issued in the United States now equipped with a chip that makes the cards difficult and expensive for thieves to clone, cybercriminals have continued to focus on hacking smaller merchants that have not yet installed chip card readers and are still swiping the cards’ magnetic stripe at the register.

Barratt said his company has tied the source of the breach to malware known as “PwnPOS,” an ancient strain of point-of-sale malware that first surfaced more than seven years ago, if not earlier.

Chicken Express CEO Ricky Stuart told KrebsOnSecurity that apart from “a handful” of locations his family owns directly, most of his 250 stores are franchisees that decide on their own how to secure their payment operations. Nevertheless, the company is now forced to examine each store’s POS systems to remediate the breach.

Stuart blamed the major point-of-sale vendors for taking their time in supporting and validating chip-capable payment systems. But when asked how many of the company’s 250 stores had chip-capable readers installed, Stuart said he didn’t know. Ditto for the handful of stores he owns directly.

“I don’t know how many,” he said. “I would think it would be a majority. If not, I know they’re coming.”

Worse Than FailureCodeSOD: The Data Class

There has been a glut of date-related code in the inbox lately, so it’s always a treat when TRWTF isn’t how they fail to handle dates but something else entirely. For example, imagine you’re browsing a PHP codebase and see something like:

$fmtedDate = data::now();

You’d instantly know that something was up, just by seeing a class named data. That’s what got Vania’s attention. She dug in, and found a few things.

First, clearly, data is a terrible name for a class. It’d be a terrible name even if it were a data access layer, but it has a now method, which tells us that it’s not just handling data.

But it’s not handling data at all. data is more precisely a utility class- the dumping ground for methods that the developer couldn’t come up with a way to organize. It contains 58 methods, 38 of which are 100% static methods, 7 of which should have been declared static but weren’t, and the remainder are actually interacting with $this. All in all, this class must be incredibly “fun”.

Let’s look at the now implementation:

class data
{
    // ...

    public static function now()
    {
        return date('Y', time())."-".date('m', time())."-".date('d')." ".date('H').":".	date('i').":".	date('s');
    }
}

Finally, we get to your traditional bad date handling code. Instead of just using a date format string to get the desired output, we manually construct the string by invoking date a bunch of times. There are some “interesting” choices here: you’ll note that the PHP date function accepts an optional timestamp parameter (so you can format an arbitrary point in time), and sometimes they pass in the result of calling time() and sometimes they don’t. This is mostly not a problem, since date will invoke time itself if you don’t hand it one, so that’s just unnecessary.

But Vania adds some detail:

Because of the multiple calls to time() this code contains a subtle race condition. If it is called at, say, 2019-12-31 23:59:59.999, the date('Y', time()) part will evaluate to “2019”. If the time now ticks over to 2020-01-01 00:00:00.000, the next date() call will return a month value of “01” (and so on for the rest of the expression). The result is a timestamp of “2019-01-01 00:00:00”, which is off by a year. A similar issue happens at the end of every month, day, hour, and minute; i.e. every minute there is an opportunity for the result to be off by a minute.

It’s easy to fix, of course: you could just return date('Y-m-d H:i:s');, which does exactly the same thing, but correctly. Unfortunately, Vania has this to add:

Unfortunately there is no budget for making this kind of change to the application. Also, its original authors seem to have been a fan of “code reuse” by copy/paste: There are four separate instances of this now() function in the codebase, all containing exactly the same code.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 08)

Here’s part eight of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Worse Than FailureAnother Immovable Spreadsheet


Steve had been working as a web developer, but his background was in mathematics. Therefore, when a job opened up for internal transfer to the "Statistics" team, he jumped on it and was given the job without too much contest. Once there, he was able to meet the other "statisticians:" a group of well-meaning businessfolk with very little mathematical background who used The Spreadsheet to get their work done.

The Spreadsheet was Excel, of course. To enter data, you had to cut and paste columns from various tools into one of the many sheets, letting the complex array of formulas calculate the numbers needed for the quarterly report. Shortly before Steve's transfer, there had apparently been a push to automate some of the processes with SAS, a tool much more suited to this sort of work than a behemoth of an Excel spreadsheet.

A colleague named Stu showed Steve the ropes. Stu admitted there was indeed a SAS process that claimed to perform the same functions as The Spreadsheet, but nobody was using it because nobody trusted the numbers that came out of it.

Never the biggest fan of Excel, Steve decided to throw his weight behind the SAS process. He ran the SAS algorithms multiple times, giving the outputs to Stu to compare against the Excel spreadsheet output. The first three iterations, everything seemed to match exactly. On the fourth, however, Stu told him that one of the outputs was off by 0.2.

To some, this was vindication of The Spreadsheet; after all, why would they need some fancy-schmancy SAS process when Excel worked just fine? Steve wasn't so sure. An error in the code might lead to a big discrepancy, but this sounded more like a rounding error than anything else.

Steve tracked down the relevant documentation for Excel and SAS, and found that both used 64-bit floating point numbers on the 32-bit Windows machines that the calculations were run on. Given that all the calculations were addition and multiplication with no exponents, the mistake had to be in either the Excel code or the SAS code.

Steve stepped through the SAS process, ensuring that the intermediate outputs in SAS matched the accompanying cells in the Excel sheet. When he'd just about given up hope, he found the issue: a ROUND command, right at the end of the chain where it didn't belong.
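To see how a stray rounding step produces exactly this kind of mismatch, here is a minimal sketch (in Python with invented figures, not the actual SAS or Excel formulas from this system): once anything in the chain is rounded, the output can agree with the full-precision pipeline on some runs and drift by a figure like Stu's 0.2 on others.

values = [12.34, 7.84, 3.14, 9.24, 5.94]

exact_total = sum(values)                         # about 38.5, kept at full 64-bit precision
rounded_total = sum(round(v, 1) for v in values)  # about 38.3, each figure rounded along the way

# The two totals match whenever the rounding errors happen to cancel out,
# and quietly drift apart (here by roughly 0.2) when they do not.
print(exact_total, rounded_total, exact_total - rounded_total)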

All of the SAS code in the building had been written by a guy named Brian. Even after Steve had taken over writing SAS, people still sought out Brian for updates and queries, despite his having other work to do.

Steve had no choice but to do the same. He stopped by Brian's cube, knocking perfunctorily before asking, "Why is there a ROUND command at the end of the SAS?"

"There isn't. What?" replied Brian, clearly startled out of his thinking trance.

"No, look, there is," replied Steve, waving a printout. "Why is it there?"

"Oh. That." Brian shrugged. "Excel was displaying only one decimal place for some godforsaken reason, and they wanted the SAS output to be exactly the same."

"I should've known," said Steve, disgustedly. "Stu told me it matched, but it can't have been matching exactly this whole time, not with rounding in there."

"Sure, man. Whatever."

Sadly, Steve was transferred again before the next quarterly run—this time to a company doing proper statistical analysis, not just calculating a few figures for the quarterly presentation. He instructed Stu how to check to fifteen decimal places, but didn't hold out much hope that SAS would come to replace the Excel sheet.

Steve later ran into Stu at a coffee hour. He asked about how the replacement was going.

"I haven't had time to check the figures from SAS," Stu replied. "I'm too busy with The Spreadsheet as-is."


,

CryptogramCOVID-19 Risks of Flying

I fly a lot. Over the past five years, my average speed has been 32 miles an hour. That all changed mid-March. It's been 105 days since I've been on an airplane -- longer than any other time in my adult life -- and I have no future flights scheduled. This is all a prelude to saying that I have been paying a lot of attention to the COVID-related risks of flying.

We know a lot more about how COVID-19 spreads than we did in March. The "less than six feet, more than ten minutes" model has given way to a much more sophisticated model involving airflow, the level of virus in the room, and the viral load in the person who might be infected.

Regarding airplanes specifically: on the whole, they seem safer than many other group activities. Of all the research about contact tracing results I have read, I have seen no stories of a sick person on an airplane infecting other passengers. There are no superspreader events involving airplanes. (That did happen with SARS.) It seems that the airflow inside the cabin really helps.

Airlines are trying to make things better: blocking middle seats, serving less food and drink, trying to get people to wear masks. (This video is worth watching.) I’ve started to see airlines requiring masks and banning those who won’t, and not just strongly encouraging them. (If mask wearing is treated the same as seat belt wearing, it will make a huge difference.) Finally, there are a lot of dumb things that airlines are doing.

This article interviewed 511 epidemiologists, and the general consensus was that flying is riskier than getting a haircut but less risky than eating in a restaurant. I think that most of the risk is pre-flight, in the airport: crowds at the security checkpoints, gates, and so on. And that those are manageable with mask wearing and situational awareness. So while I am not flying yet, I might be willing to soon. (It doesn't help that I get a -1 on my COVID saving throw for type A blood, and another -1 for male pattern baldness. On the other hand, I think I get a +3 Constitution bonus. Maybe, instead of sky marshals we can have high-level clerics on the planes.)

And everyone: wear a mask, and wash your hands.

EDITED TO ADD (6/27): Airlines are starting to crowd their flights again.

,

Krebs on SecurityRussian Cybercrime Boss Burkov Gets 9 Years

A well-connected Russian hacker once described as “an asset of supreme importance” to Moscow was sentenced on Friday to nine years in a U.S. prison after pleading guilty to running a site that sold stolen payment card data, and to administering a highly secretive crime forum that counted among its members some of the most elite Russian cybercrooks.

Alexei Burkov, seated second from right, attends a hearing in Jerusalem in 2015. Photo: Andrei Shirokov / Tass via Getty Images.

Aleksei Burkov of St. Petersburg, Russia admitted to running CardPlanet, a site that sold more than 150,000 stolen credit card accounts, and to being a founder of DirectConnection — a closely guarded underground community that attracted some of the world’s most-wanted Russian hackers.

As KrebsOnSecurity noted in a November 2019 profile of Burkov’s hacker nickname ‘k0pa,’ “a deep dive into the various pseudonyms allegedly used by Burkov suggests this individual may be one of the most connected and skilled malicious hackers ever apprehended by U.S. authorities, and that the Russian government is probably concerned that he simply knows too much.”

Burkov was arrested in 2015 on an international warrant while visiting Israel, and over the ensuing four years the Russian government aggressively sought to keep him from being extradited to the United States.

When Israeli authorities turned down requests to send him back to Russia — supposedly to face separate hacking charges there — the Russians then imprisoned Israeli citizen Naama Issachar on trumped-up drug charges in a bid to trade prisoners. Nevertheless, Burkov was extradited to the United States in November 2019. Russian President Vladimir Putin pardoned Issachar in January 2020, just hours after Burkov pleaded guilty.

Arkady Bukh is a New York attorney who has represented a number of accused and convicted cybercriminals from Eastern Europe and Russia. Bukh said he suspects Burkov did not cooperate with Justice Department investigators apart from agreeing not to take the case to trial.

“Nine years is a huge sentence, and the government doesn’t give nine years to defendants who cooperate,” Bukh said. “Also, the time span [between Burkov’s guilty plea and sentencing] was very short.”

DirectConnection was something of a Who’s Who of major cybercriminals, and many of its most well-known members have likewise been extradited to and prosecuted by the United States. Those include Sergey “Fly” Vovnenko, who was sentenced to 41 months in prison for operating a botnet and stealing login and payment card data. Vovnenko also served as administrator of his own cybercrime forum, which he used in 2013 to carry out a plan to have Yours Truly framed for heroin possession.

As noted in last year’s profile of Burkov, an early and important member of DirectConnection was a hacker who went by the moniker “aqua” and ran the banking sub-forum on Burkov’s site. In December 2019, the FBI offered a $5 million bounty for information leading to the arrest and conviction of aqua, who’s been identified as Maksim Viktorovich Yakubets. The Justice Department says Yakubets/aqua ran a transnational cybercrime organization called “Evil Corp.” that stole roughly $100 million from victims.

In this 2011 screenshot of DirectConnection, we can see the nickname of “aqua,” who ran the “banking” sub-forum on DirectConnection. Aqua, a.k.a. Maksim V. Yakubets of Russia, now has a $5 million bounty on his head from the FBI.

According to a statement of facts in Burkov’s case, the author of the infamous SpyEye banking trojan — Aleksandr “Gribodemon” Panin — was personally vouched for by Burkov. Panin was sentenced in 2016 to more than nine years in prison.

Other top DirectConnection members include convicted credit card fraudsters Vladislav “Badb” Horohorin and Sergey “zo0mer” Kozerev, as well as the infamous spammer and botnet master Peter “Severa” Levashov.

Also on Friday, the Justice Department said it obtained a guilty plea from another top cybercrime forum boss — Sergey “Stells” Medvedev — who admitted to administering the Infraud forum. The government says Infraud, whose slogan was “In Fraud We Trust,” attracted more than 10,000 members and inflicted more than $568 million in actual losses from the sale of stolen identity information, payment card data and malware.

A copy of the 108-month judgment entered against Burkov is available here (PDF).

MELinks June 2020

Bruce Schneier wrote an informative post about Zoom security problems [1]. He recommends Jitsi, which is free software and has a Debian package.

Axel Beckert wrote an interesting post about keyboards with small numbers of keys, as few as 28 [2]. It’s not something I’d ever want to use, but interesting to read from a computer science and design perspective.

The Guardian has a disturbing article explaining why we might never get a good Covid-19 vaccine [3]. If that happens, it will change our society for years if not decades to come.

Matt Palmer wrote an informative blog post about private key redaction [4]. I learned a lot from that. Probably the simplest summary is that you should never publish sensitive data unless you are certain that all of what you are publishing is suitable; if you don’t understand it, then you don’t know whether it’s suitable to be published!

This article by Umair Haque on eand.co has some interesting points about how Freedom is interpreted in the US [5].

This article by Umair Haque on eand.co has some good points about how messed up the US is economically [6]. I think that his analysis is seriously let down by omitting the savings that could be made by amending the US healthcare system without serious changes (e.g. by controlling drug prices) and by reducing the scale of the US military (there will never be another war like WW2 because any large-scale war will be nuclear). If the US government could significantly cut spending in a couple of major areas they could then put the money towards fixing some of the structural problems and bootstrapping a first-world economic system.

The American Conservative has an insightful article, “Seven Reasons Police Brutality is Systemic, Not Anecdotal” [7].

Scientific American has an informative article about how genetic engineering could be used to make a Covid-19 vaccine [8].

Rike wrote an insightful post about How Language Changes Our Concepts [9]. They cover the differences between the French, German, and English languages based on gender and on how each language limits thought, then conclude with the need to remove terms like master/slave and blacklist/whitelist from our software, with a focus on Debian though it’s applicable to all software.

Gunnar Wolf also wrote an insightful post On Masters and Slaves, Whitelists and Blacklists [10]; they start with why some people might not understand the importance of the issue and then explain some ways of addressing it. The list of suggested terms includes Primary-secondary, Leader-follower, and some other terms which have slightly different meanings and allow more precision in describing the computer science concepts used. We can be more precise when describing computer science while also not using terms that marginalise some groups of people; it’s a win-win!

Both Rike and Gunnar were responding to an LWN article about the plans to move away from Master/Slave and Blacklist/Whitelist in the Linux kernel [11]. One of the noteworthy points in the LWN article is that there are about 70,000 instances of words that need to be changed in the Linux kernel, so this isn’t going to happen immediately. But it will happen eventually, which is a good thing.

Planet Linux AustraliaDonna Benjamin: Vale Marcus de Rijk


,

CryptogramFriday Squid Blogging: Fishing for Jumbo Squid

Interesting article on the rise of the jumbo squid industry as a result of climate change.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramThe Unintended Harms of Cybersecurity

Interesting research: "Identifying Unintended Harms of Cybersecurity Countermeasures":

Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.

Security is always a trade-off. I appreciate work that examines the details of that trade-off.

Worse Than FailureError'd: The Exception

Alex A. wrote, "Vivaldi only has two words for you when you forget to switch back to your everyday browser for email after testing a website in Edge."

 

"So was my profile successfully created wrongly, wrongly created successfully?" writes Francesco A.

 

"I will personally wait for next year's show at undefined, undefined, given the pandemic and everything," writes Drew W.

 

Bill T. wrote, "I'm not sure if Adobe is confused about me being me, or if they think that I could be schizophrenic."

 

"FedEx? There's something a little bit off about your website for me. Are you guys okay in there?" wrote Martin P.

 

"Winn-Dixie's 'price per each' algorithm isn't exactly wrong but it definitely isn't write either," Mark S. writes.

 


,

Krebs on SecurityNew Charges, Sentencing in Satori IoT Botnet Conspiracy

The U.S. Justice Department today charged a Canadian and a Northern Ireland man for allegedly conspiring to build botnets that enslaved hundreds of thousands of routers and other Internet of Things (IoT) devices for use in large-scale distributed denial-of-service (DDoS) attacks. In addition, a defendant in the United States was sentenced today to drug treatment and 18 months community confinement for his admitted role in the botnet conspiracy.

Indictments unsealed by a federal court in Alaska today allege 20-year-old Aaron Sterritt from Larne, Northern Ireland, and 21-year-old Logan Shwydiuk of Saskatoon, Canada conspired to build, operate and improve their IoT crime machines over several years.

Prosecutors say Sterritt, using the hacker aliases “Vamp” and “Viktor,” was the brains behind the computer code that powered several potent and increasingly complex IoT botnet strains that became known by exotic names such as “Masuta,” “Satori,” “Okiru” and “Fbot.”

Shwydiuk, a.k.a. “Drake,” “Dingle,” and “Chickenmelon,” is alleged to have taken the lead in managing sales and customer support for people who leased access to the IoT botnets to conduct their own DDoS attacks.

A third member of the botnet conspiracy — 22-year-old Kenneth Currin Schuchman of Vancouver, Wash. — pleaded guilty in September 2019 to aiding and abetting computer intrusions. Schuchman, whose role was to acquire software exploits that could be used to infect new IoT devices, was sentenced today by a judge in Alaska to 18 months of community confinement and drug treatment, followed by three years of supervised release.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

The government says the defendants built and maintained their IoT botnets by constantly scanning the Web for insecure devices. That scanning primarily targeted devices that were placed online with weak, factory default settings and/or passwords. But the group also seized upon a series of newly-discovered security vulnerabilities in these IoT systems — commandeering devices that hadn’t yet been updated with the latest software patches.

Some of the IoT botnets enslaved hundreds of thousands of hacked devices. For example, by November 2017, Masuta had infected an estimated 700,000 systems, allegedly allowing the defendants to launch crippling DDoS attacks capable of hurling 100 gigabits of junk data per second at targets — enough firepower to take down many large websites.

In 2015, then 15-year-old Sterritt was involved in the high-profile hack against U.K. telecommunications provider TalkTalk. Sterritt later pleaded guilty to his part in the intrusion, and at his sentencing in 2018 was ordered to complete 50 hours of community service.

The indictments against Sterritt and Shwydiuk (PDF) do not mention specific DDoS attacks thought to have been carried out with the IoT botnets. In an interview today with KrebsOnSecurity, prosecutors in Alaska declined to discuss any of their alleged offenses beyond building, maintaining and selling the above-mentioned IoT botnets.

But multiple sources tell KrebsOnSecurity that Vamp was principally responsible for the massive 2016 denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

In 2018, authorities with the U.K.’s National Crime Agency (NCA) interviewed a suspect in connection with the Dyn attack, but ultimately filed no charges against the youth because all of his digital devices had been encrypted.

“The principal suspect of this investigation is a UK national resident in Northern Ireland,” reads a June 2018 NCA brief on their investigation into the Dyn attack (PDF), dubbed Operation Midmonth. “In 2018 the subject returned for interview, however there was insufficient evidence against him to provide a realistic prospect of conviction.”

The login prompt for Nexus Zeta’s IoT botnet included the message “Masuta is powered and hosted on Brian Kreb’s [sic] 4head.” To be precise, it’s a 5head.

The unsealing of the indictments against Sterritt and Shwydiuk came just minutes after Schuchman was sentenced today. Schuchman has been confined to an Alaskan jail for the past 13 months, and Chief U.S. District Judge Timothy Burgess today ordered the sentence of 18 months community confinement to begin Aug. 1.

Community confinement in Schuchman’s case means he will spend most or all of that time in a drug treatment program. In a memo (PDF) released prior to Schuchman’s sentencing today, prosecutors detailed the defendant’s ongoing struggle with narcotics, noting that on multiple occasions he was discharged from treatment programs after testing positive for Suboxone — which is used to treat opiate addiction and is sometimes abused by addicts — and for possessing drug contraband.

The government’s sentencing memo also says Schuchman on multiple occasions absconded from pretrial supervision, and went right back to committing the botnet crimes for which he’d been arrested — even communicating with Sterritt about the details of the ongoing FBI investigation.

“Defendant’s performance on pretrial supervision has been spectacularly poor,” prosecutors explained. “Even after being interviewed by the FBI and put on restrictions, he continued to create and operate a DDoS botnet.”

Prosecutors told the judge that when he was ultimately re-arrested by U.S. Marshals, Schuchman was found at a computer in violation of the terms of his release. In that incident, Schuchman allegedly told his dad to trash his computer, before successfully encrypting his hard drive (which the Marshals service is still trying to decrypt). According to the memo, the defendant admitted to marshals that he had received and viewed videos of “juveniles engaged in sex acts with other juveniles.”

“The circumstances surrounding the defendant’s most recent re-arrest are troubling,” the memo recounts. “The management staff at the defendant’s father’s apartment complex, where the defendant was residing while on abscond status, reported numerous complaints against the defendant, including invitations to underage children to swim naked in the pool.”

Adam Alexander, assistant US attorney for the district of Alaska, declined to say whether the DOJ would seek extradition of Sterritt and Shwydiuk. Alexander said the success of these prosecutions is highly dependent on the assistance of domestic and international law enforcement partners, as well as a list of private and public entities named at the conclusion of the DOJ’s press release on the Schuchman sentencing (PDF).

However, a DOJ motion (PDF) to seal the case records filed back in September 2019 says the government is in fact seeking to extradite the defendants.

Chief Judge Burgess was the same magistrate who presided over the 2018 sentencing of the co-authors of Mirai, a highly disruptive IoT botnet strain whose source code was leaked online in 2016 and was built upon by the defendants in this case. Both Mirai co-authors were sentenced to community service and home confinement thanks to their considerable cooperation with the government’s ongoing IoT botnet investigations.

Asked whether he was satisfied with the sentence handed down against Schuchman, Alexander maintained it was more than just another slap on the wrist, noting that Schuchman has waived his right to appeal the conviction and faces additional confinement of two years if he absconds again or fails to complete his treatment.

“In every case the statutory factors have to do with the history of the defendants, who in these crimes tend to be extremely youthful offenders,” Alexander said. “In this case, we had a young man who struggles with mental health and really pronounced substance abuse issues. Contrary to what many people might think, the goal of the DOJ in cases like this is not to put people in jail for as long as possible but to try to achieve the best balance of safeguarding communities and affording the defendant the best possible chance of rehabilitation.”

William Walton, supervisory special agent for the FBI’s cybercrime investigation division in Anchorage, Alaska, said he hopes today’s indictments and sentencing send a clear message to what he described as a relatively insular and small group of individuals who are still building, running and leasing IoT-based botnets to further a range of cybercrimes.

“One of the things we hope in our efforts here and in our partnerships with our international partners is when we identify these people, we want very much to hold them to account in a just but appropriate way,” Walton said. “Hopefully, any associates who are aspiring to fill the vacuum once we take some players off the board realize that there are going to be real consequences for doing that.”

Sociological ImagesThe Hidden Cost of Your New Wardrobe

More than ever, people desire to buy more clothes. Pushed by advertising, we look for novelty and variety, amassing personal wardrobes unimaginable in times past. With the advent of fast fashion, brands like H&M and ZARA are able to provide low prices by cutting costs where consumers can’t see—textile workers are maltreated, and our environment suffers.

In this film, I wanted to play with the idea of fast fashion advertising and turn it on its head. I edited two different H&M look books to show what really goes into the garments they advertise and the fast fashion industry as a whole. I made this film for my Introduction to Sociology class after being inspired by a reading we did earlier in the semester.

Robert Reich’s (2007) book, Supercapitalism, discusses how we have “two minds,” one as a consumer/investor and another as a citizen. He explains that as a consumer, we want to spend as little money as possible while finding the best deals—shopping at stores like H&M. On the other hand, our “citizen mind” wants workers to be treated fairly and our environment to be protected. This film highlights fast fashion as an example of Reich’s premise of the conflict between our two minds—a conflict that is all too prevalent in our modern world with giant brands like Walmart and Amazon taking over consumer markets. I hope that by thinking about the fast fashion industry with a sociological mindset, we can see past the bargain prices and address what really goes on behind the scenes.

Graham Nielsen is a Swedish student studying an interdisciplinary concentration titled “Theory and Practice: Digital Media and Society” at Hamilton College. He’s interested in advertising, marketing, video editing, and fashion, as well as film and television culture.

Read More:

Links to the original look books: Men Women

Kant, R. (2012) “Textile dyeing industry an environmental hazard.” Natural Science, 4, 22-26. doi: 10.4236/ns.2012.41004.

Annamma J., J. F. Sherry Jr, A. Venkatesh, J. Wang & R. Chan (2012) “Fast Fashion, Sustainability, and the Ethical Appeal of Luxury Brands.” Fashion Theory, 16:3, 273-295. doi: 10.2752/175174112X13340749707123.

Aspers, P., and F. Godart (2013) “Sociology of Fashion: Order and Change.” Annual Review of Sociology, 39:1, 171-192.

(View original at https://thesocietypages.org/socimages)

CryptogramAnalyzing IoT Security Best Practices

New research: "Best Practices for IoT Security: What Does That Even Mean?" by Christopher Bellman and Paul C. van Oorschot:

Abstract: Best practices for Internet of Things (IoT) security have recently attracted considerable attention worldwide from industry and governments, while academic research has highlighted the failure of many IoT product manufacturers to follow accepted practices. We explore not the failure to follow best practices, but rather a surprising lack of understanding, and void in the literature, on what (generically) "best practice" means, independent of meaningfully identifying specific individual practices. Confusion is evident from guidelines that conflate desired outcomes with security practices to achieve those outcomes. How do best practices, good practices, and standard practices differ? Or guidelines, recommendations, and requirements? Can something be a best practice if it is not actionable? We consider categories of best practices, and how they apply over the lifecycle of IoT devices. For concreteness in our discussion, we analyze and categorize a set of 1014 IoT security best practices, recommendations, and guidelines from industrial, government, and academic sources. As one example result, we find that about 70% of these practices or guidelines relate to early IoT device lifecycle stages, highlighting the critical position of manufacturers in addressing the security issues in question. We hope that our work provides a basis for the community to build on in order to better understand best practices, identify and reach consensus on specific practices, and then find ways to motivate relevant stakeholders to follow them.

Back in 2017, I catalogued nineteen security and privacy guideline documents for the Internet of Things. Our problem right now isn't that we don't know how to secure these devices, it's that there is no economic or regulatory incentive to do so.

Worse Than FailureCodeSOD: Classic WTF: Pointless Revenge

As we enjoy some summer weather, we should take a moment to reflect on how we communicate with our peers. We should always do it with kindness, even when we really want revenge. Original -- Kind regards, Remy

We write a lot about unhealthy workplaces. We, and many of our readers, have worked in such places. We know what it means to lose our gruntle (becoming disgruntled). Some of us, have even been tempted to do something vengeful or petty to “get back” at the hostile environment.

Milton from 'Office Space' does not receive any cake during a birthday celebration. He looks on, forlornly, while everyone else in the office enjoys cake.

But none of us has actually done it (I hope?). It’s self-defeating, it doesn’t actually make anything better, and even if the place we’re working isn’t professional, we are. While it’s a satisfying fantasy, the reality wouldn’t be good for anyone. We know better than that.

Well, most of us know better than that. Harris M’s company went through a round of layoffs while flirting with bankruptcy. It was a bad time to be at the company, no one knew if they’d have a job the next day. Management constantly issued new edicts, before just as quickly recanting them, in a panicked case of somebody-do-something-itis. “Bob” wasn’t too happy with the situation. He worked on a reporting system that displayed financial data. So he hid this line in one of the main include files:

#define double float
//Kind Regards, Bob

This created some subtle bugs. It was released, and it was months before anyone noticed that the reports weren’t exactly reconciling with the real data. Bob was long gone by that point, and Harris had to clean up the mess. For a company struggling to survive, it didn’t help or improve anything. But I’m sure Bob felt better.
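To get a feel for what that one line does, here is a rough sketch in Python rather than C (the figure is invented), using the struct module to force values through IEEE-754 single precision, which is roughly what redefining double as float does to every variable:

import struct

def as_float(x):
    # Round-trip a value through single precision, mimicking the effect
    # of Bob's #define on every "double" in the reporting code.
    return struct.unpack('f', struct.pack('f', x))[0]

balance = 12345678.91        # an invented report figure
print(balance)               # 12345678.91
print(as_float(balance))     # 12345679.0 -- the cents are already gone

# A float carries only about 7 significant decimal digits against a
# double's 15-16, so sums and products over figures this size stop
# reconciling with the real data in exactly this quiet way.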


MEHow Will the Pandemic Change Things?

The Bulwark has an interesting article on why they can’t “Reopen America” [1]. I wonder how many changes will be long term. According to the Wikipedia List of Epidemics [2] Covid-19 so far hasn’t had a high death toll when compared to other pandemics of the last 100 years. People’s reactions to this vary from doing nothing to significant isolation; the question is what changes in attitudes will be significant enough to change society.

Transport

One thing that has been happening recently is a transition in transport. It’s obvious that we need to reduce CO2, and while electric cars will address the transport part of the problem in the long term, changing to electric public transport is the cheaper and faster way to do it in the short term. Before Covid-19 the peak hour public transport in my city was ridiculously overcrowded; having people unable to board trams due to overcrowding was really common. If the economy returns to its previous state then I predict fewer people on public transport, more traffic jams, and many more cars idling and polluting the atmosphere.

Can we have mass public transport that doesn’t give a significant disease risk? Maybe if we had significantly more trains and trams and better ventilation with more airflow designed to suck contaminated air out. But that would require significant engineering work to design new trams, trains, and buses as well as expense in refitting or replacing old ones.

Uber and similar companies have been taking over from taxi companies; one major feature of those companies is that the vehicles are not dedicated as taxis. Dedicated taxis could easily be designed to reduce the spread of disease; the famed Black Cab AKA Hackney Carriage [3] design in the UK has a separate compartment for passengers with little air flow to/from the driver compartment. It would be easy to design such taxis to have entirely separate airflow, and if set up to only take EFTPOS and credit card payment they could avoid all contact between the driver and passengers. I would prefer to have a Hackney Carriage design of vehicle instead of a regular taxi or Uber.

Autonomous cars have been shown to basically work. There are some concerns about safety issues as there are currently corner cases that car computers don’t handle as well as people, but of course there are also things computers do better than people. Having an autonomous taxi would be a benefit for anyone who wants to avoid other people. Maybe approval could be rushed through for autonomous cars that are limited to 40km/h (the maximum collision speed at which a pedestrian is unlikely to die); in central city areas and inner suburbs you aren’t likely to drive much faster than that anyway.

Car share services have been becoming popular; for many people they are significantly cheaper than owning a car due to the costs of regular maintenance, insurance, and depreciation. As the full costs of car ownership aren’t obvious, people may focus on the disease risk and keep buying cars.

Passenger jets are ridiculously cheap. But this relies on the airline companies being able to consistently fill the planes. If they were to add measures to reduce cross contamination between passengers, which slightly reduces the capacity of planes, then they would need to increase ticket prices accordingly, which in turn reduces demand. If passengers are just scared of flying in close proximity and the airlines can’t fill planes, then they will have to increase prices, which again reduces demand and could lead to a death spiral. If in the long term there aren’t enough passengers to sustain the current number of planes in service then airline companies will have significant financial problems; planes are expensive assets that are expected to last for a long time, and if airlines can’t use them all and can’t sell them then they will go bankrupt.

It’s not reasonable to expect that the same number of people will be travelling internationally for years (if ever). Due to relying on economies of scale to provide low prices I don’t think it’s possible to keep prices the same no matter what they do. A new economic balance of flights costing 2-3 times more than we are used to while having significantly less passengers seems likely. Governments need to spend significant amounts of money to improve trains to take over from flights that are cancelled or too expensive.

Entertainment

The article on The Bulwark mentions Las Vegas as a city that will be hurt a lot by reductions in travel and crowds; the same thing will happen to tourist regions all around the world. Australia has a significant tourist industry that will be hurt a lot. But the mention of Las Vegas makes me wonder what will happen to gambling in general. Will people avoid casinos and play poker with friends and relatives at home? It seems that small stakes poker games among friends will be much less socially damaging than casinos; will this be good for society?

The article also mentions cinemas which have been on the way out since the video rental stores all closed down. There’s lots of prime real estate used for cinemas and little potential for them to make enough money to cover the rent. Should we just assume that most uses of cinemas will be replaced by Netflix and other streaming services? What about teenage dates, will kissing in the back rows of cinemas be replaced by “Netflix and chill”? What will happen to all the prime real estate used by cinemas?

Professional sporting matches have been played for a TV-only audience during the pandemic. There’s no reason that they couldn’t make a return to live stadium audiences when there is a vaccine for the disease or the disease has been extinguished by social distancing. But I wonder if some fans will start to appreciate the merits of small groups watching large TVs and not want to go back to stadiums, can this change the typical behaviour of groups?

Restaurants and cafes are going to do really badly. I previously wrote about my experience running an Internet Cafe and why reopening businesses soon is a bad idea [4]. The question is how long this will go for and whether social norms about personal space will change things. If in the long term people expect 25% more space in a cafe or restaurant that’s enough to make a significant impact on profitability for many small businesses.

When I was young the standard thing was for people to have dinner at friends’ homes. Meeting friends for dinner at a restaurant was uncommon. Recently it seemed to be the most common practice for people to meet friends at a restaurant. There are real benefits to meeting at a restaurant in terms of effort and location. Maybe meeting friends at their home for a delivered dinner will become a common compromise, avoiding the effort of cooking while avoiding the extra expense and disease risk of eating out. Food delivery services will do well in the long term; it’s one of the few industry segments which might do better after the pandemic than before.

Work

Many companies are discovering the benefits of teleworking, getting it going effectively has required investing in faster Internet connections and hardware for employees. When we have a vaccine the equipment needed for teleworking will still be there and we will have a discussion about whether it should be used on a more routine basis. When employees spend more than 2 hours per day travelling to and from work (which is very common for people who work in major cities) that will obviously limit the amount of time per day that they can spend working. For the more enthusiastic permanent employees there seems to be a benefit to the employer to allow working from home. It’s obvious that some portion of the companies that were forced to try teleworking will find it effective enough to continue in some degree.

One company that I work for has quit their coworking space in part because they were concerned that the coworking company might go bankrupt due to the pandemic. They seem to have become a 100% work from home company for the office part of the work (only on site installation and stock management is done at corporate locations). Companies running coworking spaces and other shared offices will suffer first as their clients have short term leases. But all companies renting out office space in major cities will suffer due to teleworking. I wonder how this will affect the companies providing services to the office workers, the cafes and restaurants etc. Will there end up being so much unused space in central city areas that it’s not worth converting the city cinemas into useful space?

There’s been a lot of news about Zoom and similar technologies. Lots of other companies are trying to get into that business. One thing that isn’t getting much notice is remote access technologies for desktop support. If the IT people can’t visit your desk because you are working from home then they need to be able to remotely access it to fix things. When people make working from home a large part of their work time the issue of who owns peripherals and how they are tracked will get interesting. In a previous blog post I suggested that keyboards and mice not be treated as assets [5]. But what about monitors, 4G/Wifi access points, etc?

Some people have suggested that there will be business sectors benefiting from the pandemic, such as telecoms and e-commerce. If you have a bunch of people forced to stay home who aren’t broke (i.e. a large portion of the middle class in Australia) they will probably order delivery of stuff for entertainment. But in the long term e-commerce seems unlikely to change much; people will spend less due to economic uncertainty, so while they may shift some purchasing to e-commerce, apart from home delivery of groceries e-commerce probably won’t go up overall. Generally telecoms won’t gain anything from teleworking; the Internet access you need for good Netflix viewing is generally greater than that needed for good video-conferencing.

Money

I previously wrote about a Basic Income for Australia [6]. One of the most cited reasons for a Basic Income is to deal with robots replacing people. Now we are at the start of what could be a long term economic contraction caused by the pandemic which could reduce the scale of the economy by a similar degree while also improving the economic case for a robotic workforce. We should implement a Universal Basic Income now.

I previously wrote about the make-work jobs and how we could optimise society to achieve the worthwhile things with less work [7]. My ideas about optimising public transport and using more car share services may not work so well after the pandemic, but the rest should work well.

Business

There are a number of big companies that are not aiming for profitability in the short term. WeWork and Uber are well documented examples. Some of those companies will hopefully go bankrupt and make room for more responsible companies.

The co-working thing was always a precarious business. The companies renting out office space usually did so on a monthly basis, as flexibility was one of their selling points, but they presumably rented buildings on an annual basis. As the profit margins weren’t particularly high, having to pay rent on mostly empty buildings for a few months will hurt them badly. The long term trend in co-working spaces might be some sort of collaborative arrangement between the people who run them and the landlords, similar to the way some of the hotel chains have profit sharing agreements with land owners to avoid both the capital outlay for buying land and the risk involved in renting. Also, city hotels are very well equipped to run office space: they have the staff and the procedures for running such a business, and most hotels also make significant profits from conventions and conferences.

The way the economy has been working in first world countries has been about being as competitive as possible: just in time delivery to avoid using storage space, machines to package things in exactly the way that customers need, and no more machines than needed for regular capacity. This means that there’s no spare capacity when things go wrong. A few years ago a company making bolts for the car industry went bankrupt because the car companies forced the prices down, then car manufacture stopped due to lack of bolts – this could have been a wake-up call but was ignored. Now we have had problems with toilet paper shortages due to it being packaged in wholesale quantities for offices and schools, not retail quantities for home use. Food was destroyed because it was created for restaurant packaging and couldn’t be packaged for home use in a reasonable amount of time.

Farmer’s markets alleviate some of the problems with packaging food etc. But they aren’t a good option when there’s a pandemic as disease risk makes them less appealing to customers and therefore less profitable for vendors.

Religion

Many religious groups have supported social distancing. Could this be the start of more decentralised religion? Maybe have people read the holy book of their religion and pray at home instead of being programmed at church? We can always hope.

,

CryptogramCryptocurrency Pump and Dump Scams

Really interesting research: "An examination of the cryptocurrency pump and dump ecosystem":

Abstract: The surge of interest in cryptocurrencies has been accompanied by a proliferation of fraud. This paper examines pump and dump schemes. The recent explosion of nearly 2,000 cryptocurrencies in an unregulated environment has expanded the scope for abuse. We quantify the scope of cryptocurrency pump and dump schemes on Discord and Telegram, two popular group-messaging platforms. We joined all relevant Telegram and Discord groups/channels and identified thousands of different pumps. Our findings provide the first measure of the scope of such pumps and empirically document important properties of this ecosystem.

Worse Than FailureTales from the Interview: Classic WTF: Slightly More Sociable

As we continue our vacation, this classic comes from the ancient year of 2007, when "used to being the only woman in my engineering and computer science classes" was a much more common phrase. Getting a job can feel competitive, but there are certain ways you can guarantee you're gonna lose that competition. Original --Remy

Today’s Tale from the Interview comes from Shanna...

Fresh out of college, and used to being the only woman in my engineering and computer science classes, I wasn't quite sure what to expect in the real world. I happily ended up finding a development job in a company which was nowhere near as unbalanced as my college classes had been. The company was EXTREMELY small and the entire staff, except the CEO, was in one office. I ended up sitting at a desk next to the office admin, another woman who was hired a month or two after me.

A few months after I was hired, we decided we needed another developer. I found out then that there was another person who had been up for my position who almost got it instead of me. He was very technically skilled, but ended up not getting the position because I was more friendly and sociable. My boss decided to call him back in for another interview because it had been a very close decision, and they wanted to give him another chance. He was still looking for a full-time job, so he said he'd be happy to come in again. The day he showed up, he looked around the office, and then he and my boss went in to the CEO's office to discuss the position again.

The interview seemed to be going well, and he was probably on track for getting the position. Then he asked my boss, "So, did you ever end up hiring a developer last time you were looking?" My boss answered "Oh, yeah, we did." The interviewee stopped for a second and, because he'd noticed that the only different faces in the main office were myself and the office admin, said "You mean, you hired one of those GIRLS out there?"

Needless to say, the interview ended quickly after that, and he didn't get the position.


,

Sociological ImagesRacism & Hate Crimes in a Pandemic

Despite calls to physically distance, we are still seeing reports of racially motivated hate crimes in the media. Across the United States, anti-Asian discrimination is rearing its ugly head as people point fingers to blame the global pandemic on a distinct group rather than come to terms with underfunded healthcare systems and poorly-prepared governments.

Governments play a role in creating this rhetoric. Blaming racialized others for major societal problems is a feature of populist governments. One example is Donald Trump’s use of the phrase “Chinese virus” to describe COVID-19. Stirring up racialized resentment is a political tactic used to divert responsibility from the state by blaming racialized members of the community for a global virus. Anti-hate groups are increasingly concerned that the deliberate dehumanization of Asian populations by the President may also fuel hate as extremist groups take this opportunity to create social division.

Unfortunately, this is not new and it is not limited to the United States. During the SARS outbreak there were similar instances of institutional racism inherent in Canadian political and social structures, as Chinese, Southeast and East Asian Canadians felt isolated at work, in hospitals, and even on public transportation. As of late May, there have been 29 cases of anti-Asian hate crimes in Vancouver, British Columbia, Canada.

In this crisis, it is easier for governments to use racialized people as scapegoats than to admit decisions to defund and underfund vital social and healthcare resources. By stoking racist sentiments already entrenched in the Global North, the Trump administration shifts the focus from the harm their policies will inevitably cause, such as their willingness to cut funding for the CDC and NIH.

Sociological research shows how these tactics become more widespread as politicians use social media to communicate directly with citizens, creating instantaneous polarizing content. Research has also shown an association between hate crimes in the United States and anti-Muslim rhetoric expressed by Trump as early as 2015. Racist sentiments expressed by politicians have a major impact on the attitudes of the general population, because they excuse and even promote blame toward racialized people, while absolving blame from governments that have failed to act on social issues.

Kayla Preston is an incoming PhD student in the department of sociology at the University of Toronto. Her research centers on right-wing extremism and deradicalization.


(View original at https://thesocietypages.org/socimages)

CryptogramNation-State Espionage Campaigns against Middle East Defense Contractors

Report on espionage attacks using LinkedIn as a vector for malware, with details and screenshots. They talk about "several hints suggesting a possible link" to the Lazarus group (aka North Korea), but that's by no means definite.

As part of the initial compromise phase, the Operation In(ter)ception attackers had created fake LinkedIn accounts posing as HR representatives of well-known companies in the aerospace and defense industries. In our investigation, we've seen profiles impersonating Collins Aerospace (formerly Rockwell Collins) and General Dynamics, both major US corporations in the field.

Detailed report.

Worse Than FailureClassic WTF: The Developmestuction Environment

We continue to enjoy a brief respite from mining horrible code and terrible workplaces. This classic includes this line: "It requires that… Adobe Indesign is installed on the web server." Original --Remy

Have you ever thought what it would take for you to leave a new job after only a few days? Here's a fun story from my colleague Jake Vinson, whose co-worker of three days would have strongly answered "this."

One of the nice things about externalizing connection strings is that it's easy to duplicate a database, duplicate the application's files, change the connection string to point to the new database, and bam, you've got a test environment.

From the programmer made famous by tblCalendar and the query string parameter admin=false comes what I think is the most creatively stupid implementation of a test environment ever.

We needed a way to show changes during development of an e-commerce web site to our client. Did our programmer follow a normal method like the one listed above? It might surprise you to find out that no, he didn't.

Instead, we get this weird mutant development/test/production environment (or "developmestuction," as I call it) for not only the database, but the ASP pages. My challenge to you, dear readers, is to identify a step of the way in which there could've been a worse way to do things. I look forward to reading the comments on this post.

Take, for example, a page called productDetail.asp. Which is the test page? Why, productDetail2.asp, of course! Or perhaps test_productDetail.asp. Or perhaps if another change branched off of that, test_productDetail2.asp, or productDetail2_t.asp, or t_pD2.asp.

And the development page? productDetaildev.asp. When the changes were approved, instead of turning t_pD2.asp to productDetail.asp, the old filenames were kept, along with the old files. That means that all links to productDetail.asp became links to t_pD2.asp once they were approved. Except in places that were forgotten, or not included in the sitewide search-and-replace.

As you may've guessed, there are next to no comments, aside from the occasional ...

' Set strS2f to false
strS2f = false

Note: I've never seen more than one comment on any of the pages.

Additional note: I've never seen more than zero comments that are even remotely helpful on any of the pages

OK, so the file structure is next to impossible to understand, especially to the poor new developer we'd hired who just stopped showing up to work after 3 days on this particular project.

What was it that drove him over the edge? Well, if the arrangement of the pages wasn't enough, the database used the same conventions (as in, randomly using test_tblProducts, or tblTestProducts, tblTestProductsFinal, or all of them). Of course, what if we need a test column, rather than a whole table? Flip a coin, if it's heads, create another tblTestWhatever, or if it's tails, just add a column to the production table called test_ItemID. Oh, and there's another two copies of the complete test database, which may or may not be used.

You think I'm done, right? Wrong. All of these inconsistencies in naming are handled individually, in code on the ASP pages. For the table names that follow some remote semblance of a consistent naming convention, the table name is stored in an unencrypted cookie, which is read into dynamic SQL queries. I imagine it goes without saying that all SQL queries were dynamic with no attempts to validate the data or replace quotes.
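For contrast, here is a small sketch of the safer pattern (in Python with sqlite3 standing in for the classic ASP pages and their database; the table and column names are illustrative): bind values as query parameters, and since identifiers such as table names cannot be bound, check them against a known-good list instead of trusting a cookie.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblProducts (ItemID INTEGER, Name TEXT)")
conn.execute("INSERT INTO tblProducts VALUES (1, 'Widget')")

# The developmestuction way: paste untrusted input straight into the SQL.
table_from_cookie = "tblProducts"   # could just as easily be "tblProducts; DROP TABLE tblProducts"
item_id_from_url = "1 OR 1=1"       # any filter can be bypassed this way
unsafe_sql = "SELECT * FROM " + table_from_cookie + " WHERE ItemID = " + item_id_from_url
# (built only to show the pattern; never executed here)

# Safer: whitelist the identifier, bind the value.
ALLOWED_TABLES = {"tblProducts"}
if table_from_cookie not in ALLOWED_TABLES:
    raise ValueError("unexpected table name")
rows = conn.execute("SELECT * FROM " + table_from_cookie + " WHERE ItemID = ?", (1,)).fetchall()
print(rows)   # [(1, 'Widget')]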

Having personally seen this system, I want to share a fun little fact about it. It requires that, among a handful of other desktop applications, Adobe Indesign is installed on the web server. I'll leave it to your imagination as to what it could possibly require that for and the number of seconds between each interval that it opens and closes it.

MESquirrelmail vs Roundcube

For some years I’ve had SquirrelMail running on one of my servers for the people who like such things. It seems that the upstream support for SquirrelMail has ended (according to the SquirrelMail Wikipedia page there will be no new releases, just Subversion updates to fix bugs). One problem with SquirrelMail that seems unlikely to get fixed is the lack of support for base64 encoded From and Subject fields, which are becoming increasingly popular nowadays as people whose names don’t fit US-ASCII are encoding them in their preferred manner.
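Those headers are RFC 2047 “encoded-words”. As a rough illustration (Python here, not anything SquirrelMail or Roundcube actually uses, and the Subject value is invented), decoding one takes a couple of lines with any modern mail library:

from email.header import decode_header, make_header

raw_subject = "=?UTF-8?B?0J/RgNC40LLQtdGCLCDQvNC40YA=?="   # invented example value

# decode_header() splits the header into (bytes, charset) chunks and
# make_header() reassembles them into readable text.
print(str(make_header(decode_header(raw_subject))))        # prints: Привет, мир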

I’ve recently installed Roundcube to provide an alternative. Of course one of the few important users of webmail didn’t like it (apparently it doesn’t display well on a recent Samsung Galaxy Note), so now I have to support two webmail systems.

Below is a little Perl script to convert a SquirrelMail abook file into the CSV format used for importing a Roundcube contact list.

#!/usr/bin/perl
# Read a SquirrelMail .abook file on STDIN; each line holds pipe-separated
# fields in the order nickname|firstname|lastname|email|label.
# Write a CSV contact list on STDOUT in the format Roundcube imports.

print "First Name,Last Name,Display Name,E-mail Address\n";
while(<STDIN>)
{
  chomp;
  my @fields = split(/\|/, $_);
  # Columns: first name, last name, "nickname label" as the display name, email address.
  printf("%s,%s,%s %s,%s\n", $fields[1], $fields[2], $fields[0], $fields[4], $fields[3]);
}
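To convert an address book, feed the per-user .abook file to the script on standard input and redirect the output to a file that can then be loaded through Roundcube’s contact import screen, for example perl abook2csv.pl < user.abook > user.csv (the script and file names here are only illustrative).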

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 07)

Here’s part seven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

CryptogramIdentifying a Person Based on a Photo, LinkedIn and Etsy Profiles, and Other Internet Bread Crumbs

Interesting story of how the police can identify someone by following the evidence chain from website to website.

According to filings in Blumenthal's case, FBI agents had little more to go on when they started their investigation than the news helicopter footage of the woman setting the police car ablaze as it was broadcast live May 30.

It showed the woman, in flame-retardant gloves, grabbing a burning piece of a police barricade that had already been used to set one squad car on fire and tossing it into the police SUV parked nearby. Within seconds, that car was also engulfed in flames.

Investigators discovered other images depicting the same scene on Instagram and the video sharing website Vimeo. Those allowed agents to zoom in and identify a stylized tattoo of a peace sign on the woman's right forearm.

Scouring other images -- including a cache of roughly 500 photos of the Philly protest shared by an amateur photographer -- agents found shots of a woman with the same tattoo that gave a clear depiction of the slogan on her T-shirt.

[...]

That shirt, agents said, was found to have been sold only in one location: a shop on Etsy, the online marketplace for crafters, purveyors of custom-made clothing and jewelry, and other collectibles....

The top review on her page, dated just six days before the protest, was from a user identifying herself as "Xx Mv," who listed her location as Philadelphia and her username as "alleycatlore."

A Google search of that handle led agents to an account on Poshmark, the mobile fashion marketplace, with a user handle "lore-elisabeth." And subsequent searches for that name turned up Blumenthal's LinkedIn profile, where she identifies herself as a graduate of William Penn Charter School and several yoga and massage therapy training centers.

From there, they located Blumenthal's Jenkintown massage studio and its website, which featured videos demonstrating her at work. On her forearm, agents discovered, was the same distinctive tattoo that investigators first identified on the arsonist in the original TV video.

The obvious moral isn't a new one: don't have a distinctive tattoo. But more interesting is how different pieces of evidence can be strung together in order to identify someone. This particular chain was put together manually, but expect machine learning techniques to be able to do this sort of thing automatically -- and for organizations like the NSA to implement them on a broad scale.

Another article did a more detailed analysis, and concludes that the Etsy review was the linchpin.

Note to commenters: political commentary on the protesters or protests will be deleted. There are many other forums on the Internet to discuss that.

Worse Than FailureClassic WTF: A Gassed Pump

Wow, it's summer. Already? We're taking a short break this week at TDWTF, and reaching back through the archives for some classic stories. If you've cancelled your road trip this year, make a vicarious stop at a filthy gas station with this old story. Original --Remy

“Staff augmentation,” was a fancy way of saying, “hey, contractors get more per hour, but we don’t have to provide benefits so they are cheaper,” but Stuart T was happy to get more per hour, and even happier to know that he’d be on to his next gig within a few months. That was how he ended up working for a national chain of gas-station/convenience stores. His job was to build a “new mobile experience for customer loyalty” (aka, wrapping their website up as an app that can also interact with QR codes).

At least, that’s what he was working on before Miranda stormed into his cube. “Stuart, we need your help. ProdTrack is down, and I can’t fix it, because I’ve got to be at a mandatory meeting in ten minutes.”

A close-up of a gas pump

ProdTrack was their inventory system that powered their point-of-sale terminals. For it to be down company wide was a big problem, and essentially rendered most of their stores inoperable. “Geeze, what mandatory meeting is more important than that?”

“The annual team-building exercise,” Miranda said, using a string of profanity for punctuation. “They’ve got a ‘no excuses’ policy, so I have to go, ‘or else’, but we also need to get this fixed.”

Miranda knew exactly what was wrong. ProdTrack could only support 14 product categories. But one store, store number 924, had decided that it needed 15. So they added a 15th category to the database, threw a few products into it, and crossed their fingers. Now, all the stores were crashing.

“You’ll need to look at the StoreSQLUpdates and the StoreSQLUpdateStatements tables,” Miranda said. “And probably dig into the ProductDataPump.exe app. Just do a quick fix. We’re releasing an update that supports any number of categories in three weeks or so; we just need to hold this together till then.”

With that starting point, Stuart started digging in. First, he puzzled over the tables Miranda had mentioned. StoreSQLUpdates looked like this:

ID     | STATEMENT                                   | STATEMENT_ORDER
145938 | DELETE FROM SupplierInfo                    | 90
148939 | INSERT INTO ProductInfo VALUES(12348, 3, 6) | 112

Was this an audit table? What was StoreSQLUpdateStatements, then?

ID     | STATEMENT
168597 | INSERT INTO StoreSQLUpdates(statement, statement_order) VALUES ('DELETE FROM SupplierInfo', 90)
168598 | INSERT INTO StoreSQLUpdates(statement, statement_order) VALUES ('INSERT INTO ProductInfo VALUES(12348, 3, 6)', 112)

Stuart stared at his screen, and started asking questions. Not questions about what he was looking at, but questions about the life choices that had brought him to this point, questions about whether it was really that bad an idea to start drinking at work, and questions about the true nature of madness: if the world was mad, and he was the only sane person left, didn't that make him the most insane person of all?

He hoped the mandatory team building exercise was the worst experience of Miranda’s life, as he sent her a quick, “WTF?” email message. She obviously still had her cellphone handy, as she replied minutes later:

Oh, yeah, that’s for data-sync. Retail locations have flaky internet, and keep a local copy of the data. That’s what’s blowing up. Check ProductDataPump.exe.

Stuart did. ProductDataPump.exe was a VB.Net program in a single file, with one beautifully named method, RunIt, that contained nearly 2,000 lines of code. Some saintly soul had done him the favor of including a page of documentation at the top of the method, and it started with an apology, then explained the data flow.

Here’s what actually happened: a central database at corporate powered ProdTrack. When any data changed there, those changes got logged into StoreSQLUpdateStatements. A program called ProductDataShift.exe scanned that table, and when new rows appeared, it executed the statements in StoreSQLUpdateStatements (which placed the actual DML commands into StoreSQLUpdates).

Once an hour, ProductDataPump.exe would run. It would attempt to connect to each retail location. If it could, it would read the contents of the central StoreSQLUpdates and the local StoreSQLUpdates, sorting by the order column, and through a bit of faith and luck, would hopefully synchronize the two databases.

Buried in the 2,000 line method, at about line 1,751, was a block that actually executed the statements:

If bolUseSQL Then
    For Each sTmp As String In sProductsTableSQL
        sTmp = sTmp.Trim()
        If sTmp <> "" Then
            SQLUpdatesSQL(lngIDSQL, sTmp, dbQR5)
        End If
    Next sTmp
End If

Once he was done screaming at the insanity of the entire process, Stuart looked at the way product categories worked. Store 924 didn't carry anything in the ALCOHOL category, due to state Blue Laws, but had added a PRODUCE category. None of the other stores had a PRODUCE category (if they carried any produce, they just put it in PREPARED_FOODS). Fixing the glitch that caused the application to crash when it had too many categories would take weeks at least, and Miranda had already told him a fix was coming. All he had to do was keep it from crashing until then.

Into the StoreSQLUpdates table, he added a DELETE statement that would delete every category that contained zero items. That would fix the immediate problem, but when the ProductDataPump.exe ran, it would just copy the broken categories back around. So Stuart patched the program with the worst fix he ever came up with.

If bolUseSQL Then
    For Each sTmp As String In sProductsTableSQL
        sTmp = sTmp.Trim()
        If sTmp <> "" Then
            ' Store 924 has no ALCOHOL category (blue laws), and it is the
            ' only store with a PRODUCE category, so skip whichever
            ' statements don't apply instead of letting the sync crash.
            If nStoreNumber = 924 And sTmp.Contains("ALCOHOL") Then
                Continue For
            ElseIf nStoreNumber <> 924 And sTmp.Contains("PRODUCE") Then
                Continue For
            Else
                SQLUpdatesSQL(lngIDSQL, sTmp, dbQR5)
            End If
        End If
    Next sTmp
End If

Krebs on Security‘BlueLeaks’ Exposes Files from Hundreds of Police Departments

Hundreds of thousands of potentially sensitive files from police departments across the United States were leaked online last week. The collection, dubbed “BlueLeaks” and made searchable online, stems from a security breach at a Texas web design and hosting company that maintains a number of state law enforcement data-sharing portals.

The collection — nearly 270 gigabytes in total — is the latest release from Distributed Denial of Secrets (DDoSecrets), an alternative to Wikileaks that publishes caches of previously secret data.

A partial screenshot of the BlueLeaks data cache.

In a post on Twitter, DDoSecrets said the BlueLeaks archive indexes “ten years of data from over 200 police departments, fusion centers and other law enforcement training and support resources,” and that “among the hundreds of thousands of documents are police and FBI reports, bulletins, guides and more.”

Fusion centers are state-owned and operated entities that gather and disseminate law enforcement and public safety information between state, local, tribal and territorial, federal and private sector partners.

KrebsOnSecurity obtained an internal June 20 analysis by the National Fusion Center Association (NFCA), which confirmed the validity of the leaked data. The NFCA alert noted that the dates of the files in the leak actually span nearly 24 years — from August 1996 through June 19, 2020 — and that the documents include names, email addresses, phone numbers, PDF documents, images, and a large number of text, video, CSV and ZIP files.

“Additionally, the data dump contains emails and associated attachments,” the alert reads. “Our initial analysis revealed that some of these files contain highly sensitive information such as ACH routing numbers, international bank account numbers (IBANs), and other financial data as well as personally identifiable information (PII) and images of suspects listed in Requests for Information (RFIs) and other law enforcement and government agency reports.”

The NFCA said it appears the data published by BlueLeaks was taken after a security breach at Netsential, a Houston-based web development firm.

“Preliminary analysis of the data contained in this leak suggests that Netsential, a web services company used by multiple fusion centers, law enforcement, and other government agencies across the United States, was the source of the compromise,” the NFCA wrote. “Netsential confirmed that this compromise was likely the result of a threat actor who leveraged a compromised Netsential customer user account and the web platform’s upload feature to introduce malicious content, allowing for the exfiltration of other Netsential customer data.”

Reached via phone Sunday evening, Netsential Director Stephen Gartrell declined to comment for this story.

The NFCA said a variety of cyber threat actors, including nation-states, hacktivists, and financially-motivated cybercriminals, might seek to exploit the data exposed in this breach to target fusion centers and associated agencies and their personnel in various cyber attacks and campaigns.

The BlueLeaks data set was released June 19, also known as “Juneteenth,” the oldest nationally celebrated commemoration of the ending of slavery in the United States. This year’s observance of the date has generated renewed public interest in the wake of widespread protests against police brutality and the filmed killing of George Floyd at the hands of Minneapolis police.

Stewart Baker, an attorney at the Washington, D.C. office of Steptoe & Johnson LLP and a former assistant secretary of policy at the U.S. Department of Homeland Security, said the BlueLeaks data is unlikely to shed much light on police misconduct, but could expose sensitive law enforcement investigations and even endanger lives.

“With this volume of material, there are bound to be compromises of sensitive operations and maybe even human sources or undercover police, so I fear it will put lives at risk,” Baker said. “Every organized crime operation in the country will likely have searched for their own names before law enforcement knows what’s in the files, so the damage could be done quickly. I’d also be surprised if the files produce much scandal or evidence of police misconduct. That’s not the kind of work the fusion centers do.”

,

Planet Linux AustraliaDavid Rowe: Open IP over VHF/UHF

I’ve recently done a lot of work on the Codec 2 FSK modem, here is the new README_fsk. It now works at lower SNRs, has been refactored, and is supported by a suite of automated tests.

There is some exciting work going on with Codec 2 modems and VHF/UHF IP links using TAP/TUN (thanks Tomas and Jeroen) – a Linux technology for building IP links from user space “data pumps” – like the Codec 2 modems.

My initial goal for this work is a “100 kbit/s IP link” for VHF/UHF using Codec 2 modems and SDR. One application is moving Ham Radio data past the 1995-era “9600 bits/s data port” paradigm to real time IP.

I’m also interested in IP over TV Whitespace (spare VHF/UHF spectrum) for emergency and developing world applications. I’m a judge for developing world IT grants and the “last 100km” problem comes up again and again. This solution requires just a Raspberry Pi and RTLSDR. At these frequencies antennas could be simply fabricated from wire (cut for the frequency of operation), and soldered directly to the Pi.

Results and Link Budget

As a first step, I’ve taken another look at using RpiTx for FSK, this time at VHF and starting at a modest 10 kbits/s. Over the weekend I performed some Minimum Detectable Signal (MDS) tests and confirmed the 2FSK modem built with RpiTx, a RTL-SDR, and the Codec 2 modem is right on theory at 10 kbits/s, with a MDS of -120dBm.

Putting this in context, a UHF signal has a path loss of 125dB over 100km. So if you have a line of sight path, a 10mW (10dBm) signal will be 10-125 = -115dBm at your receiver (assuming basic 0dBi antennas). As -115dBm is greater than the -120dBm MDS, this means your data will be received error free (especially when we add forward error correction). We have sufficient “link margin” and the “link” is closed.

Our 10 kbit/s starting point doesn’t sound like much, but even at that rate we get to send 10000*3600*24/8/140 = 771,000 140-byte text messages each day to another station on your horizon. That’s a lot of connectivity in an emergency, or when the alternative where you live is nothing.
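
The arithmetic above is easy to reproduce. Here is a minimal Python sketch of the link budget and the daily message count; the 125dB figure corresponds to free-space path loss over 100km at roughly 440MHz, and the power, antenna gain, MDS and message size are the numbers quoted above.

import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Standard free-space path loss formula, result in dB."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

# Assumptions from the post: 10mW (10dBm) Tx, 0dBi antennas, a 100km
# line-of-sight path at UHF (~440MHz), and a -120dBm modem MDS.
tx_power_dbm = 10.0
path_loss_db = free_space_path_loss_db(100, 440)   # about 125dB
rx_power_dbm = tx_power_dbm - path_loss_db          # about -115dBm
mds_dbm = -120.0
print("Rx power %.1f dBm, link margin %.1f dB" % (rx_power_dbm, rx_power_dbm - mds_dbm))

# Daily capacity at 10 kbit/s, counted in 140 byte text messages.
bits_per_day = 10000 * 3600 * 24
print("140 byte messages per day: %d" % (bits_per_day / 8 / 140))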

Method

I’m using the GitHub PR as a logbook for the work; I quite like GitHub and Markdown. This weekend’s MDS experiments start here.

I had the usual fun and games with attenuating the Rx signal from the Pi down to -120dBm. The transmit signal tries hard to leak around the attenuators via an RF path. I moved the unshielded Pi into another room, and built a “plastic bag and aluminium foil” Faraday cage which worked really well:


These are complex systems and many things can go wrong. Are your Tx/Rx sample clocks close enough? Is your rx signal clipping? Is the gain of your radio sufficient to reduce quantisation noise? Bug in your modem code? DC line in your RTLSDR signal? Loose SMA connector?

I’ve learnt the hard way to test very carefully at each step. First, I run off-air samples through a non-real-time Octave simulation of the modem to visualise what’s going on inside it: a software oscilloscope.

An Over the Cable (OTC) test is essential before trying Over the Air (OTA), as it gives you a controlled environment to spot issues. MDS tests that measure the Bit Error Rate (BER) are also excellent; they effectively absorb every factor in the system and give you an overall score (the BER) you can compare to theory.

Spectral Purity

Here is the spectrum of the FSK signal for a …01010… sequence at 100 kbit/s, at two resolution bandwidths:

The Tx power is about 10dBm; this plot is after some attenuation. I haven’t carefully checked the spurious levels, but the above looks like around -40dBc (off a low 10mW EIRP) over this 1MHz span. If I am reading the Australian regulations correctly (Section 7A of the Amateur LCD), the requirement is 43+10log(P) = 43+10log10(0.01) = 23dBc, so we appear to pass.
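
As a quick sanity check on that number (assuming the 43+10log(P) form quoted above, with P in watts):

import math

p_watts = 0.01  # roughly 10dBm EIRP, as above
required_dbc = 43 + 10 * math.log10(p_watts)
print("Required spurious attenuation: %.0f dBc" % required_dbc)   # 23 dBc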

Conclusion

This is “extreme open source”. The transmitter is software, the modem is software. All open source and free as in beer and speech. No chipsets or application specific radio hardware – just some CPU cycles and a down converter supplied by the Pi and RTLSDR. The only limits are those of physics – which we have reached with the MDS tests.

Reading Further

FSK modem support for TAP/TUN – GitHub PR for this work
Testing a RTL-SDR with FSK on HF
High Speed Balloon Data Links
Codec 2 FSK modem README – includes lots of links and sample applications.

,

CryptogramFriday Squid Blogging: Giant Squid Washes Up on South African Beach

Fourteen feet long and 450 pounds. It was dead before it washed up.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityTurn on MFA Before Crooks Do It For You

Hundreds of popular websites now offer some form of multi-factor authentication (MFA), which can help users safeguard access to accounts when their password is breached or stolen. But people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control. Here’s the story of one such incident.

As a career chief privacy officer for different organizations, Dennis Dayman has tried to instill in his twin boys the importance of securing their online identities against account takeovers. Both are avid gamers on Microsoft’s Xbox platform, and for years their father managed their accounts via his own Microsoft account. But when the boys turned 18, they converted their child accounts to adult, effectively taking themselves out from under their dad’s control.

On a recent morning, one of Dayman’s sons found he could no longer access his Xbox account. The younger Dayman admitted to his dad that he’d reused his Xbox profile password elsewhere, and that he hadn’t enabled multi-factor authentication for the account.

When the two of them sat down to reset his password, the screen displayed a notice saying there was a new Gmail address tied to his Xbox account. When they went to turn on multi-factor authentication for his son’s Xbox profile — which was tied to a non-Microsoft email address — the Xbox service said it would send a notification of the change to the unauthorized Gmail account in his profile.

Wary of alerting the hackers that they were wise to their intrusion, Dennis tried contacting Microsoft Xbox support, but found he couldn’t open a support ticket from a non-Microsoft account. Using his other son’s Outlook account, he filed a ticket about the incident with Microsoft.

Dennis soon learned the unauthorized Gmail address added to his son’s hacked Xbox account also had enabled MFA. Meaning, his son would be unable to reset the account’s password without approval from the person in control of the Gmail account.

Luckily for Dayman’s son, he hadn’t re-used the same password for the email address tied to his Xbox profile. Nevertheless, the thieves began abusing their access to purchase games on Xbox and third-party sites.

“During this period, we started realizing that his bank account was being drawn down through purchases of games from Xbox and [Electronic Arts],” Dayman the elder recalled. “I pulled the recovery codes for his Xbox account out of the safe, but because the hacker came in and turned on multi-factor, those codes were useless to us.”

Microsoft support sent Dayman and his son a list of 20 questions to answer about their account, such as the serial number on the Xbox console originally tied to the account when it was created. But despite answering all of those questions successfully, Microsoft refused to let them reset the password, Dayman said.

“They said their policy was not to turn over accounts to someone who couldn’t provide the second factor,” he said.

Dayman’s case was eventually escalated to Tier 3 Support at Microsoft, which was able to walk him through creating a new Microsoft account, enabling MFA on it, and then migrating his son’s Xbox profile over to the new account.

Microsoft told KrebsOnSecurity that while users currently are not prompted to enable two-step verification upon sign-up, they always have the option to enable the feature.

“Users are also prompted shortly after account creation to add additional security information if they have not yet done so, which enables the customer to receive security alerts and security promotions when they login to their account,” the company said in a written statement. “When we notice an unusual sign-in attempt from a new location or device, we help protect the account by challenging the login and send the user a notification. If a customer’s account is ever compromised, we will take the necessary steps to help them recover the account.”

Certainly, not enabling MFA when it is offered is far more of a risk for people in the habit of reusing or recycling passwords across multiple sites. But any service to which you entrust sensitive information can get hacked, and enabling multi-factor authentication is a good hedge against having leaked or stolen credentials used to plunder your account.

What’s more, a great many online sites and services that do support multi-factor authentication are completely automated and extremely difficult to reach for help when account takeovers occur. This is doubly so if the attackers also can modify and/or remove the original email address associated with the account.

KrebsOnSecurity has long steered readers to the site twofactorauth.org, which details the various MFA options offered by popular websites. Currently, twofactorauth.org lists nearly 900 sites that have some form of MFA available. These range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

CryptogramSecurity and Human Behavior (SHB) 2020

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It's being hosted by the University of Cambridge, which in today's world means we're all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We've done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year's schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

MEStorage Trends

In considering storage trends for the consumer side I’m looking at the current prices from MSY (where I usually buy computer parts). I know that other stores will have slightly different prices but they should be very similar as they all have low margins and wholesale prices are the main factor.

Small Hard Drives Aren’t Viable

The cheapest hard drive that MSY sells is $68 for 500G of storage. The cheapest SSD is $49 for 120G and the second cheapest is $59 for 240G. SSD is cheaper at the low end and significantly faster. If someone needed about 500G of storage there’s a 480G SSD for $97 which costs $29 more than a hard drive. With a modern PC if you have no hard drives you will notice that it’s quieter. For anyone who’s buying a new PC spending an extra $29 is definitely worthwhile for the performance, low power use, and silence.

The cheapest 1TB disk is $69 and the cheapest 1TB SSD is $159. Saving $90 on the cost of a new PC probably isn’t worthwhile.

For 2TB of storage the cheapest options are Samsung NVMe for $339, Crucial SSD for $335, or a hard drive for $95. Some people would choose to save $244 by getting a hard drive instead of NVMe, but if you are getting a whole system then allocating $244 to NVMe instead of a faster CPU would probably give more benefits overall.

Computer stores typically have small margins and computer parts tend to quickly either become cheaper or be obsoleted by better parts. So stores don’t want to stock parts unless they will sell quickly. Disks smaller than 2TB probably aren’t going to be profitable for stores for very long. The trend of SSD and NVMe becoming cheaper is going to make 2TB disks non-viable in the near future.
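
A quick sketch of the price-per-TB comparison used throughout this post (the prices are the MSY examples quoted above and will obviously date quickly):

# (price in AUD, capacity in GB) for the examples quoted above
options = {
    "500G HDD": (68, 500),
    "120G SSD": (49, 120),
    "240G SSD": (59, 240),
    "480G SSD": (97, 480),
    "1TB HDD":  (69, 1000),
    "1TB SSD":  (159, 1000),
    "2TB HDD":  (95, 2000),
    "2TB SSD":  (335, 2000),
}

for name, (price, gb) in options.items():
    print("%-9s $%7.2f per TB" % (name, price / gb * 1000))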

NVMe vs SSD

M.2 NVMe devices are at comparable prices to SATA SSDs. For some combinations of quality and capacity NVMe is about 50% more expensive and for some it’s slightly cheaper (EG Intel 1TB NVMe being cheaper than Samsung EVO 1TB SSD). Last time I checked about half the motherboards on sale had a single M.2 socket so for a new workstation that doesn’t need more than 2TB of storage (the largest NVMe that MSY sells) it wouldn’t make sense to use anything other than NVMe.

The benefit of NVMe is NOT throughput (even though NVMe devices can often sustain over 4GB/s), it’s low latency. Workstations can’t properly take advantage of this because RAM is so cheap ($198 for 32G of DDR4) that compiles etc mostly come from cache, and because most filesystem writes on workstations aren’t synchronous. For servers a large portion of writes are synchronous; for example a mail server can’t acknowledge receiving mail until it knows that it’s really on disk, so there are a lot of small writes that block server processes, and the low latency of NVMe really improves performance. If you are doing a big compile on a workstation (the most common workstation task that uses a lot of disk IO) then the writes aren’t synchronised to disk, and if the system crashes you will just do all the compilation again. While NVMe doesn’t give a lot of benefit over SSD for workstation use (I’ve used laptops with SSD and NVMe and not noticed a great difference), of course I still want better performance. ;)
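
The synchronous-write point is easy to demonstrate. The rough Python micro-benchmark below (a sketch, not a proper storage test) times a burst of small writes with and without an fsync() after each one; on spinning disks the fsync path is dramatically slower, and low-latency NVMe narrows the gap.

import os, time

def timed_small_writes(path, count=200, sync_each=False):
    """Write count 4KiB records, optionally calling fsync() after each one."""
    buf = b"x" * 4096
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    os.unlink(path)
    return elapsed

print("buffered writes: %.3f seconds" % timed_small_writes("bench.tmp"))
print("fsync per write: %.3f seconds" % timed_small_writes("bench.tmp", sync_each=True))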

Last time I checked I couldn’t easily buy a PCIe card that supported 2*NVMe cards, I’m sure they are available somewhere but it would take longer to get and probably cost significantly more than twice as much. That means a RAID-1 of NVMe takes 2 PCIe slots if you don’t have an M.2 socket on the motherboard. This was OK when I installed 2*NVMe devices on a server that had 18 disks and lots of spare PCIe slots. But for some systems PCIe slots are an issue.

My home server has all PCIe slots used by a video card and Ethernet cards and the BIOS probably won’t support booting from NVMe. It’s a Dell server so I can’t just replace the motherboard with one that has more PCIe slots and M.2 on the motherboard. As it’s running nicely and doesn’t need replacing any time soon I won’t be using NVMe for home server stuff.

Small Servers

Most servers that I am responsible for have less than 2TB of storage. For my clients I now only recommend SSD storage for small servers and am recommending SSD for replacing any failed disks.

My home server has 2*500G SSDs in a BTRFS RAID-1 for the root filesystem, and 3*4TB disks in a BTRFS RAID-1 for storing big files. I bought the SSDs when 500G SSDs were about $250 each and bought 2*4TB disks when they were about $350 each. Currently that server has about 3.3TB of space used and I could probably get it down to about 2.5TB if I deleted things I don’t really need. If I was getting storage for that server now I’d use 2*2TB SSDs and 3*1TB hard drives for the stuff that doesn’t fit on SSDs (I have some spare 1TB disks that came with servers). If I didn’t have spare hard drives I’d get 3*2TB SSDs for that sort of server which would give 3TB of BTRFS RAID-1 storage.

Last time I checked Dell servers had a card for supporting M.2 as an optional extra so Dells probably won’t boot from NVMe without extra expense.

Ars Technica has an informative article about WD selling SMR disks as “NAS” disks [1]. The Shingled Magnetic Recording technology allows greater storage density on a platter which leads to either larger capacity or cheaper disks but at the cost of lower write performance and apparently extremely bad latency in some situations. NAS disks are supposed to be low latency as the expectation is that they will be used in a RAID array and kicked out of the array if they have problems. There are reports of ZFS kicking SMR disks from RAID sets. I think this will end the use of hard drives for small servers. For a server you don’t want to deal with this sort of thing, by definition when a server goes down multiple people will stop work (small server implies no clustering). Spending extra to get SSDs just to avoid the risk of unexpected SMR would be a good plan.

Medium Servers

The largest SSD and NVMe devices that are readily available are 2TB, but 10TB disks are commodity items. There are reports of 20TB hard drives being available, but I can’t find anyone in Australia selling them.

If you need to store dozens or hundreds of terabytes then hard drives have to be part of the mix at this time. There’s no technical reason why SSDs larger than 10TB can’t be made (the 2.5″ SATA form factor has more than 5* the volume of a 2TB M.2 card) and it’s likely that someone sells them outside the channels I buy from, but probably at a price higher than what my clients are willing to pay. If you want 100TB of affordable storage then a mid range server like the Dell PowerEdge T640, which can have up to 18*3.5″ disks, is good. One of my clients has a PowerEdge T630 with 18*3.5″ disks in the 8TB-10TB range (we replace failed disks with the largest new commodity disks available, it used to have 6TB disks). ZFS version 0.8 introduced a “Special VDEV Class” which stores metadata and possibly small data blocks on faster media. So you could have some RAID-Z groups on hard drives for large storage and the metadata on a RAID-1 on NVMe for fast performance. For medium size arrays on hard drives having a “find /” operation take hours is not uncommon, and for large arrays having it take days isn’t that uncommon. So far it seems that ZFS is the only filesystem to have taken the obvious step of storing metadata on SSD/NVMe while bulk data is on cheap large disks.

One problem with large arrays is that the vibration of disks can affect the performance and reliability of nearby disks. The ZFS server I run with 18 disks was originally setup with disks from smaller servers that never had ZFS checksum errors, but when disks from 2 small servers were put in one medium size server they started getting checksum errors presumably due to vibration. This alone is a sufficient reason for paying a premium for SSD storage.

Currently the cost of 2TB of SSD or NVMe is between the prices of 6TB and 8TB hard drives, and the ratio of price/capacity for SSD and NVMe is improving dramatically while the increase in hard drive capacity is slow. 4TB SSDs are available for $895 compared to a 10TB hard drive for $549, so SSD is about 4* more expensive per TB. This is probably good for Windows systems, but for Linux systems where ZFS and “special VDEVs” are an option it’s probably not worth considering. Most Linux use cases where 4TB SSDs would work well would be better served by smaller NVMe and 10TB disks running ZFS. I don’t think that 4TB SSDs are at all popular at the moment (MSY doesn’t stock them), but prices will come down and they will become common soon enough. Probably by the end of the year SSDs will halve in price and no hard drives less than 4TB will be viable.

For rack mounted servers 2.5″ disks have been popular for a long time. It’s common for vendors to offer 2 versions of a rack mount server for 2.5″ and 3.5″ disks where the 2.5″ version takes twice as many disks. If the issue is total storage in a server 4TB SSDs can give the same capacity as 8TB HDDs.

SMR vs Regular Hard Drives

Rumour has it that you can buy 20TB SMR disks, but I haven’t been able to find a reference to anyone selling them in Australia (please comment if you know who sells them, and especially if you know the price). I expect that the ZFS developers will soon develop a work-around to solve the problems with SMR disks. Then arrays of 20TB SMR disks with NVMe for “special VDEVs” will be an interesting possibility for storage. I expect that SMR disks will be the majority of the hard drive market by 2023 – if hard drives are still on the market. SSDs will be large enough and cheap enough that only SMR disks will offer enough capacity to be worth using.

I think that it is a possibility that hard drives won’t be manufactured in a few years. The volume of a 3.5″ disk is significantly greater than that of 10 M.2 devices so current technology obviously allows 20TB of NVMe or SSD storage in the space of a 3.5″ disk. If the price of 16TB NVMe and SSD devices comes down enough (to perhaps 3* the price of a 20TB hard drive) almost no-one would want the hard drive and it wouldn’t be viable to manufacture them.

It’s not impossible that in a few years time 3D XPoint and similar fast NVM technologies occupy the first level of storage (the ZFS “special VDEV”, OS swap device, log device for database servers, etc) and NVMe occupies the level for bulk storage with no space left in the market for spinning media.

Computer Cases

For servers I expect that models supporting 3.5″ storage devices will disappear. A 1RU server with 8*2.5″ storage devices or a 2RU server with 16*2.5″ storage devices will probably be of use to more people than a 1RU server with 4*3.5″ or a 2RU server with 8*3.5″.

My first IBM PC compatible system, in 1988, had a 5.25″ hard drive, a 5.25″ floppy drive, and a 3.5″ floppy drive. My current PC is almost the same size and has a DVD drive (that I almost never use), 5 other 5.25″ drive bays that have never been used, and 5*3.5″ drive bays that I have never used (I have only used 2.5″ SSDs). It would make more sense to have PC cases designed around 2.5″ and maybe 3.5″ drives, with no more than one 5.25″ drive bay.

The Intel NUC SFF PCs are going in the right direction. Many of them only have a single storage device but some of them have 2*M.2 sockets allowing RAID-1 of NVMe and some of them support ECC RAM so they could be used as small servers.

A USB DVD drive costs $36, it doesn’t make sense to have every PC designed around the size of an internal DVD drive that will probably only be used to install the OS when a $36 USB DVD drive can be used for every PC you own.

The only reason I don’t have a NUC for my personal workstation is that I get my workstations from e-waste. If I was going to pay for a PC then a NUC is the sort of thing I’d pay to have on my desk.

CryptogramNew Hacking-for-Hire Company in India

Citizen Lab has a new report on Dark Basin, a large hacking-for-hire company in India.

Key Findings:

  • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.

  • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.

  • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.

  • We link Dark Basin with high confidence to an Indian company, BellTroX InfoTech Services, and related entities.

  • Citizen Lab has notified hundreds of targeted individuals and institutions and, where possible, provided them with assistance in tracking and identifying the campaign. At the request of several targets, Citizen Lab shared information about their targeting with the US Department of Justice (DOJ). We are in the process of notifying additional targets.

BellTroX InfoTech Services has assisted clients in spying on over 10,000 email accounts around the world, including accounts of politicians, investors, journalists and activists.

News article. Boing Boing post.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV June 2020 Workshop: Emergency Security Discussion

Jun 20 2020 12:30
Jun 20 2020 14:30
Location: Online event (TBA)

On Friday morning, our prime minister held an unprecedented press conference to warn Australia (Governments, Industry & Individuals) about a sophisticated cyber attack that is currently underway.

Linux Users of Victoria is a subcommittee of Linux Australia.

CryptogramZoom Will Be End-to-End Encrypted for All Users

Zoom is doing the right thing: it's making end-to-end encryption available to all users, paid and unpaid. (This is a change; I wrote about the initial decision here.)

...we have identified a path forward that balances the legitimate right of all users to privacy and the safety of users on our platform. This will enable us to offer E2EE as an advanced add-on feature for all of our users around the globe -- free and paid -- while maintaining the ability to prevent and fight abuse on our platform.

To make this possible, Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message. Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools -- including our Report a User function -- we can continue to prevent and fight abuse.

Thank you, Zoom, for coming around to the right answer.

And thank you to everyone for commenting on this issue. We are learning -- in so many areas -- the power of continued public pressure to change corporate behavior.

EDITED TO ADD (6/18): Let's do Apple next.

Worse Than FailureError'd: Fast Hail and Round Wind

"He's not wrong. With wind and hail like this, an isolated tornado definitely ranks third in severity," Rob K. writes.

 

"Upon linking my Days of Wonder account with Steam, I was initially told that I had 7 days to verify my email before account deletion and then I was told something else..." Ian writes.

 

Harvey wrote, "Great. Thanks for the warm welcome to your site ${AUCTION_WEBSITE}"

 

Peter G. writes, "In this case, I imagine the art department did something like 'OK Google, find image of Pentagon, insert into document'."

 

"I'm happy with my efforts but I feel for Terri. 1,400km in 21 days, 200km in the lead and she's barely overcome by this 'NaN' individual," wrote Roger G.

 

Sam writes, "While I admire the honesty of this particular scammer, I do rather think they missed the point."

 


,

Krebs on SecurityFEMA IT Specialist Charged in ID Theft, Tax Refund Fraud Conspiracy

An information technology specialist at the Federal Emergency Management Agency (FEMA) was arrested this week on suspicion of hacking into the human resource databases of University of Pittsburgh Medical Center (UPMC) in 2014, stealing personal data on more than 65,000 UPMC employees, and selling the data on the dark web.

On June 16, authorities in Michigan arrested 29-year-old Justin Sean Johnson in connection with a 43-count indictment on charges of conspiracy, wire fraud and aggravated identity theft.

Federal prosecutors in Pittsburgh allege that in 2013 and 2014 Johnson hacked into the Oracle PeopleSoft databases for UPMC, a $21 billion nonprofit health enterprise that includes more than 40 hospitals.

According to the indictment, Johnson stole employee information on all 65,000 then current and former employees, including their names, dates of birth, Social Security numbers, and salaries.

The stolen data also included federal form W-2 data that contained income tax and withholding information, records that prosecutors say Johnson sold on dark web marketplaces to identity thieves engaged in tax refund fraud and other financial crimes. The fraudulent tax refund claims made in the names of UPMC identity theft victims caused the IRS to issue $1.7 million in phony refunds in 2014.

“The information was sold by Johnson on dark web forums for use by conspirators, who promptly filed hundreds of false form 1040 tax returns in 2014 using UPMC employee PII,” reads a statement from U.S. Attorney Scott Brady. “These false 1040 filings claimed hundreds of thousands of dollars of false tax refunds, which they converted into Amazon.com gift cards, which were then used to purchase Amazon merchandise which was shipped to Venezuela.”

Johnson could not be reached for comment. At a court hearing in Pittsburgh this week, a judge ordered the defendant to be detained pending trial. Johnson’s attorney declined to comment on the charges.

Prosecutors allege Johnson’s intrusion into UPMC was not an isolated occurrence, and that for several years after the UPMC hack he sold personally identifiable information (PII) to buyers on dark web forums.

The indictment says Johnson used the hacker aliases “DS” and “TDS” to market the stolen records to identity thieves on the Evolution and AlphaBay dark web marketplaces. However, archived copies of the now-defunct dark web forums indicate those aliases are merely abbreviations that stand for “DearthStar” and “TheDearthStar,” respectively.

“You can expect good things come tax time as I will have lots of profiles with verified prior year AGIs to make your refund filing 10x easier,” TheDearthStar advertised in an August 2015 message to AlphaBay members.

In some cases, it appears these DearthStar identities were actively involved in not just selling PII and tax refund fraud, but also stealing directly from corporate payrolls.

In an Aug. 2015 post to AlphaBay titled “I’d like to stage a heist but…,” TheDearthStar solicited people to help him cash out access he had to the payroll systems of several different companies:

“… I have nowhere to send the money. I’d like to leverage the access I have to payroll systems of a few companies and swipe a chunk of their payroll. Ideally, I’d like to find somebody who has a network of trusted individuals who can receive ACH deposits.”

When another AlphaBay member asks how much he can get, TheDearthStar responds, “Depends on how many people end up having their payroll records ‘adjusted.’ Could be $1,000 could be $100,000.”

2014 and 2015 were particularly bad years for tax refund fraud, a form of identity theft which cost taxpayers and the U.S. Treasury billions of dollars. In April 2014, KrebsOnSecurity wrote about a spike in tax refund fraud perpetrated against medical professionals that caused many to speculate that one or more major healthcare providers had been hacked.

A follow-up story that same month examined the work of a cybercrime gang that was hacking into HR departments at healthcare organizations across the country and filing fraudulent tax refund requests with the IRS on employees of those victim firms.

The Justice Department’s indictment quotes from Johnson’s online resume as stating that he is proficient at installing and administering Oracle PeopleSoft systems. A LinkedIn resume for a Justin Johnson from Detroit says the same, and that for the past five months he has served as an information technology specialist at FEMA. A Facebook profile with the same photo belongs to a Justin S. Johnson from Detroit.

Johnson’s resume also says he was self-employed for seven years as a “cyber security researcher / bug bounty hunter” who was ranked in the top 1,000 by reputation on HackerOne, a program that rewards security researchers who find and report vulnerabilities in software and web applications.

,

LongNowRacial Injustice & Long-term Thinking

Long Now Community,

Since 01996, Long Now has endeavored to foster long-term thinking in the world by sharing perspectives on our deep past and future. We have too often failed, however, to include and listen to Black perspectives.

Racism is a long-term civilizational problem with deep roots in the past, profound effects in the present, and so many uncertain futures. Solving the multigenerational challenge of racial inequality requires many things, but long-term thinking is undoubtedly one of them. As an institution dedicated to the long view, we have not addressed this issue enough. We can and will do better.

We are committed to surfacing these perspectives on both the long history of racial inequality and possible futures of racial justice going forward, both through our speaker series and in the resources we share online. And if you have any suggestions for future resources or speakers, we are actively looking.

Alexander Rose

Executive Director, Long Now

Learn More

  • A recent episode of This American Life explored Afrofuturism: “It’s more than sci-fi. It’s a way of looking at black culture that’s fantastic, creative, and oddly hopeful—which feels especially urgent during a time without a lot of optimism.”
  • The 1619 Project from The New York Times, winner of the 02020 Pulitzer Prize for Commentary, re-examines the 400-year legacy of slavery. 
  • A paper from political scientists at Stanford and Harvard analyzes the long-term effects of slavery on Southern attitudes toward race and politics.
  • Ava DuVernay’s 13th is a Netflix documentary about the links between slavery and the US penal system. It is available to watch on YouTube for free here
  • “The Case for Reparations” is a landmark essay by Ta-Nehisi Coates about the institutional racism of housing discrimination. 
  • I Am Not Your Negro is a documentary exploring the history of racism in the United States through the lens of the life of writer and activist James Baldwin. The documentary is viewable on PBS for free here.
  • Science Fiction author N.K. Jemisin defies convention and creates worlds informed by the structural forces that cause inequality.

,

LongNowLong-term Perspectives During a Pandemic

Long Conversation speakers (from top left): Stewart Brand, Esther Dyson, David Eagleman, Ping Fu, Katherine Fulton, Danny Hillis, Kevin Kelly, Ramez Naam, Alexander Rose, Paul Saffo, Peter Schwartz, Tiffany Shlain, Bina Venkataraman, and Geoffrey West.

On April 14th, 02020, The Long Now Foundation convened a Long Conversation¹ featuring members of our board and invited speakers. Over almost five hours of spirited discussion, participants reflected on the current moment, how it fits into our deeper future, and how we can address threats to civilization that are rare but ultimately predictable. The following are excerpts from the conversation, edited and condensed for clarity.

Stewart Brand is co-founder and President of The Long Now Foundation. Photograph: Mark Mahaney/Redux.

The Pandemic is Practice for Climate Change

Stewart Brand

I see the pandemic as practice for dealing with a much tougher problem, and one that has a different timescale, which is climate change. And this is where Long Now comes in, where now, after this — and it will sort out, it’ll take a lot longer to sort out than people think, but some aspects are already sorting out faster than people expected. As this thing sorts out and people arise and say: Well, that was weird and terrible. And we lost so and so, and so and so, and so and so, and so we grieve. But it looks like we’re coming through it. Humans in a sense caused it by making it so that bat viruses could get into humans more easily, and then connecting in a way that the virus could get to everybody. But also humans are able to solve it.

Well, all of that is almost perfectly mapped onto climate change, only with a different timescale. In a sense everybody’s causing it: by being part of a civilization, running it at a much higher metabolic rate, using that much more energy driven by fossil fuels, which then screwed up the atmosphere enough to screw up the climate enough to where it became a global problem caused by basically the global activity of everybody. And it’s going to engage global solutions.

Probably it’ll be innovators in cities communicating with other innovators in other cities who will come up with the needed set of solutions to emerge, and to get the economy back on its legs, much later than people want. But nevertheless, it will get back, and then we’ll say, “Okay, well what do you got next?” Because there’ll now be this point of reference. And it’ll be like, “If we can put a man on the moon, we should be able to blah, blah, blah.” Well, if we can solve the coronavirus, and stop a plague that affected everybody, we should be able to do the same damn thing for climate.

Watch Stewart Brand’s conversation with Geoffrey West.

Watch Stewart Brand’s conversation with Alexander Rose.


Esther Dyson is an investor, consultant, and Long Now Board member. Photograph: Christopher Michel.

The Impact of the Pandemic on Children’s Ability to Think Long-term

Esther Dyson

We are not building resilience into the population. I love the Long Now; I’m on the board. But at the same time we’re pretty intellectual. Thinking long-term is psychological. It’s what you learn right at the beginning. It’s not just an intellectual, “Oh gee, I’m going to be a long-term thinker. I’m going to read three books and understand how to do it.” It’s something that goes deep back into your past.

You know the marshmallow test, the famous Stanford test where you took a two-year-old and you said, “Okay, here’s a marshmallow. You sit here, the researcher is going to come back in 10 minutes, maybe 15 and if you haven’t eaten the first marshmallow you get a second one.” And then they studied these kids years later. And the ones who waited, who were able to have delayed gratification, were more successful in just about every way you can count. Educational achievement, income, whether they had a job at all, et cetera.

But the kids weren’t just sort of randomly, long-term or short-term. The ones that felt secure and who trusted their mother, who trusted — the kid’s not stupid; if he’s been living in a place where if they give you a marshmallow, grab it because you can’t trust what they say.

We’re raising a generation and a large population of people who don’t trust anybody and so they are going to grab—they’re going to think short-term. And the thing that scares me right now is how many kids are going to go through, whether it’s two weeks, two months, two years of living in these kinds of circumstances and having the kind of trauma that is going to make you into such a short-term thinker that you’re constantly on alert. You’re always fighting the current battle rather than thinking long-term. People need to be psychologically as well as intellectually ready to do that.

Watch Esther Dyson’s conversation with Peter Schwartz.

Watch Esther Dyson’s conversation with Ramez Naam.


David Eagleman is a world-renowned neuroscientist studying the structure of the brain, professor, entrepreneur, and author. He is also a Long Now Board member. Photograph: CNN.

The Neuroscience of the Unprecedented

David Eagleman

I’ve been thinking about this thing that I’m temporarily calling “the neuroscience of the unprecedented.” Because for all of us, in our lifetimes, this was unprecedented. And so the question is: what does that do to our brains? The funny part is it’s never been studied. In fact, I don’t even know if there’s an obvious way to study it in neuroscience. Nonetheless, I’ve been thinking a lot about this issue of our models of the world and how they get upended. And I think one of the things that I have noticed during the quarantine—and everybody I talked to has this feeling—is that it’s impossible to do long-term thinking while everything’s changing. And so I’ve started thinking about Maslow’s hierarchy of needs in terms of the time domain.

Here’s what I mean. If you have a good internal model of what’s happening and you understand how to do everything in the world, then it’s easy enough to think way into the distance. That’s like the top of the hierarchy, the top of the pyramid: everything is taken care of, all your physiologic needs, relationship needs, everything. When you’re at the top, you can think about the big picture: what kind of company you want to start, and why and where that goes, what that means for society and so on. When we’re in a time like this, where people are worried about if I don’t get that next Instacart delivery, I actually don’t have any food left, that kind of thing, it’s very hard to think long-term. When our internal models are our frayed, it’s hard to use those to make predictions about the future.

Watch David Eagleman’s conversation with Tiffany Shlain.

Watch David Eagleman’s conversation with Ping Fu.


The Virus as a Common Enemy and the Pandemic as a Whole Earth Event

Danny Hillis and Ping Fu

Danny Hillis: Do you think this is the first time the world has faced a problem, simultaneously, throughout the whole world?

Ping Fu: Well, how do you compare it to World War II? That was also simultaneous, although it didn’t impact every individual. In terms of something that touches every human being on Earth, this may be the first time.

Danny Hillis: Yeah. And also, we all are facing the same problem, whereas during wars, people face each other. They’re each other’s problem.

Ping Fu: I am worried we are making this virus an imaginary war.

Danny Hillis: Yes, I think that’s a problem. On the other hand, we are making it that, or at least our politicians are. But I don’t think people feel that they’re against other people. In fact, I think people realize, much like when they saw that picture of the whole earth, that there’s a lot of other people that are in the same boat they are.

Ping Fu: Well, I’m not saying that this particular imaginary war is necessarily negative, because in history we always see that when there is a common enemy, people get together, right? This feels like the first time the entire earth identified a common enemy, even though viruses have existed forever. We live with viruses all the time, but there was not a political social togetherness in identifying one virus as a common enemy of our humanity.

Danny Hillis: Does that permanently change us, because we all realize we’re facing a common enemy? I think we’ve had a common enemy before, but I don’t think it happened quickly enough, or people weren’t aware of it enough. Certainly, one can imagine that global warming might have done that, but it happens so slowly. This, though, causes a lot of people to realize they’re in it together in real time, in a way that I don’t think we’ve ever been before.

Ping Fu: When you talk about global warming, or clean air, clean water, it’s also a common enemy. It’s just more long term, so it’s not as urgent. And this one is now. That is making people react to it. But I’m hoping this will make us realize there are many other common enemies to the survival of humanity, so that we will deal with them in a more coherent or collaborative way.

Watch Ping Fu’s conversation with David Eagleman.

Watch Ping Fu’s conversation with Danny Hillis.

Watch Danny Hillis’ conversation with Geoffrey West.


Katherine Fulton is a philanthropist, strategist, and author who works for social change. She is also a Long Now Board member. Photograph: Christopher Michel.

An Opportunity for New Relationships and New Allies

Katherine Fulton

One of the things that fascinates me about this moment is that most big openings in history were not about one thing, they were about the way multiple things came together at the same time in surprising ways. And one of the things that happens at these moments is that it’s possible to think about new relationships and new allies.

For instance, a lot of small business people in this country are going to go out of business. And they’re going to be open to thinking about their future, about what they do in the next business they start, and about who their allies are, in ways that don’t fit into any old ideological boxes, I would guess.

When I look ahead at what these new institutions might be, I think they’re going to be hybrids. They’re going to bring people together to work across issue sectors, across business and nonprofit, across countries. They’re going to bring people together in relationships that never would have formed otherwise, because you’ll need these new capabilities in different ways.

You often have a lot of social movement people who are very suspicious of business and historically very suspicious of technology — not so much now. So how might there be completely new partnerships? How might the big tech companies, which are going to be massively empowered by this, invest in new kinds of partnerships and be much more enlightened about creating the conditions in which their businesses can succeed?

So it seems to me we need a different kind of global institution than the ones that were invented after World War II.

Watch Katherine Fulton’s conversation with Ramez Naam.

Watch Katherine Fulton’s conversation with Kevin Kelly.


Kevin Kelly is Senior Maverick at Wired, a magazine he helped launch in 01993. He served as its Executive Editor from its inception until 01999. From 01984 to 01990, Kelly was publisher and editor of the Whole Earth Review, a journal of unorthodox technical news. He is also a Long Now board member. Photograph: Christopher Michel.

The Loss of Consensus around Truth

Kevin Kelly

We’re in a moment of transition, accelerated by this virus, where we’ve gone from trusting in authorities to a postmodern world in which we have to assemble truth ourselves. This has put us in a position where all of us, myself included, have difficulty figuring out, like, “Okay, there’s all these experts claiming expertise, even among doctors, and there’s a little bit of contradictory information right now.”

Science works well at getting a consensus when you have time to check each other, to have peer review, to go through publications, to take the doubts and to retest. But it takes a lot of time. And in this fast-moving era where this virus isn’t waiting, we don’t have the luxury of having that scientific consensus. So we are having to turn, it’s like, “Well this guy thinks this, this person thinks this, she thinks that. I don’t know.”

We’re moving so fast that we’re moving ahead of the speed of science, even though science is itself accelerating. That results in this moment where we don’t know who to trust. Most of what everybody knows is true. But sometimes it isn’t. So we have a procedure to figure that out with what’s called science where we can have a consensus over time and then we agree. But if we are moving so fast and we have AI come in and we have viruses happening at this global scale, which speeds things up, then we’re outstripping our ability to know things. I think that may be with us longer than just this virus time.

We are figuring out a new way to know, and in what knowledge we can trust. Young people have to understand in a certain sense that they have to assemble what they believe in themselves; they can’t just inherit that from an authority. You actually have to have critical thinking skills, you actually have to understand that for every expert there’s an anti-expert over here and you have to work at trying to figure out which one you’re going to believe. I think, as a society, we are engaged in this process of coming up with an evolution in how we know things and where to place our trust. Maybe we can make some institutions and devices and technologies and social etiquettes and social norms to help us in this new environment of assembling truth.

Watch Kevin Kelly’s conversation with Katherine Fulton.

Watch Kevin Kelly’s conversation with Paul Saffo.


Ramez Naam holds a number of patents in technology and artificial intelligence and was involved in key product development at Microsoft. He was also CEO of Apex Nanotechnologies. His books include the Nexus trilogy of science fiction thrillers. Photograph: Phil Klein/Ramez Naam.

The Pandemic Won’t Help Us Solve Climate Change

Ramez Naam

There have been a lot of conversations, op-eds, and Twitter threads about what coronavirus teaches us about climate change, and about whether it’s an example of the type of thinking that we need.

I’m not so optimistic. I still think we’re going to make enormous headway against climate change. We’re not on a path for two degrees Celsius, but I don’t think we’re on a path for the four or six degrees Celsius you sometimes hear talked about. When I look at it, coronavirus is actually a much easier challenge for people to conceptualize. I think of humans as hyperbolic discounters: we care far more about the very near term than we do about the long term, and we discount the future at a very, very steep rate.

And so even with coronavirus — well, first, the coronavirus got us these incredible reductions in carbon emissions and especially these air quality changes. You see these pictures of New Delhi in India before and after, like a year ago versus this week. And it’s just brown haze versus crystal clear blue skies; it’s just amazing.

But it’s my belief that when the restrictions are lifted, people are going to get back in their cars. And we still have billions of people that barely have access to electricity, to transportation to meet all their needs and so on. And so that tells me something: that even though we clearly can see the benefit of this, nevertheless people are going to make choices that optimize for their convenience, their whatnot, that have this effect on climate.

And that in turn tells me something else, which is: in the environmentalist movement, there are a couple of different trains of thought about how we address climate change. On the far left of the deep greens is the notion of de-growth: limiting economic growth, even reducing the size of the economy. And I think what we see after coronavirus will make it clear that that’s just not going to happen. People are barely willing to be in lockdown for something that could kill them a couple of weeks from now. They’re going to be even less willing to do that for something they perceive as a multi-decade threat.

And so the solution still has to be that we make clean choices, clean electricity, clean transportation, and clean industry, cheaper and better than the old dirty ones. That’s the way that we win. And that’s a hard story to tell people to some extent. But it is an area where we’re making progress.

Watch Ramez Naam’s conversation with Esther Dyson.

Watch Ramez Naam’s conversation with Katherine Fulton.


Alexander Rose is an industrial designer and has been working with The Long Now Foundation and computer scientist Danny Hillis since 01997 to build a monument-scale, all-mechanical 10,000 Year Clock. As the director of Long Now, Alexander founded The Interval and has facilitated a range of projects including The Organizational Continuity Project, The Rosetta Project, Long Bets, Seminars About Long Term Thinking, Long Server and others. Photograph: Christopher Michel.

The Lessons of Long-Lived Organizations

Alexander Rose

Any organization that has lasted for centuries has lived through multiple events like this. Any business that’s been around for exactly 102 years lived through the last of these pandemics, one that was much more vast, much less understood, and weathered with much less communication.

I’m talking right now to heads of companies that have been around for several hundred years (in some cases some of the better-run family ones, in some cases corporate ones that have good records), and they’re pulling from those times. But even more important than pulling the exact strategic things that helped them survive those times, they’re able to tell their own corporate or organizational culture the story that they survived, which is really powerful. It’s a different narrative than what you hear from our government and others, who are largely trying to avoid admitting that this was a predictable event even though it was. They’re trying to say that this was a complete black swan, that we couldn’t have known it was going to happen.

There are two problems with that. One, it discounts all of the previous prediction work and planning work, which in some cases has been heeded by some cultures and in some cases not. But more importantly, I think, it gets people out of the narrative that we have survived, that we can survive, that things are going to come back to normal, and that they can come back so far to normal that we will actually be bad at planning for the next one in a hundred years if we don’t put in new safeguards.

And I think it’s crucial to get that narrative back into the story: that we do survive these things. Things do go back to normal. We’re watching movies right now on Netflix where you watch people touch and interact, and it just seems alien. But I think we will forget it quicker than we adopted it.

Watch Alexander Rose’s conversation with Bina Venkataraman.

Watch Alexander Rose’s conversation with Stewart Brand.


Paul Saffo is a forecaster with over two decades of experience helping stakeholders understand and respond to the dynamics of large-scale, long-term change. Photograph: Vodafone.

How Do We Inoculate Against Human Folly?

Paul Saffo

I think, in general, yes, we’ve got to work more on long-term thinking. But the failure with this pandemic was not long-term thinking at all. The failure was action.

I think long-term thinking will become more common. The question is can we take that and turn it to action, and can we get the combination of the long-term look ahead, but also the fine grain of understanding when something really is about to change?

I think that this recent event is a demonstration that the whole futurist field has fundamentally failed. All those forecasts had no consequence. All it took was the unharmonic convergence of some short-sighted politicians who had their hands around the throat of policy to completely unwind all the foresight and all the preparation. So thinking about what’s different in 50 or 100 or 500 years, I think the fundamental challenge is: how do we inoculate civilization against human folly?

This is the first of the pandemics to come, and I think, despite the horror that has happened, despite the tragic loss of life, that we’re going to look at this pandemic the way we looked at the ’89 earthquake in San Francisco, and recognize that it was a pretty big one. It wasn’t the big one in terms of virus lethality; it was more a political pandemic in terms of the idiotic response. The best thing that could come out of this is if we finally take public health seriously.

Watch Paul Saffo’s conversation with Kevin Kelly.

Watch Paul Saffo’s conversation with Tiffany Shlain.


Peter Schwartz is the Senior Vice President for Global Government Relations and Strategic Planning for Salesforce.com. Prior to that, Peter co-founded Global Business Network, a leader in scenario planning, in 01988, where he served as chairman until 02011. Photograph: Christopher Michel.

How to Convince Those in Positions of Power to Trust Scenario Planning

Peter Schwartz

Look, I was a consultant in scenario planning, and I can tell you that telling a CEO, “Listen, I gave you the scenarios and you didn’t listen,” was never a way to get more business.

My job was to make them listen. To find ways to engage them in such a way that they took it seriously. If they didn’t, it was my failure. Central to the process of thinking about the future like that is finding out how you engage the mind of the decision maker. Good scenario planning begins with a deep understanding of the people who actually have to use the scenarios. If you don’t understand that, you’re not going to have any impact.

[The way to make Trump take this pandemic more seriously would’ve been] to make him a hero early. That is, find a way to tell the story in such a way that Donald Trump in January, as you’re actually warning him, can be a hero to the American people, because of course that is what he wants in every interaction, right? This is a narcissist, so how do you make him be a leader in his narcissism from day one?

The honest truth is that that was part of the strategy with some CEOs that I’ve worked with in the past. I think Donald Trump is an easy person to understand in that respect; he’s very visible. The problem was that he couldn’t see any scenario in which he was a winner, and so he had to deny. You have to give him a route out, a route where he can win, and that’s what I think the folks around him didn’t give him.

Watch Peter Schwartz’s conversation with Bina Venkataraman.

Watch Peter Schwartz’s conversation with Esther Dyson.


Honored by Newsweek as one of the “Women Shaping the 21st Century,” Tiffany Shlain is an Emmy-nominated filmmaker, founder of The Webby Awards and author of 24/6: The Power of Unplugging One Day A Week. Photograph: Anitab.

The Power of Unplugging During the Pandemic

Tiffany Shlain

We’ve been doing this Tech Shabbat for 10 years now, unplugging on Friday night and having a day off the network. People ask me: “Can you unplug during the pandemic?” Not only can I, but it has been so important for Ken and me and the girls at a moment when there’s such a blur about time. We know all the news out there; we just need to stay inside and have a day to be together without the screens and actually reflect, which is what the Long Now is all about.

I have found these Tech Shabbats a thousand times more valuable for my health, because I’m sleeping so well on Friday night. I just feel like I get perspective, which I think I’m losing during the week because there’s so much coming at me all the time. I think this concept of a day of rest has lasted for 3,000 years for a reason. And what does that mean today, and what does that mean in a pandemic? It means that you go off the screens and are with your family in an authentic way, or with yourself in an authentic way if you’re not married or with kids. Just take a moment to process. There’s a lot going on, and it would be a missed opportunity if we don’t put pen to paper, and I literally mean paper, to write down some of our thoughts right now. It’s so good to use your mind in a different way.

The reason I started doing Tech Shabbats in the first place is that I lost my father to brain cancer and Ken’s and my daughter was born within days. And it was one of those moments where I felt like life was grabbing me by the shoulders and saying, “Focus on what’s important.” And that series of dramatic events made me change the way I lived. And I feel like this moment that we’re in is the earth and life grabbing us all by the shoulders and saying, “Focus on what’s important. Look at the way you’re living.” And so I’m hopeful that this very intense, painful experience gets us thinking about how to live differently, about what’s important, about what matters. I’m hopeful that real behavioral change can come out of this very dramatic moment we’re in.

Watch Tiffany Shlain’s conversation with Paul Saffo.

Watch Tiffany Shlain’s conversation with David Eagleman.


Bina Venkataraman is the editorial page editor of The Boston Globe. Previously, she served as a senior adviser for climate change innovation in the Obama White House, was the director of Global Policy Initiatives at the Broad Institute of MIT and Harvard, and taught in the Program on Science, Technology, and Society at MIT. Photograph: Podchaser.

We Need a Longer Historical Memory

Bina Venkataraman

We see this pattern—it doesn’t even have to be over generations—that when a natural disaster happens in an area, an earthquake or a flood, we see spikes in people going out and buying insurance right after those events, when they’re still salient in people’s memory, when they’re afraid of those events happening again. But as the historical memory fades, as time goes on, people don’t buy that insurance. They forget these disasters.

I think historical memory is just one component of a larger gap between prediction and action, and what is missing in that gap is imagination. Historical memory is one way to revive the imagination about what’s possible. But I also think it’s beyond that, because with some of the events and threats that are predicted, they might not have perfect analogs in history. I think about climate change as one of those areas where we’ve just never actually had a historical event or anything that even approximates it in human experience, and so cognitively it’s really difficult for people, whether it’s everyday people in our lives or leaders, to be able to take those threats or opportunities seriously.

People think about the moon landing as this incredible feat of engineering, and of course it was, but before it was a feat of engineering, it was a feat of imagination. To accomplish something unprecedented in human experience takes leaps of imagination, and they can come from different sources: from competition, from the knowledge of history, from technology, from story, from myth.

Watch Bina Venkataraman’s conversation with Alexander Rose.

Watch Bina Venkataraman’s conversation with Peter Schwartz.


Theoretical physicist Geoffrey West was president of the Santa Fe Institute from 02005 to 02009 and founded the high energy physics group at Los Alamos National Laboratory. Photograph: Minesh Bacrania Photography.

The Pandemic is a Red Light for Future Planetary Threats

Geoffrey West

If you get sick as an individual, you take time off. You go to bed. You may end up in hospital, and so forth. And then hopefully you recover in a week, or two weeks, a few days. And of course, you have built into your life a kind of capacity, a reserve, that even though you’ve shut down — you’re not working, you’re not really thinking, probably — nevertheless you have that reserve. And then you recover, and there’s been sufficient reserve of energy, metabolism, finances, that it doesn’t affect you. Even if it is potentially a longer illness.

I’ve been trying to think of that as a metaphor for what’s happening to the planet. And of course what you realize is how little we actually have in reserve: it didn’t take very much, with us as a globe, as a planet, going to bed for a few days, a few weeks, a couple of months, before we quickly ran out of our resources.

And some of the struggle is to try to come to terms with that, and to reestablish ourselves. Part of the reason for that is, of course, that we as individuals are in a sort of metastable state, whereas the planet, even on these very short timescales, is continually changing and evolving. And we live at a time when the socioeconomic forces are themselves exponentially expanding. This is what causes the problem for us: the acceleration of time and the continual pressures that are part of the fabric of society.

We’re in a much stronger position than we would have been in 100 years ago during the Flu Epidemic of 01918. I’m confident that we’re going to get through this and reestablish ourselves. But I see it as a red light going on, a little rehearsal for some of the bigger questions that we’re going to have to face in terms of threats to the planet.

Watch Geoffrey West’s conversation with Danny Hillis.

Watch Geoffrey West’s conversation with Stewart Brand.


Footnotes

[1] Long Conversation is a relay of 20-minute one-to-one conversations; each speaker has an unscripted conversation with the speaker before them, and then speaks with the next participant before rotating off. This relay conversation format was first presented under the auspices of Artangel in London at a Longplayer performance in 02009.

,

Chaotic IdealismMiho

,

LongNowThe Cataclysm Sentence

WNYC’s Radiolab recently released a podcast about what forms of knowledge are worth passing on to future generations.

One day in 1961, the famous physicist Richard Feynman stepped in front of a Caltech lecture hall and posed this question to a group of undergraduate students: “If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence was passed on to the next generation of creatures, what statement would contain the most information in the fewest words?” Now, Feynman had an answer to his own question – a good one. But his question got the entire team at Radiolab wondering, what did his sentence leave out? So we posed Feynman’s cataclysm question to some of our favorite writers, artists, historians, futurists – all kinds of great thinkers. We asked them, “What’s the one sentence you would want to pass on to the next generation that would contain the most information in the fewest words?” What came back was an explosive collage of what it means to be alive right here and now, and what we want to say before we go.

The episode’s framing is very much in line with our Manual For Civilization project. A few Long Now Members and past speakers contributed answers to the project, including Alison Gopnik, Maria Popova, and James Gleick.

,

Chaotic IdealismObesity, COVID, and Statistical Observations

I have been watching some YouTube videos from doctors and scientists reviewing the latest research on COVID-19, and when they talk about the effect of comorbidities on disease severity and mortality in COVID-19, they often mention obesity. They do not seem entirely aware of the obesity rates in the US, and how they might affect the interpretation of studies done in the United States.

In the United States, 42.4% of all adults are obese (https://www.cdc.gov/obesity/index.html). Among a sample of people hospitalized for COVID in New York City, 41.7% were obese. These sorts of numbers are often cited as a reason to think that obesity may be associated with more severe disease (requiring hospitalization), but notice the base rate: if people hospitalized for COVID have roughly the same level of obesity as people in general in the USA, then those numbers do not support the idea that obesity alone is a risk factor for severe disease in the US population.

This does not hold true for severe (morbid) obesity: The base rate for that is 9.2% in the USA, but the proportion of hospitalized COVID patients with severe obesity was 18%. (This was before controlling for comorbidities, which people with severe obesity usually have; the chicken-and-egg problem of whether they are fat because they are unhealthy, or unhealthy because they are fat, is something medicine is still working on.)

This implies that the proportion of obese, but not morbidly obese, people in the sample of those hospitalized for COVID should be about 23.7% (41.7% minus 18%), compared to the 33.2% rate of mild-to-moderate obesity in the general population (42.4% minus 9.2%). If this difference is significant, as it should be with a sample of over five thousand, that actually supports the idea that mild-to-moderate obesity could be a protective factor, while morbid obesity is still a risk factor. (However: The paper did not address this idea, and I do not know if the difference is statistically significant; also, I do not have the obesity data for New York City and do not know if it differs from that of the general population.)
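If you want to check my back-of-the-envelope reasoning yourself, here is a rough Python sketch. It simply takes the percentages quoted above at face value and, for lack of better numbers, assumes the full Richardson et al. cohort of roughly 5,700 patients had usable BMI data (the real subsample is smaller), then runs a one-sample, two-sided z-test of the hospitalized mild-to-moderate obesity rate against the 33.2% population rate. The sample size and the choice of test are my assumptions, not anything from the paper.

```python
# Back-of-the-envelope check of the claim above. Assumptions (mine, not the
# paper's): the percentages quoted in this post are taken at face value, and
# the full ~5,700-patient cohort is used as the sample size.
from math import sqrt, erf

# General US adult population (CDC figures quoted above)
pop_obese_total  = 0.424   # all obesity
pop_obese_severe = 0.092   # severe (morbid) obesity
pop_obese_mild   = pop_obese_total - pop_obese_severe     # ~33.2%

# Hospitalized NYC COVID cohort (figures quoted above)
hosp_obese_total  = 0.417
hosp_obese_severe = 0.18
hosp_obese_mild   = hosp_obese_total - hosp_obese_severe  # ~23.7%

n = 5700  # assumed sample size

# One-sample, two-sided z-test: does the hospitalized mild-to-moderate
# obesity rate differ from the 33.2% population rate?
se = sqrt(pop_obese_mild * (1 - pop_obese_mild) / n)
z = (hosp_obese_mild - pop_obese_mild) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"hospitalized {hosp_obese_mild:.1%} vs population {pop_obese_mild:.1%}")
print(f"z = {z:.1f}, two-sided p = {p_value:.2g}")
```

Under those assumptions the roughly ten-point gap is far beyond statistical noise (z of about -15), which is consistent with my hunch; a real analysis would of course use the actual number of patients with recorded BMI and adjust for age and other comorbidities.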

It might seem like a quirk of the data, but I think it is very important for us to notice, because if people in the overweight/obese range are worried about COVID and go on severe diets to try to lose weight and protect themselves, the low calorie intake may cause their bodies to slow their metabolism, partly by reducing their immune response. People on severe diets may in fact become less resistant to the coronavirus because they are trying to lose weight.

A very gradual diet is probably still safe, but I have not studied what level of calorie restriction, in the absence of micronutrient deficiency, is likely to cause immunosuppression. Unless the goal of weight loss is to cure or better control some comorbidity that is associated with higher COVID death rates, it seems that until we know more, the best approach for many overweight and obese people is that of moderation and common sense: A varied, healthful diet without calorie restriction, combined with sunshine and exercise.

Reference:
Richardson S, Hirsch JS, Narasimhan M, et al. Presenting Characteristics, Comorbidities, and Outcomes Among 5700 Patients Hospitalized With COVID-19 in the New York City Area. JAMA. Published online April 22, 2020. doi:10.1001/jama.2020.6775

,

LongNowKim Stanley Robinson: “The Coronavirus is Rewriting Our Imaginations.”

Science Fiction author Kim Stanley Robinson has written a powerful meditation on what the pandemic heralds for the future of civilization in The New Yorker.

Possibly, in a few months, we’ll return to some version of the old normal. But this spring won’t be forgotten. When later shocks strike global civilization, we’ll remember how we behaved this time, and how it worked. It’s not that the coronavirus is a dress rehearsal—it’s too deadly for that. But it is the first of many calamities that will likely unfold throughout this century. Now, when they come, we’ll be familiar with how they feel.

What shocks might be coming? Everyone knows everything. Remember when Cape Town almost ran out of water? It’s very likely that there will be more water shortages. And food shortages, electricity outages, devastating storms, droughts, floods. These are easy calls. They’re baked into the situation we’ve already created, in part by ignoring warnings that scientists have been issuing since the nineteen-sixties. Some shocks will be local, others regional, but many will be global, because, as this crisis shows, we are interconnected as a biosphere and a civilization.

Kim Stanley Robinson, “The Coronavirus is Rewriting Our Imaginations,” in The New Yorker.

Kim Stanley Robinson has spoken at Long Now on three occasions.

,

Sam VargheseUnder lockdown and it’s beginning to bore me

A second month under lockdown. Outside it looks bleak.

And the same applies to indoors.

Suddenly the simple act of taking one’s car to the petrol pump seems like a wonderful outing.

Going to the supermarket at 7am in order to get in at a time when the crowd is less is another feature of this lockdown.

But at times even those visits are a waste of time and one comes away without essentials.

When will it all end?

,

Chaotic IdealismUphill. Both ways. You kids got it easy.

I’m stranded at home, because I can’t go out. I can’t work. To survive, I fill out paperwork for the government, proving that I need food and shelter, constantly facing the default assumption that I am trying to cheat the system.

I look at the rest of the world, people who say they are going crazy because they can’t leave their homes, who are enraged and frustrated because they are having trouble with the unemployment office, because they’ve waited a week or two weeks to get benefits.

And I think: Well, now they know what it’s like. Because unlike the people who have been dealing with this for a month or two, I have been dealing with it for fifteen years now. I waited six months for benefits. Many people wait two years. Driving is not possible for me, nor is public transportation readily available. The way people live in isolation now, short on money and housebound, is the way my life has been for over a decade.

I’m disabled, and that means I’m a second-class citizen. The world takes it for granted that I have to live like this. And, however much I wish I could wipe this virus from the face of the earth so it would never make anyone sad, scared, or lonely again, I find it vaguely satisfying to have people finally acknowledge that the way I have had to live is lonely, unjust, and frustrating.

Don’t get me wrong; I don’t hate my life, or even my disability. I find happiness and I’m satisfied. But there are annoying things in my life that nobody seems to recognize as annoying, that people seem to take for granted as being part of the experience of having a disability and thus unchangeable. But they are not unchangeable. They come from society being too inflexible to include us the way it should. When people are upset about the things I have had to deal with for years, that tells me that those things really are as unacceptable as I say they are, even if, normally, nobody seems to think so.

,

Chaotic IdealismWorried about India

I live in the United States, where things are getting pretty bad. But I keep thinking about India.

When I was a college student, I had Indian classmates–international students. They were the ones you could always count on to do their part in a group project; they were the ones setting the curve. Once you got past their accents, they were just as nice and personable as anybody else. And they had a passion for learning. I guess it stands to reason that if you are willing to go to a whole different country to go to college, you would have to be somebody who really cares about learning.

Now I can’t stop thinking about them–the laughing fellow who could make magic with circuit diagrams, the tiny short girl who’s probably a top-notch medical resident by now judging by how well she could explain anything in anatomy. I graduated years ago, so they must be home now.

I keep hearing about how India has a shortage of doctors, India has a shortage of hospitals, India has huge areas where you have to travel for ages to get medical care. About how they’re crowded in the cities, and isolated in the countryside, and have so many languages that sometimes it’s hard for doctors to even communicate with patients. They love technology, they love science, but they’re just getting started putting it in place. There are lots of places where you can’t even get Internet.

And here I am, in the United States, looking at New York City on the news and seeing Mumbai or New Delhi, and I just… feel so helpless.

I know none of you can do anything about it, any more than I can, but you’re good listeners. So thanks for listening. And if any of you are in India, please take care of yourselves and stay healthy.

,

Chaotic IdealismMasks 101

As you may know, the CDC recently recommended that, to slow the spread of coronavirus, people should wear masks–even cloth masks–when in public.

So here’s what you need to know.

Types of masks:

N95 respirators can protect you from breathing in the virus. They should be reserved for hospital employees, because they are scarce and need to go to the people who need them most.

Surgical masks will not protect you from breathing in the virus, but they will protect others from any virus that you may breathe out.

Cloth masks–homemade masks, bandanas, and other makeshift alternatives–also cannot protect you from breathing in the virus; but they will protect others around you nearly as well as a surgical mask can.

But I’m not sick. I don’t need to wear a mask.

This coronavirus plays a nasty trick: it will spread even if you don’t have symptoms. Some people, especially young, strong people, never have symptoms at all. Other people will spread the virus before they feel sick. Because we can’t know whether we have the virus, it’s smart to wear a mask whether we feel sick or not. This is especially important if you are young and healthy: you are more likely to get the coronavirus without showing symptoms, but can still spread it to someone who is older or who has a chronic illness.

I’m worried I’ll look silly.

The more people wear masks, the more normal it will become, and the more the people around you will decide to wear masks, too. If you don’t want to look clinical, make your mask out of bright colors; or use a bandana or scarf. Be confident–nobody questions somebody who looks like they know what they are doing!

When should I wear a mask?

Wear a mask anytime you go somewhere frequented by people outside your household.

Can I go outside without a mask?

Being outdoors without a mask is safe if you stay away from other people. In fact, spending time outdoors is very important–without sunshine, you can get Vitamin D deficiency, which can weaken your immune system.

Where do I get a mask?

If you can’t buy surgical masks, you can make your own. There are plenty of patterns on the Internet. Here are some patterns we are currently sewing to be used as backups for health-care workers: https://sewmasks4cincy.org/

If you can’t sew, contact someone who can. Wash home-made masks coming from another household before wearing them.

Mask Safety:

1. Don’t share masks with other people. One per person.

2. Know which side of your mask is the outside, and which is the inside. Never put the outside of the mask against your face. If your mask looks identical on both sides, mark it so you know which is which.

3. Wear the mask correctly. It needs to go over both your mouth and nose, and snugly fit your face so that you are breathing through it, not around it.

4. If using disposable masks, dispose of them after wearing them once. If you have a cloth mask or bandana, wash it in soap and hot water. Soap breaks apart the virus, killing it.

5. When you take off the mask, do so gently and without letting the outside of it make contact with your face. Don’t swing it around in the air; that could fling virus into your environment.

6. Masks don’t work as well if they get wet, either from your breath or from rain. If it’s wet, switch it out for a new one.

7. Don’t let wearing a mask make you overconfident. Your mask helps keep others safe, but you still need to wash your hands and keep your distance to protect yourself.

8. Tie your mask securely, so you aren’t reaching up to adjust it. Adjusting your mask means touching your face, and as you know, touching your face can transfer virus from your hands to your face. If you find you tend to mess with your mask, unlearn the habit before it gets established.

To those who can sew:

Please be willing to pitch in and help your neighbors. Don’t charge for the masks you make–nobody should have to go without one because they are choosing between that and their dinner. If you are low on cloth, request that people give you their pillowcases and sheets–100% cotton and tightly woven–for you to make into masks. Remember that any object you receive from another household must be washed before you work with it; so toss those sheets straight into the laundry. When you give a mask to someone else, put it into a ziplock bag, sanitize the outside of the bag, and instruct them to wash their mask when they receive it.

Where do you get your information?

This is information I have gathered from the WHO, CDC, and from medical doctors. Sew Masks 4 Cincy provides masks to hospitals, but acknowledges that home-made masks are inferior to manufactured ones and are to be used only in case of shortage.

Remember:

Protect your neighbors! Wear a mask!

,

Chaotic IdealismHow to Boost Your Immune System

There’s been a lot of really, really ridiculous pseudoscience on “boosting your immune system” out there. Some of it is harmless (Eat garlic! Take hot baths!) and some of it is potentially fatal (Fish-tank cleaner? Bleach? Rubbing alcohol? Seriously, people? Just don’t).

So as to combat this misinformation, I am posting here a list of things you can do–mostly enjoyable or at least not unpleasant–to boost your immune system. And they are all confirmed by actual, properly-designed studies, rather than by your eccentric aunt’s Facebook posts.

Sleep enough. If you’re not waking up before your alarm clock goes off, you probably aren’t sleeping enough.

Get a varied diet. That means carbs, fat, protein, plus fiber and micronutrients. Listen to your body–you often crave the sorts of things you most need. Multivitamins are useful to prevent malnutrition if you are eating a very restricted diet for whatever reason, but otherwise are unnecessary.

Spend time outdoors in the sunshine; if you are dark-skinned, consider Vitamin D supplements as well. Twenty minutes a day is fine. During the winter, even light-skinned people may want to take supplements.

This is not the time to go on a crash diet or overwork yourself (alas for our medical professionals and essential workers, who are doing exactly that and know exactly how dangerous it is, but can’t help it. We love you; we are rooting for you. Be brave).

Drink enough water–it helps keep your liver, kidneys, and digestive system happy.

Get some exercise, especially cardiovascular exercise. If you are new to it, take it slow. Don’t overdo it, whether you are starting out as an athlete or a couch potato. When your muscles are sore, let them rest. A walk outside is a good way to combine exercise and sunshine. The best sort of exercise is the sort you enjoy–Pokemon Go, anyone?

Alcohol is a bad idea in large quantities (one or two drinks is probably fine).

Consider stopping or reducing your smoking. Inhaling smoke, from whatever source, can paralyze the cilia in your lungs. (Okay, I said these were mostly going to be enjoyable or not unpleasant… here’s the inevitable exception. But you owe it to yourself to try. Besides, won’t you be so proud of yourself for stopping smoking if you manage it? It’s a seriously difficult thing to do, and an achievement that can be your own personal silver lining to the COVID epidemic.)

Consider the risks of pregnancy. If you are sexually active and pregnancy is a possibility, consider using condoms (or similar over-the-counter birth control), since pregnancy depresses the immune system somewhat to keep you/your partner from miscarrying (it also strengthens some aspects of the immune system to protect the fetus). If you are already pregnant, keep up with prenatal visits via telemedicine if at all possible.

Stay socially connected. I know they call it “social distancing”, but what it really is, is physical distancing. Your mind and heart need not keep six feet of distance. Also, pets can’t catch COVID–feel free to snuggle to your heart’s content. (If you and someone else are petting the same pet, be aware you could transfer germs that way, so wash the pet, wash your hands, or both.) You extreme introverts out there: Stay engaged with the minds of other people. Read, watch videos, write letters, listen to music, look at art. Your goal is to keep yourself connected into the web of human ideas.

[5/6 Update re. pets and COVID: First some big cats in a zoo, and then several domestic cats, have now tested positive and showed symptoms. The domestic cats were infected in an experimental context, but it implies that cats can catch the virus. What this changes is that if you have a cat in your household, your cat needs to be included in your social-distancing plans. Cats should stay inside and have contact only with household members. Since we would long ago have noticed if COVID could severely affect cats–or even easily infect them–it is highly likely that COVID is not a risk to cats unless they are already very frail or very young. But the small risk of your cat carrying the virus to someone else means that cats should be kept inside, and the precautions you take with household members should also apply to cats. Dogs are still questionable, but at this point, I would recommend including dogs as well–if only because, since dogs usually live with the family or in a fenced-in area, you don’t need to change very much other than not letting non-household members pet your dog. If your cat insists on going outside, consider a leash and harness.]

Set up a reliable schedule, with meaningful things to do. This helps fight anxiety and a sense of purposelessness, both of which contribute to the loss of morale that is associated (yes, scientifically) with a lowered immune response.

Do things you find engaging, interesting, and fun. Once again, feeling purposeless and anxious will stress you out. That’s right: I am literally prescribing video games, trashy novels, and guilty-pleasure TV shows as a preventative for COVID. Enjoy. And stay healthy!

,

Chaotic IdealismChanges

I’m not too mobile at the best of times. Oh, sure, I can walk, no problem; but I can’t drive–and probably shouldn’t, since my reaction time and my ability to make decisions quickly are much impaired. (I have often thought that if there were nothing and nobody but me on the road, I should be a perfect driver. Alas, that is not the case.) So I’m limited to a few miles around where I live, and to wherever a friend or a taxi will drive me when I’m not in danger of wearing out my welcome or my funding.

American life takes driving for granted. People live so far away from their jobs that visiting home would have been, before the car, a yearly journey undertaken with much preparation. And like most disabled people living in a world of the temporarily-abled, I find that my need to have things closer, or to be allowed to do them at a distance, is considered an overly demanding request.

Enter the coronavirus. Suddenly, everything is wonderfully, blessedly accessible. The volunteer work I do now, I do from home. Nobody minds when I call instead of visiting. Even my food stamp interview this year is by phone. And the ADAPT meeting I had wanted to attend but knew I probably wouldn’t be able to attend (irony of ironies) due to my lack of easy commuting is now being held online.

It’s not just the commuting that has changed. Now, people are actually–for the most part, barring a few swaggering jackasses who think they can taunt a non-sapient virus–staying home when they’re sick, washing their hands, and not sharing water bottles and utensils. As I’ve mentioned before, I have asthma, and a cold can cause a flare-up, so I try to avoid them–something which, before coronavirus, got me branded a germophobe of the highest degree. Now that these behaviors are useful to the general, non-asthmatic public, they’re finally becoming the norm.

Isn’t that the way it always is? Disabled people can beg over and over for a simple accommodation. And we’re told, “No; that’s too much. No, that’s too difficult. We’d have to change things. We’d have to adjust everything. Be grateful for the curb cuts and the IEPs; aren’t we being saintly enough just tolerating your presence?”

And then, suddenly, non-disabled people find that something that disabled people are begging for is useful to them. They discover they don’t like climbing stairs; suddenly, elevators are everywhere. They like to watch TV with the sound off; now everything is closed-captioned. They want hands-free computing; voice recognition becomes a standard feature–not because disabled people desperately need it, but because non-disabled people find it useful to them.

I don’t want to disparage our allies. There are a good many decent people who think it’s ridiculous that some people are second-class citizens because we do things differently or lack some ability society has judged essential.

But society at large–both specific people and the culture we live in–seems to think that because something is not useful to most people, it must of necessity be an unreasonable demand to make. This is one of the foundations of ableism in our society, and it needs to be directly addressed. If we want our communities to become truly good, they simply cannot leave anyone behind.

,

Sam VargheseThe sound of silence

Some months, one has nothing to say.

The coronavirus pandemic may be a contributory factor. Or not.

One is really unclear about the reason.

,

Chaotic IdealismQuiet

It is March 27th of the Year of the Coronavirus, and we are holding on.

Upstairs, B puts her guinea pigs in their playpen and chops vegetables for them while they wheek impatiently. Her students wait on the other end of her laptop; she teaches them remotely. Last week, the fifth-graders were nervous and restless; this week, they have settled into new routines.

B and I go on walks in the sunshine when we can. I lecture about feline genetics. She talks about literature. I walk too fast, and get lost in our own neighborhood. She knows the way, from going jogging.

Downstairs, I make masks. Cincinnati isn’t yet in the full force of the storm, but we are worried our medical personnel will run out of personal protective equipment. We don’t know how much a homemade mask can do to help; there’s no science on it, just a lot of theories, some of which say, ominously, that they are worse than nothing.

But the CDC has recommended that if they run out of masks entirely, our health care workers can try using bandanas. Not if we can help it, they won’t. So we make mask covers, meant to go over a proper N95 mask, to extend its lifespan when they have to wear the same one day in and day out, as nurses are already doing in New York; and we make filter covers with a pocket for the hospital to put filter materials into; and we hope they will never be needed.

M is still going to work. He works for a construction company, and his boss at first said that he could work from home. But now M’s boss keeps sending him to work sites, into the customers’ homes. M is young and healthy; his boss is within ten years of retirement. “If he wants to catch the Boomer Remover, let him,” says M, cynical as always. Then he takes a load of thirty-one homemade masks to a nearby health clinic for me, and offers to let me dip into his pantry if my own supplies run out.

I’m reminded of Louisiana, after Hurricane Katrina. People help each other now as naturally as breathing; nothing else is conceivable. Greetings, from six feet away, are, “How are you holding up?”; and goodbyes are, “Stay healthy!” The social chaos of apocalypse has not come; our town is peaceful, quiet, and tense. It occurs to me that whenever I read about historical epidemics, there’s very little about what happens in the houses before the sickness comes, or to the ones where it never strikes at all.

This is what happens: We go on with our lives. M reminds himself not to touch his face as he drives down empty roads to another customer’s house. B and her children read a book about a 13-year-old Revolutionary War soldier. And I stay in my basement, making mask after mask after mask.